public inbox for llvm@lists.linux.dev
* [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
@ 2024-08-01 11:35 kernel test robot
  2024-08-01 12:47 ` Gustavo A. R. Silva
  0 siblings, 1 reply; 10+ messages in thread
From: kernel test robot @ 2024-08-01 11:35 UTC (permalink / raw)
  To: Gustavo A. R. Silva; +Cc: llvm, oe-kbuild-all, Gustavo A. R. Silva, LKML

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202408011956.wscyBwq6-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from drivers/infiniband/core/ib_core_uverbs.c:8:
   In file included from drivers/infiniband/core/uverbs.h:46:
   In file included from include/rdma/ib_verbs.h:15:
   In file included from include/linux/ethtool.h:18:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:17:
   In file included from include/linux/bvec.h:10:
   In file included from include/linux/highmem.h:10:
   In file included from include/linux/mm.h:2228:
   include/linux/vmstat.h:514:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
     514 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
         |                               ~~~~~~~~~~~ ^ ~~~
   In file included from drivers/infiniband/core/ib_core_uverbs.c:8:
   In file included from drivers/infiniband/core/uverbs.h:46:
   In file included from include/rdma/ib_verbs.h:15:
   In file included from include/linux/ethtool.h:18:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:17:
   In file included from include/linux/bvec.h:10:
   In file included from include/linux/highmem.h:12:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:14:
   In file included from arch/hexagon/include/asm/io.h:328:
   include/asm-generic/io.h:548:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     548 |         val = __raw_readb(PCI_IOBASE + addr);
         |                           ~~~~~~~~~~ ^
   include/asm-generic/io.h:561:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     561 |         val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
      37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
         |                                                   ^
   In file included from drivers/infiniband/core/ib_core_uverbs.c:8:
   In file included from drivers/infiniband/core/uverbs.h:46:
   In file included from include/rdma/ib_verbs.h:15:
   In file included from include/linux/ethtool.h:18:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:17:
   In file included from include/linux/bvec.h:10:
   In file included from include/linux/highmem.h:12:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:14:
   In file included from arch/hexagon/include/asm/io.h:328:
   include/asm-generic/io.h:574:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     574 |         val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
      35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
         |                                                   ^
   In file included from drivers/infiniband/core/ib_core_uverbs.c:8:
   In file included from drivers/infiniband/core/uverbs.h:46:
   In file included from include/rdma/ib_verbs.h:15:
   In file included from include/linux/ethtool.h:18:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:17:
   In file included from include/linux/bvec.h:10:
   In file included from include/linux/highmem.h:12:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:14:
   In file included from arch/hexagon/include/asm/io.h:328:
   include/asm-generic/io.h:585:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     585 |         __raw_writeb(value, PCI_IOBASE + addr);
         |                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:595:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     595 |         __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   include/asm-generic/io.h:605:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     605 |         __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   In file included from drivers/infiniband/core/ib_core_uverbs.c:8:
   In file included from drivers/infiniband/core/uverbs.h:49:
   In file included from include/rdma/uverbs_std_types.h:10:
>> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
     643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
         | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     644 |               "struct member likely outside of struct_group_tagged()");
         |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
      16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
         |                                 ^
   include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
      77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
         |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
      78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
         |                                                        ^~~~
   include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
     643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
         | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     644 |               "struct member likely outside of struct_group_tagged()");
         |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
      77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
         |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
      78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
         |                                                        ^~~~
   7 warnings and 1 error generated.


vim +643 include/rdma/uverbs_ioctl.h

   630	
   631	struct uverbs_attr_bundle {
   632		/* New members MUST be added within the struct_group() macro below. */
   633		struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
   634			struct ib_udata driver_udata;
   635			struct ib_udata ucore;
   636			struct ib_uverbs_file *ufile;
   637			struct ib_ucontext *context;
   638			struct ib_uobject *uobject;
   639			DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
   640		);
   641		struct uverbs_attr attrs[];
   642	};
 > 643	static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
   644		      "struct member likely outside of struct_group_tagged()");
   645	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 11:35 [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged() kernel test robot
@ 2024-08-01 12:47 ` Gustavo A. R. Silva
  2024-08-01 19:08   ` Nathan Chancellor
  0 siblings, 1 reply; 10+ messages in thread
From: Gustavo A. R. Silva @ 2024-08-01 12:47 UTC (permalink / raw)
  To: kernel test robot, Gustavo A. R. Silva; +Cc: llvm, oe-kbuild-all, LKML



On 01/08/24 05:35, kernel test robot wrote:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
> head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
> config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
> compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)


Clang 20.0.0?? (thinkingface)

--
Gustavo



* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 12:47 ` Gustavo A. R. Silva
@ 2024-08-01 19:08   ` Nathan Chancellor
  2024-08-01 20:17     ` Gustavo A. R. Silva
  0 siblings, 1 reply; 10+ messages in thread
From: Nathan Chancellor @ 2024-08-01 19:08 UTC (permalink / raw)
  To: Gustavo A. R. Silva
  Cc: kernel test robot, Gustavo A. R. Silva, llvm, oe-kbuild-all, LKML

On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> 
> 
> On 01/08/24 05:35, kernel test robot wrote:
> > tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
> > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
> > config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
> > compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
> 
> 
> Clang 20.0.0?? (thinkingface)

Indeed, Clang 19 branched and main is now 20 :)

https://github.com/llvm/llvm-project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe

Cheers,
Nathan

> --
> Gustavo
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 19:08   ` Nathan Chancellor
@ 2024-08-01 20:17     ` Gustavo A. R. Silva
  2024-08-01 22:14       ` Nathan Chancellor
  0 siblings, 1 reply; 10+ messages in thread
From: Gustavo A. R. Silva @ 2024-08-01 20:17 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: kernel test robot, Gustavo A. R. Silva, llvm, oe-kbuild-all, LKML



On 01/08/24 13:08, Nathan Chancellor wrote:
> On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
>>
>>
>> On 01/08/24 05:35, kernel test robot wrote:
>>> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
>>> head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
>>> commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
>>> config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
>>> compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
>>
>>
>> Clang 20.0.0?? (thinkingface)
> 
> Indeed, Clang 19 branched and main is now 20 :)
> 
> https://github.com/llvm/llvm-project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe

Yeah, but is that a stable release?

BTW, I don't see GCC reporting the same problem below:

>>>>> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
>>>        643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>            | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>        644 |               "struct member likely outside of struct_group_tagged()");
>>>            |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>      include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
>>>         16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
>>>            |                                 ^
>>>      include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
>>>         77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
>>>            |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>      include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
>>>         78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
>>>            |                                                        ^~~~
>>>      include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
>>>        643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>            | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>        644 |               "struct member likely outside of struct_group_tagged()");
>>>            |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>      include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
>>>         77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
>>>            |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>      include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
>>>         78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
>>>            |                                                        ^~~~
>>>      7 warnings and 1 error generated.
>>>
>>>
>>> vim +643 include/rdma/uverbs_ioctl.h
>>>
>>>      630	
>>>      631	struct uverbs_attr_bundle {
>>>      632		/* New members MUST be added within the struct_group() macro below. */
>>>      633		struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
>>>      634			struct ib_udata driver_udata;
>>>      635			struct ib_udata ucore;
>>>      636			struct ib_uverbs_file *ufile;
>>>      637			struct ib_ucontext *context;
>>>      638			struct ib_uobject *uobject;
>>>      639			DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
>>>      640		);
>>>      641		struct uverbs_attr attrs[];
>>>      642	};
>>>    > 643	static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>      644		      "struct member likely outside of struct_group_tagged()");
>>>      645	
>>>
>>

Thanks
--
Gustavo

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 20:17     ` Gustavo A. R. Silva
@ 2024-08-01 22:14       ` Nathan Chancellor
  2024-08-01 22:35         ` Gustavo A. R. Silva
  0 siblings, 1 reply; 10+ messages in thread
From: Nathan Chancellor @ 2024-08-01 22:14 UTC (permalink / raw)
  To: Gustavo A. R. Silva
  Cc: kernel test robot, Gustavo A. R. Silva, llvm, oe-kbuild-all, LKML,
	Brian Cain, linux-hexagon

On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
> 
> 
> On 01/08/24 13:08, Nathan Chancellor wrote:
> > On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> > > 
> > > 
> > > On 01/08/24 05:35, kernel test robot wrote:
> > > > tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
> > > > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > > > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
> > > > config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
> > > > compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
> > > 
> > > 
> > > Clang 20.0.0?? (thinkingface)
> > 
> > Indeed, Clang 19 branched and main is now 20 :)
> > 
> > https://github.com/llvm/llvm-project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
> 
> Yeah, but is that a stable release?

No, but the Intel folks have tested tip of tree LLVM against the kernel
for us for a few years now to try and catch issues such as this.

> BTW, I don't see GCC reporting the same problem below:

Hexagon does not have a GCC backend anymore so it is not going to be
possible to do an exact A/B comparison with this configuration but...

> > > > > > include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
> > > >        643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > >            | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >        644 |               "struct member likely outside of struct_group_tagged()");
> > > >            |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >      include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
> > > >         16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
> > > >            |                                 ^
> > > >      include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
> > > >         77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> > > >            |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >      include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
> > > >         78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> > > >            |                                                        ^~~~
> > > >      include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'

This seems to give some indication that there may be something
architecture-specific going on here, with padding maybe? I seem to recall
ARM OABI having something similar. Adding the Hexagon folks/list to get
some more clarification. Full warning and context:

https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/

The problematic section preprocessed since sometimes the macros
obfuscate things:

struct uverbs_attr_bundle {
        union {
                struct {
                        struct ib_udata driver_udata;
                        struct ib_udata ucore;
                        struct ib_uverbs_file *ufile;
                        struct ib_ucontext *context;
                        struct ib_uobject *uobject;
                        unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
                                                     ((sizeof(long) * 8)) - 1) /
                                                    ((sizeof(long) * 8)))];
                };
                struct uverbs_attr_bundle_hdr {
                        struct ib_udata driver_udata;
                        struct ib_udata ucore;
                        struct ib_uverbs_file *ufile;
                        struct ib_ucontext *context;
                        struct ib_uobject *uobject;
                        unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
                                                     ((sizeof(long) * 8)) - 1) /
                                                    ((sizeof(long) * 8)))];
                } hdr;
        };

        struct uverbs_attr attrs[];
};
_Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
                       sizeof(struct uverbs_attr_bundle_hdr),
               "struct member likely outside of struct_group_tagged()");

FWIW, I see this with all versions of Clang that the kernel supports
with this configuration.

Cheers,
Nathan

> > > >        643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > >            | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >        644 |               "struct member likely outside of struct_group_tagged()");
> > > >            |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >      include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
> > > >         77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> > > >            |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >      include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
> > > >         78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> > > >            |                                                        ^~~~
> > > >      7 warnings and 1 error generated.
> > > > 
> > > > 
> > > > vim +643 include/rdma/uverbs_ioctl.h
> > > > 
> > > >      630	
> > > >      631	struct uverbs_attr_bundle {
> > > >      632		/* New members MUST be added within the struct_group() macro below. */
> > > >      633		struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
> > > >      634			struct ib_udata driver_udata;
> > > >      635			struct ib_udata ucore;
> > > >      636			struct ib_uverbs_file *ufile;
> > > >      637			struct ib_ucontext *context;
> > > >      638			struct ib_uobject *uobject;
> > > >      639			DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
> > > >      640		);
> > > >      641		struct uverbs_attr attrs[];
> > > >      642	};
> > > >    > 643	static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > >      644		      "struct member likely outside of struct_group_tagged()");
> > > >      645	
> > > > 
> > > 
> 
> Thanks
> --
> Gustavo

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 22:14       ` Nathan Chancellor
@ 2024-08-01 22:35         ` Gustavo A. R. Silva
  2024-08-02 22:19           ` Nathan Chancellor
  0 siblings, 1 reply; 10+ messages in thread
From: Gustavo A. R. Silva @ 2024-08-01 22:35 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: kernel test robot, Gustavo A. R. Silva, llvm, oe-kbuild-all, LKML,
	Brian Cain, linux-hexagon



On 01/08/24 16:14, Nathan Chancellor wrote:
> On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
>>
>>
>> On 01/08/24 13:08, Nathan Chancellor wrote:
>>> On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
>>>>
>>>>
>>>> On 01/08/24 05:35, kernel test robot wrote:
>>>>> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
>>>>> head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
>>>>> commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
>>>>> config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
>>>>> compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
>>>>
>>>>
>>>> Clang 20.0.0?? (thinkingface)
>>>
>>> Indeed, Clang 19 branched and main is now 20 :)
>>>
>>> https://github.com/llvm/llvm-project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
>>
>> Yeah, but is that a stable release?
> 
> No, but the Intel folks have tested tip of tree LLVM against the kernel
> for us for a few years now to try and catch issues such as this.

Oh, I see, fine. :)

> 
>> BTW, I don't see GCC reporting the same problem below:
> 
> Hexagon does not have a GCC backend anymore so it is not going to be
> possible to do an exact A/B comparison with this configuration but...
> 
>>>>>>> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
>>>>>         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>>>             | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>         644 |               "struct member likely outside of struct_group_tagged()");
>>>>>             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>       include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
>>>>>          16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
>>>>>             |                                 ^
>>>>>       include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
>>>>>          77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
>>>>>             |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>       include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
>>>>>          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
>>>>>             |                                                        ^~~~
>>>>>       include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
> 
> This seems to give some indication that there may be something
> architecture-specific going on here, with padding maybe? I seem to recall
> ARM OABI having something similar. Adding the Hexagon folks/list to get
> some more clarification. Full warning and context:
> 
> https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/
> 
> The problematic section preprocessed since sometimes the macros
> obfuscate things:
> 
> struct uverbs_attr_bundle {
>          union {
>                  struct {
>                          struct ib_udata driver_udata;
>                          struct ib_udata ucore;
>                          struct ib_uverbs_file *ufile;
>                          struct ib_ucontext *context;
>                          struct ib_uobject *uobject;
>                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
>                                                       ((sizeof(long) * 8)) - 1) /
>                                                      ((sizeof(long) * 8)))];
>                  };
>                  struct uverbs_attr_bundle_hdr {
>                          struct ib_udata driver_udata;
>                          struct ib_udata ucore;
>                          struct ib_uverbs_file *ufile;
>                          struct ib_ucontext *context;
>                          struct ib_uobject *uobject;
>                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
>                                                       ((sizeof(long) * 8)) - 1) /
>                                                      ((sizeof(long) * 8)))];
>                  } hdr;
>          };
> 
>          struct uverbs_attr attrs[];
> };
> _Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
>                         sizeof(struct uverbs_attr_bundle_hdr),
>                 "struct member likely outside of struct_group_tagged()");
> 
> FWIW, I see this with all versions of Clang that the kernel supports
> with this configuration.

I don't have access to a Clang compiler right now; I wonder if you could
help me get the output of this command:

pahole -C uverbs_attr_bundle drivers/infiniband/core/rdma_core.o

Thanks in advance!
-Gustavo

> 
> Cheers,
> Nathan
> 
>>>>>         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>>>             | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>         644 |               "struct member likely outside of struct_group_tagged()");
>>>>>             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>       include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
>>>>>          77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
>>>>>             |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>       include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
>>>>>          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
>>>>>             |                                                        ^~~~
>>>>>       7 warnings and 1 error generated.
>>>>>
>>>>>
>>>>> vim +643 include/rdma/uverbs_ioctl.h
>>>>>
>>>>>       630	
>>>>>       631	struct uverbs_attr_bundle {
>>>>>       632		/* New members MUST be added within the struct_group() macro below. */
>>>>>       633		struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
>>>>>       634			struct ib_udata driver_udata;
>>>>>       635			struct ib_udata ucore;
>>>>>       636			struct ib_uverbs_file *ufile;
>>>>>       637			struct ib_ucontext *context;
>>>>>       638			struct ib_uobject *uobject;
>>>>>       639			DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
>>>>>       640		);
>>>>>       641		struct uverbs_attr attrs[];
>>>>>       642	};
>>>>>     > 643	static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
>>>>>       644		      "struct member likely outside of struct_group_tagged()");
>>>>>       645	
>>>>>
>>>>
>>
>> Thanks
>> --
>> Gustavo

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  2024-08-01 22:35         ` Gustavo A. R. Silva
@ 2024-08-02 22:19           ` Nathan Chancellor
  2024-08-06 15:36             ` [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb Brian Cain
  0 siblings, 1 reply; 10+ messages in thread
From: Nathan Chancellor @ 2024-08-02 22:19 UTC (permalink / raw)
  To: Gustavo A. R. Silva
  Cc: kernel test robot, Gustavo A. R. Silva, llvm, oe-kbuild-all, LKML,
	Brian Cain, linux-hexagon

On Thu, Aug 01, 2024 at 04:35:59PM -0600, Gustavo A. R. Silva wrote:
> 
> 
> On 01/08/24 16:14, Nathan Chancellor wrote:
> > On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
> > > 
> > > 
> > > On 01/08/24 13:08, Nathan Chancellor wrote:
> > > > On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> > > > > 
> > > > > 
> > > > > On 01/08/24 05:35, kernel test robot wrote:
> > > > > > tree:   https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git testing/wfamnae-next20240729-cbc-2
> > > > > > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > > > > > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18] RDMA/uverbs: Use static_assert() to check struct sizes
> > > > > > config: hexagon-randconfig-001-20240801 (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-lkp@intel.com/config)
> > > > > > compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 430b90f04533b099d788db2668176038be38c53b)
> > > > > 
> > > > > 
> > > > > Clang 20.0.0?? (thinkingface)
> > > > 
> > > > Indeed, Clang 19 branched and main is now 20 :)
> > > > 
> > > > https://github.com/llvm/llvm-project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
> > > 
> > > Yeah, but is that a stable release?
> > 
> > No, but the Intel folks have tested tip of tree LLVM against the kernel
> > for us for a few years now to try and catch issues such as this.
> 
> Oh, I see, fine. :)
> 
> > 
> > > BTW, I don't see GCC reporting the same problem below:
> > 
> > Hexagon does not have a GCC backend anymore so it is not going to be
> > possible to do an exact A/B comparison with this configuration but...
> > 
> > > > > > > > include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
> > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > > > >             | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >         644 |               "struct member likely outside of struct_group_tagged()");
> > > > > >             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >       include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
> > > > > >          16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
> > > > > >             |                                 ^
> > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
> > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> > > > > >             |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
> > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> > > > > >             |                                                        ^~~~
> > > > > >       include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
> > 
> > This seems to give some indication that there may be something
> > architecture-specific going on here, with padding maybe? I seem to recall
> > ARM OABI having something similar. Adding the Hexagon folks/list to get
> > some more clarification. Full warning and context:
> > 
> > https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/
> > 
> > The problematic section preprocessed since sometimes the macros
> > obfuscate things:
> > 
> > struct uverbs_attr_bundle {
> >          union {
> >                  struct {
> >                          struct ib_udata driver_udata;
> >                          struct ib_udata ucore;
> >                          struct ib_uverbs_file *ufile;
> >                          struct ib_ucontext *context;
> >                          struct ib_uobject *uobject;
> >                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
> >                                                       ((sizeof(long) * 8)) - 1) /
> >                                                      ((sizeof(long) * 8)))];
> >                  };
> >                  struct uverbs_attr_bundle_hdr {
> >                          struct ib_udata driver_udata;
> >                          struct ib_udata ucore;
> >                          struct ib_uverbs_file *ufile;
> >                          struct ib_ucontext *context;
> >                          struct ib_uobject *uobject;
> >                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) +
> >                                                       ((sizeof(long) * 8)) - 1) /
> >                                                      ((sizeof(long) * 8)))];
> >                  } hdr;
> >          };
> > 
> >          struct uverbs_attr attrs[];
> > };
> > _Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> >                         sizeof(struct uverbs_attr_bundle_hdr),
> >                 "struct member likely outside of struct_group_tagged()");
> > 
> > FWIW, I see this with all versions of Clang that the kernel supports
> > with this configuration.
> 
> I don't have access to a Clang compiler right now; I wonder if you could
> help me get the output of this command:
> 
> pahole -C uverbs_attr_bundle drivers/infiniband/core/rdma_core.o

We disabled CONFIG_DEBUG_INFO_BTF for Hexagon because elfutils does not
support Hexagon relocations but this is built-in for this configuration
so I removed that limitation and ended up with:

$ pahole -C uverbs_attr_bundle vmlinux
struct uverbs_attr_bundle {
        union {
                struct {
                        struct ib_udata driver_udata;    /*     0    16 */
                        struct ib_udata ucore;           /*    16    16 */
                        struct ib_uverbs_file * ufile;   /*    32     4 */
                        struct ib_ucontext * context;    /*    36     4 */
                        struct ib_uobject * uobject;     /*    40     4 */
                        unsigned long attr_present[2];   /*    44     8 */
                };                                       /*     0    52 */
                struct uverbs_attr_bundle_hdr hdr;       /*     0    52 */
        };                                               /*     0    52 */

        /* XXX 4 bytes hole, try to pack */

        struct uverbs_attr         attrs[];              /*    56     0 */

        /* size: 56, cachelines: 1, members: 2 */
        /* sum members: 52, holes: 1, sum holes: 4 */
        /* last cacheline: 56 bytes */
};

If you want any other information or want me to test anything, I am more
than happy to do so.

Cheers,
Nathan

> > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > > > >             | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >         644 |               "struct member likely outside of struct_group_tagged()");
> > > > > >             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
> > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> > > > > >             |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
> > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> > > > > >             |                                                        ^~~~
> > > > > >       7 warnings and 1 error generated.
> > > > > > 
> > > > > > 
> > > > > > vim +643 include/rdma/uverbs_ioctl.h
> > > > > > 
> > > > > >       630	
> > > > > >       631	struct uverbs_attr_bundle {
> > > > > >       632		/* New members MUST be added within the struct_group() macro below. */
> > > > > >       633		struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
> > > > > >       634			struct ib_udata driver_udata;
> > > > > >       635			struct ib_udata ucore;
> > > > > >       636			struct ib_uverbs_file *ufile;
> > > > > >       637			struct ib_ucontext *context;
> > > > > >       638			struct ib_uobject *uobject;
> > > > > >       639			DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
> > > > > >       640		);
> > > > > >       641		struct uverbs_attr attrs[];
> > > > > >       642	};
> > > > > >     > 643	static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > > > >       644		      "struct member likely outside of struct_group_tagged()");
> > > > > >       645	
> > > > > > 
> > > > > 
> > > 
> > > Thanks
> > > --
> > > Gustavo

^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
  2024-08-02 22:19           ` Nathan Chancellor
@ 2024-08-06 15:36             ` Brian Cain
  2024-08-14  3:27               ` Brian Cain
  0 siblings, 1 reply; 10+ messages in thread
From: Brian Cain @ 2024-08-06 15:36 UTC (permalink / raw)
  To: Nathan Chancellor, Gustavo A. R. Silva
  Cc: kernel test robot, Gustavo A. R. Silva, llvm@lists.linux.dev,
	oe-kbuild-all@lists.linux.dev, LKML,
	linux-hexagon@vger.kernel.org, Sid Manning, Sundeep Kushwaha



> -----Original Message-----
> From: Nathan Chancellor <nathan@kernel.org>
> Sent: Friday, August 2, 2024 5:20 PM
> To: Gustavo A. R. Silva <gustavo@embeddedor.com>
> Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva
> <gustavoars@kernel.org>; llvm@lists.linux.dev; oe-kbuild-all@lists.linux.dev;
> LKML <linux-kernel@vger.kernel.org>; Brian Cain <bcain@quicinc.com>; linux-
> hexagon@vger.kernel.org
> Subject: Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18]
> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to
> requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
> 
> WARNING: This email originated from outside of Qualcomm. Please be wary of
> any links or attachments, and do not enable macros.
> 
> On Thu, Aug 01, 2024 at 04:35:59PM -0600, Gustavo A. R. Silva wrote:
> >
> >
> > On 01/08/24 16:14, Nathan Chancellor wrote:
> > > On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
> > > >
> > > >
> > > > On 01/08/24 13:08, Nathan Chancellor wrote:
> > > > > On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> > > > > >
> > > > > >
> > > > > > On 01/08/24 05:35, kernel test robot wrote:
> > > > > > > tree:
> https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git
> testing/wfamnae-next20240729-cbc-2
> > > > > > > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > > > > > > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18]
> RDMA/uverbs: Use static_assert() to check struct sizes
> > > > > > > config: hexagon-randconfig-001-20240801
> (https://download.01.org/0day-ci/archive/20240801/202408011956.wscyBwq6-
> lkp@intel.com/config)
> > > > > > > compiler: clang version 20.0.0git (https://github.com/llvm/llvm-
> project 430b90f04533b099d788db2668176038be38c53b)
> > > > > >
> > > > > >
> > > > > > Clang 20.0.0?? (thinkingface)
> > > > >
> > > > > Indeed, Clang 19 branched and main is now 20 :)
> > > > >
> > > > > https://github.com/llvm/llvm-
> project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
> > > >
> > > > Yeah, but is that a stable release?
> > >
> > > No, but the Intel folks have tested tip of tree LLVM against the kernel
> > > for us for a few years now to try and catch issues such as this.
> >
> > Oh, I see, fine. :)
> >
> > >
> > > > BTW, I don't see GCC reporting the same problem below:
> > >
> > > Hexagon does not have a GCC backend anymore so it is not going to be
> > > possible to do an exact A/B comparison with this configuration but...
> > >
> > > > > > > > > include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed
> due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of
> struct_group_tagged()
> > > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > >             |
> ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >         644 |               "struct member likely outside of
> struct_group_tagged()");
> > > > > > >             |
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >       include/linux/stddef.h:16:32: note: expanded from macro
> 'offsetof'
> > > > > > >          16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE,
> MEMBER)
> > > > > > >             |                                 ^
> > > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro
> 'static_assert'
> > > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr,
> ##__VA_ARGS__, #expr)
> > > > > > >             |
> ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro
> '__static_assert'
> > > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr,
> msg)
> > > > > > >             |                                                        ^~~~
> > > > > > >       include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates
> to '56 == 52'
> > >
> > > This seems to give some indication that perhaps there may be some
> > > architecture specific here with padding maybe? I seem to recall ARM OABI
> > > having something similar. Adding the Hexagon folks/list to get some more
> > > clarification. Full warning and context:
> > >
> > > https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/
> > >

There might be Hexagon-specific padding requirements, but none that I've stumbled across before.  I've added Sundeep from the compiler team, who may be able to help.

> > > The problematic section preprocessed since sometimes the macros
> > > obfuscate things:
> > >
> > > struct uverbs_attr_bundle {
> > >          union {
> > >                  struct {
> > >                          struct ib_udata driver_udata;
> > >                          struct ib_udata ucore;
> > >                          struct ib_uverbs_file *ufile;
> > >                          struct ib_ucontext *context;
> > >                          struct ib_uobject *uobject;
> > >                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> +
> > >                                                       ((sizeof(long) * 8)) - 1) /
> > >                                                      ((sizeof(long) * 8)))];
> > >                  };
> > >                  struct uverbs_attr_bundle_hdr {
> > >                          struct ib_udata driver_udata;
> > >                          struct ib_udata ucore;
> > >                          struct ib_uverbs_file *ufile;
> > >                          struct ib_ucontext *context;
> > >                          struct ib_uobject *uobject;
> > >                          unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> +
> > >                                                       ((sizeof(long) * 8)) - 1) /
> > >                                                      ((sizeof(long) * 8)))];
> > >                  } hdr;
> > >          };
> > >
> > >          struct uverbs_attr attrs[];
> > > };
> > > _Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> > >                         sizeof(struct uverbs_attr_bundle_hdr),
> > >                 "struct member likely outside of struct_group_tagged()");
> > >
> > > FWIW, I see this with all versions of Clang that the kernel supports
> > > with this configuration.
> >
> > I don't have access to a Clang compiler right now; I wonder if you could
> > help me get the output of this command:
> >
> > pahole -C uverbs_attr_bundle drivers/infiniband/core/rdma_core.o
> 
> We disabled CONFIG_DEBUG_INFO_BTF for Hexagon because elfutils does not
> support Hexagon relocations but this is built-in for this configuration
> so I removed that limitation and ended up with:
> 
> $ pahole -C uverbs_attr_bundle vmlinux
> struct uverbs_attr_bundle {
>         union {
>                 struct {
>                         struct ib_udata driver_udata;    /*     0    16 */
>                         struct ib_udata ucore;           /*    16    16 */
>                         struct ib_uverbs_file * ufile;   /*    32     4 */
>                         struct ib_ucontext * context;    /*    36     4 */
>                         struct ib_uobject * uobject;     /*    40     4 */
>                         unsigned long attr_present[2];   /*    44     8 */
>                 };                                       /*     0    52 */
>                 struct uverbs_attr_bundle_hdr hdr;       /*     0    52 */
>         };                                               /*     0    52 */
> 
>         /* XXX 4 bytes hole, try to pack */
>         union {
>                 struct {
>                         struct ib_udata    driver_udata;         /*     0    16 */
>                         struct ib_udata    ucore;                /*    16    16 */
>                         struct ib_uverbs_file * ufile;           /*    32     4 */
>                         struct ib_ucontext * context;            /*    36     4 */
>                         struct ib_uobject * uobject;             /*    40     4 */
>                         unsigned long      attr_present[2];      /*    44     8 */
>                 };                                               /*     0    52 */
>                 struct uverbs_attr_bundle_hdr hdr;               /*     0    52 */
>         };
> 
> 
>         struct uverbs_attr         attrs[];              /*    56     0 */
> 
>         /* size: 56, cachelines: 1, members: 2 */
>         /* sum members: 52, holes: 1, sum holes: 4 */
>         /* last cacheline: 56 bytes */
> };
> 
> If you want any other information or want me to test anything, I am more
> than happy to do so.
> 
> Cheers,
> Nathan
> 
> > > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > >             |
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >         644 |               "struct member likely outside of
> struct_group_tagged()");
> > > > > > >             |
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro
> 'static_assert'
> > > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr,
> ##__VA_ARGS__, #expr)
> > > > > > >             |
> ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro
> '__static_assert'
> > > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr,
> msg)
> > > > > > >             |                                                        ^~~~
> > > > > > >       7 warnings and 1 error generated.
> > > > > > >
> > > > > > >
> > > > > > > vim +643 include/rdma/uverbs_ioctl.h
> > > > > > >
> > > > > > >       630
> > > > > > >       631   struct uverbs_attr_bundle {
> > > > > > >       632           /* New members MUST be added within the
> struct_group() macro below. */
> > > > > > >       633           struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
> > > > > > >       634                   struct ib_udata driver_udata;
> > > > > > >       635                   struct ib_udata ucore;
> > > > > > >       636                   struct ib_uverbs_file *ufile;
> > > > > > >       637                   struct ib_ucontext *context;
> > > > > > >       638                   struct ib_uobject *uobject;
> > > > > > >       639                   DECLARE_BITMAP(attr_present,
> UVERBS_API_ATTR_BKEY_LEN);
> > > > > > >       640           );
> > > > > > >       641           struct uverbs_attr attrs[];
> > > > > > >       642   };
> > > > > > >     > 643   static_assert(offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > >       644                 "struct member likely outside of
> struct_group_tagged()");
> > > > > > >       645
> > > > > > >
> > > > > >
> > > >
> > > > Thanks
> > > > --
> > > > Gustavo

^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
  2024-08-06 15:36             ` [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb Brian Cain
@ 2024-08-14  3:27               ` Brian Cain
  2024-08-14 16:30                 ` Steven Walk
  0 siblings, 1 reply; 10+ messages in thread
From: Brian Cain @ 2024-08-14  3:27 UTC (permalink / raw)
  To: Brian Cain, Nathan Chancellor, Gustavo A. R. Silva,
	Steven Walk (QUIC)
  Cc: kernel test robot, Gustavo A. R. Silva, llvm@lists.linux.dev,
	oe-kbuild-all@lists.linux.dev, LKML,
	linux-hexagon@vger.kernel.org, Sid Manning, Sundeep Kushwaha

[-- Attachment #1: Type: text/plain, Size: 18072 bytes --]



> -----Original Message-----
> From: Brian Cain <bcain@quicinc.com>
> Sent: Tuesday, August 6, 2024 10:36 AM
> To: Nathan Chancellor <nathan@kernel.org>; Gustavo A. R. Silva
> <gustavo@embeddedor.com>
> Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva
> <gustavoars@kernel.org>; llvm@lists.linux.dev; oe-kbuild-all@lists.linux.dev;
> LKML <linux-kernel@vger.kernel.org>; linux-hexagon@vger.kernel.org; Sid
> Manning <sidneym@quicinc.com>; Sundeep Kushwaha
> <sundeepk@quicinc.com>
> Subject: RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18]
> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to
> requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
> 
> > -----Original Message-----
> > From: Nathan Chancellor <nathan@kernel.org>
> > Sent: Friday, August 2, 2024 5:20 PM
> > To: Gustavo A. R. Silva <gustavo@embeddedor.com>
> > Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva
> > <gustavoars@kernel.org>; llvm@lists.linux.dev; oe-kbuild-all@lists.linux.dev;
> > LKML <linux-kernel@vger.kernel.org>; Brian Cain <bcain@quicinc.com>; linux-
> > hexagon@vger.kernel.org
> > Subject: Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18]
> > include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to
> > requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> > sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
> >
> > On Thu, Aug 01, 2024 at 04:35:59PM -0600, Gustavo A. R. Silva wrote:
> > >
> > >
> > > On 01/08/24 16:14, Nathan Chancellor wrote:
> > > > On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
> > > > >
> > > > >
> > > > > On 01/08/24 13:08, Nathan Chancellor wrote:
> > > > > > On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 01/08/24 05:35, kernel test robot wrote:
> > > > > > > > tree:
> > https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git
> > testing/wfamnae-next20240729-cbc-2
> > > > > > > > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > > > > > > > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18]
> > RDMA/uverbs: Use static_assert() to check struct sizes
> > > > > > > > config: hexagon-randconfig-001-20240801
> > (https://download.01.org/0day-
> ci/archive/20240801/202408011956.wscyBwq6-
> > lkp@intel.com/config)
> > > > > > > > compiler: clang version 20.0.0git (https://github.com/llvm/llvm-
> > project 430b90f04533b099d788db2668176038be38c53b)
> > > > > > >
> > > > > > >
> > > > > > > Clang 20.0.0?? (thinkingface)
> > > > > >
> > > > > > Indeed, Clang 19 branched and main is now 20 :)
> > > > > >
> > > > > > https://github.com/llvm/llvm-
> > project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
> > > > >
> > > > > Yeah, but is that a stable release?
> > > >
> > > > No, but the Intel folks have tested tip of tree LLVM against the kernel
> > > > for us for a few years now to try and catch issues such as this.
> > >
> > > Oh, I see, fine. :)
> > >
> > > >
> > > > > BTW, I don't see GCC reporting the same problem below:
> > > >
> > > > Hexagon does not have a GCC backend anymore so it is not going to be
> > > > possible to do an exact A/B comparison with this configuration but...
> > > >
> > > > > > > > > > include/rdma/uverbs_ioctl.h:643:15: error: static assertion
> failed
> > due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> > sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of
> > struct_group_tagged()
> > > > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs)
> ==
> > sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >             |
> >
> ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >         644 |               "struct member likely outside of
> > struct_group_tagged()");
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/stddef.h:16:32: note: expanded from macro
> > 'offsetof'
> > > > > > > >          16 | #define offsetof(TYPE, MEMBER)
> __builtin_offsetof(TYPE,
> > MEMBER)
> > > > > > > >             |                                 ^
> > > > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro
> > 'static_assert'
> > > > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr,
> > ##__VA_ARGS__, #expr)
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro
> > '__static_assert'
> > > > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr,
> > msg)
> > > > > > > >             |                                                        ^~~~
> > > > > > > >       include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates
> > to '56 == 52'
> > > >
> > > > This seems to give some indication that perhaps there may be some
> > > > architecture specific here with padding maybe? I seem to recall ARM OABI
> > > > having something similar. Adding the Hexagon folks/list to get some more
> > > > clarification. Full warning and context:
> > > >
> > > > https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/
> > > >
> 
> There might be hexagon-specific padding requirements, but not ones that I've
> stumbled across before.  I've added Sundeep from the compiler team who may
> be able to help.

Steve suggested I try dumping the record layouts using clang's "-fdump-record-layouts".  I did so using the clang 18.1.8 binary from https://github.com/llvm/llvm-project/releases/tag/llvmorg-18.1.8 -- I thought it was reasonable to use this older release because it still triggers the static assertion failure.  But I can repeat it with clang-20 built from the bot's cited commit if preferred.

Steve - can you give any advice about the compiler's behavior wrt this struct layout and the assertion?

I've attached the unabridged output from clang with "-fdump-record-layouts".  Here's an excerpt:

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle_hdr
         0 |   struct ib_udata driver_udata
         0 |     const void * inbuf
         4 |     void * outbuf
         8 |     size_t inlen
        12 |     size_t outlen
        16 |   struct ib_udata ucore
        16 |     const void * inbuf
        20 |     void * outbuf
        24 |     size_t inlen
        28 |     size_t outlen
        32 |   struct ib_uverbs_file * ufile
        36 |   struct ib_ucontext * context
        40 |   struct ib_uobject * uobject
        44 |   unsigned long[2] attr_present
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | union uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2)
         0 |   struct uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2) 
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
         0 |   struct uverbs_attr_bundle_hdr hdr
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
           | [sizeof=52, align=4]

In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:49:
In file included from ../include/rdma/uverbs_std_types.h:10:
../include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
   16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
      |                                 ^
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~
../include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~

I've also attached the "clang -cc1" invocation and preprocessed C output for reference.

> > > > The problematic section preprocessed since sometimes the macros
> > > > obfuscate things:
> > > >
> > > > struct uverbs_attr_bundle {
> > > >          union {
> > > >                  struct {
> > > >                          struct ib_udata driver_udata;
> > > >                          struct ib_udata ucore;
> > > >                          struct ib_uverbs_file *ufile;
> > > >                          struct ib_ucontext *context;
> > > >                          struct ib_uobject *uobject;
> > > >                          unsigned long
> attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> > +
> > > >                                                       ((sizeof(long) * 8)) - 1) /
> > > >                                                      ((sizeof(long) * 8)))];
> > > >                  };
> > > >                  struct uverbs_attr_bundle_hdr {
> > > >                          struct ib_udata driver_udata;
> > > >                          struct ib_udata ucore;
> > > >                          struct ib_uverbs_file *ufile;
> > > >                          struct ib_ucontext *context;
> > > >                          struct ib_uobject *uobject;
> > > >                          unsigned long
> attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> > +
> > > >                                                       ((sizeof(long) * 8)) - 1) /
> > > >                                                      ((sizeof(long) * 8)))];
> > > >                  } hdr;
> > > >          };
> > > >
> > > >          struct uverbs_attr attrs[];
> > > > };
> > > > _Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> > > >                         sizeof(struct uverbs_attr_bundle_hdr),
> > > >                 "struct member likely outside of struct_group_tagged()");
> > > >
> > > > FWIW, I see this with all versions of Clang that the kernel supports
> > > > with this configuration.
> > >
> > > I don't have access to a Clang compiler right now; I wonder if you could
> > > help me get the output of this command:
> > >
> > > pahole -C uverbs_attr_bundle drivers/infiniband/core/rdma_core.o
> >
> > We disabled CONFIG_DEBUG_INFO_BTF for Hexagon because elfutils does
> not
> > support Hexagon relocations but this is built-in for this configuration
> > so I removed that limitation and ended up with:
> >
> > $ pahole -C uverbs_attr_bundle vmlinux
> > struct uverbs_attr_bundle {
> >         union {
> >                 struct {
> >                         struct ib_udata driver_udata;    /*     0    16 */
> >                         struct ib_udata ucore;           /*    16    16 */
> >                         struct ib_uverbs_file * ufile;   /*    32     4 */
> >                         struct ib_ucontext * context;    /*    36     4 */
> >                         struct ib_uobject * uobject;     /*    40     4 */
> >                         unsigned long attr_present[2];   /*    44     8 */
> >                 };                                       /*     0    52 */
> >                 struct uverbs_attr_bundle_hdr hdr;       /*     0    52 */
> >         };                                               /*     0    52 */
> >
> >         /* XXX 4 bytes hole, try to pack */
> >         union {
> >                 struct {
> >                         struct ib_udata    driver_udata;         /*     0    16 */
> >                         struct ib_udata    ucore;                /*    16    16 */
> >                         struct ib_uverbs_file * ufile;           /*    32     4 */
> >                         struct ib_ucontext * context;            /*    36     4 */
> >                         struct ib_uobject * uobject;             /*    40     4 */
> >                         unsigned long      attr_present[2];      /*    44     8 */
> >                 };                                               /*     0    52 */
> >                 struct uverbs_attr_bundle_hdr hdr;               /*     0    52 */
> >         };
> >
> >
> >         struct uverbs_attr         attrs[];              /*    56     0 */
> >
> >         /* size: 56, cachelines: 1, members: 2 */
> >         /* sum members: 52, holes: 1, sum holes: 4 */
> >         /* last cacheline: 56 bytes */
> > };
> >
> > If you want any other information or want me to test anything, I am more
> > than happy to do so.
> >
> > Cheers,
> > Nathan
> >
> > > > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs)
> ==
> > sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >             |
> >
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >         644 |               "struct member likely outside of
> > struct_group_tagged()");
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro
> > 'static_assert'
> > > > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr,
> > ##__VA_ARGS__, #expr)
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro
> > '__static_assert'
> > > > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr,
> > msg)
> > > > > > > >             |                                                        ^~~~
> > > > > > > >       7 warnings and 1 error generated.
> > > > > > > >
> > > > > > > >
> > > > > > > > vim +643 include/rdma/uverbs_ioctl.h
> > > > > > > >
> > > > > > > >       630
> > > > > > > >       631   struct uverbs_attr_bundle {
> > > > > > > >       632           /* New members MUST be added within the
> > struct_group() macro below. */
> > > > > > > >       633           struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
> > > > > > > >       634                   struct ib_udata driver_udata;
> > > > > > > >       635                   struct ib_udata ucore;
> > > > > > > >       636                   struct ib_uverbs_file *ufile;
> > > > > > > >       637                   struct ib_ucontext *context;
> > > > > > > >       638                   struct ib_uobject *uobject;
> > > > > > > >       639                   DECLARE_BITMAP(attr_present,
> > UVERBS_API_ATTR_BKEY_LEN);
> > > > > > > >       640           );
> > > > > > > >       641           struct uverbs_attr attrs[];
> > > > > > > >       642   };
> > > > > > > >     > 643   static_assert(offsetof(struct uverbs_attr_bundle, attrs) ==
> > sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >       644                 "struct member likely outside of
> > struct_group_tagged()");
> > > > > > > >       645
> > > > > > > >
> > > > > > >
> > > > >
> > > > > Thanks
> > > > > --
> > > > > Gustavo


[-- Attachment #2: ib_core_uverbs.ii --]
[-- Type: application/octet-stream, Size: 4035986 bytes --]

# 1 "../drivers/infiniband/core/ib_core_uverbs.c"
# 1 "<built-in>" 1
# 1 "<built-in>" 3
# 357 "<built-in>" 3
# 1 "<command line>" 1
# 1 "<built-in>" 2
# 1 "./../include/linux/compiler-version.h" 1
# 2 "<built-in>" 2
# 1 "./../include/linux/kconfig.h" 1




# 1 "./include/generated/autoconf.h" 1
# 6 "./../include/linux/kconfig.h" 2
# 3 "<built-in>" 2
# 1 "./../include/linux/compiler_types.h" 1
# 89 "./../include/linux/compiler_types.h"
# 1 "../include/linux/compiler_attributes.h" 1
# 90 "./../include/linux/compiler_types.h" 2
# 171 "./../include/linux/compiler_types.h"
# 1 "../include/linux/compiler-clang.h" 1
# 172 "./../include/linux/compiler_types.h" 2
# 191 "./../include/linux/compiler_types.h"
struct ftrace_branch_data {
 const char *func;
 const char *file;
 unsigned line;
 union {
  struct {
   unsigned long correct;
   unsigned long incorrect;
  };
  struct {
   unsigned long miss;
   unsigned long hit;
  };
  unsigned long miss_hit[2];
 };
};

struct ftrace_likely_data {
 struct ftrace_branch_data data;
 unsigned long constant;
};
# 4 "<built-in>" 2
# 1 "../drivers/infiniband/core/ib_core_uverbs.c" 2






# 1 "../include/linux/xarray.h" 1
# 12 "../include/linux/xarray.h"
# 1 "../include/linux/bitmap.h" 1






# 1 "../include/linux/align.h" 1




# 1 "../include/linux/const.h" 1



# 1 "../include/vdso/const.h" 1




# 1 "../include/uapi/linux/const.h" 1
# 6 "../include/vdso/const.h" 2
# 5 "../include/linux/const.h" 2
# 6 "../include/linux/align.h" 2
# 8 "../include/linux/bitmap.h" 2
# 1 "../include/linux/bitops.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 1 "../include/uapi/asm-generic/types.h" 1






# 1 "../include/asm-generic/int-ll64.h" 1
# 11 "../include/asm-generic/int-ll64.h"
# 1 "../include/uapi/asm-generic/int-ll64.h" 1
# 12 "../include/uapi/asm-generic/int-ll64.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 1 "../include/asm-generic/bitsperlong.h" 1




# 1 "../include/uapi/asm-generic/bitsperlong.h" 1
# 6 "../include/asm-generic/bitsperlong.h" 2
# 2 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 2
# 13 "../include/uapi/asm-generic/int-ll64.h" 2







typedef __signed__ char __s8;
typedef unsigned char __u8;

typedef __signed__ short __s16;
typedef unsigned short __u16;

typedef __signed__ int __s32;
typedef unsigned int __u32;


__extension__ typedef __signed__ long long __s64;
__extension__ typedef unsigned long long __u64;
# 12 "../include/asm-generic/int-ll64.h" 2




typedef __s8 s8;
typedef __u8 u8;
typedef __s16 s16;
typedef __u16 u16;
typedef __s32 s32;
typedef __u32 u32;
typedef __s64 s64;
typedef __u64 u64;
# 8 "../include/uapi/asm-generic/types.h" 2
# 2 "./arch/hexagon/include/generated/uapi/asm/types.h" 2
# 6 "../include/linux/bitops.h" 2
# 1 "../include/linux/bits.h" 1





# 1 "../include/vdso/bits.h" 1
# 7 "../include/linux/bits.h" 2
# 1 "../include/uapi/linux/bits.h" 1
# 8 "../include/linux/bits.h" 2
# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 9 "../include/linux/bits.h" 2
# 22 "../include/linux/bits.h"
# 1 "../include/linux/build_bug.h" 1




# 1 "../include/linux/compiler.h" 1
# 15 "../include/linux/compiler.h"
void ftrace_likely_update(struct ftrace_likely_data *f, int val,
     int expect, int is_constant);
# 235 "../include/linux/compiler.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *offset_to_ptr(const int *off)
{
 return (void *)((unsigned long)off + *off);
}
# 305 "../include/linux/compiler.h"
# 1 "./arch/hexagon/include/generated/asm/rwonce.h" 1
# 1 "../include/asm-generic/rwonce.h" 1
# 26 "../include/asm-generic/rwonce.h"
# 1 "../include/linux/kasan-checks.h" 1




# 1 "../include/linux/types.h" 1





# 1 "../include/uapi/linux/types.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 6 "../include/uapi/linux/types.h" 2








# 1 "../include/uapi/linux/posix_types.h" 1




# 1 "../include/linux/stddef.h" 1




# 1 "../include/uapi/linux/stddef.h" 1
# 6 "../include/linux/stddef.h" 2




enum {
 false = 0,
 true = 1
};
# 6 "../include/uapi/linux/posix_types.h" 2
# 25 "../include/uapi/linux/posix_types.h"
typedef struct {
 unsigned long fds_bits[1024 / (8 * sizeof(long))];
} __kernel_fd_set;


typedef void (*__kernel_sighandler_t)(int);


typedef int __kernel_key_t;
typedef int __kernel_mqd_t;

# 1 "./arch/hexagon/include/generated/uapi/asm/posix_types.h" 1
# 1 "../include/uapi/asm-generic/posix_types.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 6 "../include/uapi/asm-generic/posix_types.h" 2
# 15 "../include/uapi/asm-generic/posix_types.h"
typedef long __kernel_long_t;
typedef unsigned long __kernel_ulong_t;



typedef __kernel_ulong_t __kernel_ino_t;



typedef unsigned int __kernel_mode_t;



typedef int __kernel_pid_t;



typedef int __kernel_ipc_pid_t;



typedef unsigned int __kernel_uid_t;
typedef unsigned int __kernel_gid_t;



typedef __kernel_long_t __kernel_suseconds_t;



typedef int __kernel_daddr_t;



typedef unsigned int __kernel_uid32_t;
typedef unsigned int __kernel_gid32_t;



typedef __kernel_uid_t __kernel_old_uid_t;
typedef __kernel_gid_t __kernel_old_gid_t;



typedef unsigned int __kernel_old_dev_t;
# 68 "../include/uapi/asm-generic/posix_types.h"
typedef unsigned int __kernel_size_t;
typedef int __kernel_ssize_t;
typedef int __kernel_ptrdiff_t;
# 79 "../include/uapi/asm-generic/posix_types.h"
typedef struct {
 int val[2];
} __kernel_fsid_t;





typedef __kernel_long_t __kernel_off_t;
typedef long long __kernel_loff_t;
typedef __kernel_long_t __kernel_old_time_t;



typedef long long __kernel_time64_t;
typedef __kernel_long_t __kernel_clock_t;
typedef int __kernel_timer_t;
typedef int __kernel_clockid_t;
typedef char * __kernel_caddr_t;
typedef unsigned short __kernel_uid16_t;
typedef unsigned short __kernel_gid16_t;
# 2 "./arch/hexagon/include/generated/uapi/asm/posix_types.h" 2
# 37 "../include/uapi/linux/posix_types.h" 2
# 15 "../include/uapi/linux/types.h" 2
# 36 "../include/uapi/linux/types.h"
typedef __u16 __le16;
typedef __u16 __be16;
typedef __u32 __le32;
typedef __u32 __be32;
typedef __u64 __le64;
typedef __u64 __be64;

typedef __u16 __sum16;
typedef __u32 __wsum;
# 59 "../include/uapi/linux/types.h"
typedef unsigned __poll_t;
# 7 "../include/linux/types.h" 2
# 18 "../include/linux/types.h"
typedef u32 __kernel_dev_t;

typedef __kernel_fd_set fd_set;
typedef __kernel_dev_t dev_t;
typedef __kernel_ulong_t ino_t;
typedef __kernel_mode_t mode_t;
typedef unsigned short umode_t;
typedef u32 nlink_t;
typedef __kernel_off_t off_t;
typedef __kernel_pid_t pid_t;
typedef __kernel_daddr_t daddr_t;
typedef __kernel_key_t key_t;
typedef __kernel_suseconds_t suseconds_t;
typedef __kernel_timer_t timer_t;
typedef __kernel_clockid_t clockid_t;
typedef __kernel_mqd_t mqd_t;

typedef _Bool bool;

typedef __kernel_uid32_t uid_t;
typedef __kernel_gid32_t gid_t;
typedef __kernel_uid16_t uid16_t;
typedef __kernel_gid16_t gid16_t;

typedef unsigned long uintptr_t;
typedef long intptr_t;
# 52 "../include/linux/types.h"
typedef __kernel_loff_t loff_t;
# 61 "../include/linux/types.h"
typedef __kernel_size_t size_t;




typedef __kernel_ssize_t ssize_t;




typedef __kernel_ptrdiff_t ptrdiff_t;




typedef __kernel_clock_t clock_t;




typedef __kernel_caddr_t caddr_t;



typedef unsigned char u_char;
typedef unsigned short u_short;
typedef unsigned int u_int;
typedef unsigned long u_long;


typedef unsigned char unchar;
typedef unsigned short ushort;
typedef unsigned int uint;
typedef unsigned long ulong;




typedef u8 u_int8_t;
typedef s8 int8_t;
typedef u16 u_int16_t;
typedef s16 int16_t;
typedef u32 u_int32_t;
typedef s32 int32_t;



typedef u8 uint8_t;
typedef u16 uint16_t;
typedef u32 uint32_t;


typedef u64 uint64_t;
typedef u64 u_int64_t;
typedef s64 int64_t;
# 124 "../include/linux/types.h"
typedef s64 ktime_t;
# 134 "../include/linux/types.h"
typedef u64 sector_t;
typedef u64 blkcnt_t;
# 154 "../include/linux/types.h"
typedef u32 dma_addr_t;


typedef unsigned int gfp_t;
typedef unsigned int slab_flags_t;
typedef unsigned int fmode_t;




typedef u32 phys_addr_t;


typedef phys_addr_t resource_size_t;





typedef unsigned long irq_hw_number_t;

typedef struct {
 int counter;
} atomic_t;
# 187 "../include/linux/types.h"
typedef struct {
 atomic_t refcnt;
} rcuref_t;



struct list_head {
 struct list_head *next, *prev;
};

struct hlist_head {
 struct hlist_node *first;
};

struct hlist_node {
 struct hlist_node *next, **pprev;
};

struct ustat {
 __kernel_daddr_t f_tfree;



 unsigned long f_tinode;

 char f_fname[6];
 char f_fpack[6];
};
# 235 "../include/linux/types.h"
struct callback_head {
 struct callback_head *next;
 void (*func)(struct callback_head *head);
} __attribute__((aligned(sizeof(void *))));


typedef void (*rcu_callback_t)(struct callback_head *head);
typedef void (*call_rcu_func_t)(struct callback_head *head, rcu_callback_t func);

typedef void (*swap_r_func_t)(void *a, void *b, int size, const void *priv);
typedef void (*swap_func_t)(void *a, void *b, int size);

typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
typedef int (*cmp_func_t)(const void *a, const void *b);
# 6 "../include/linux/kasan-checks.h" 2
# 22 "../include/linux/kasan-checks.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __kasan_check_read(const volatile void *p, unsigned int size)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __kasan_check_write(const volatile void *p, unsigned int size)
{
 return true;
}
# 40 "../include/linux/kasan-checks.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_check_read(const volatile void *p, unsigned int size)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_check_write(const volatile void *p, unsigned int size)
{
 return true;
}
# 27 "../include/asm-generic/rwonce.h" 2
# 1 "../include/linux/kcsan-checks.h" 1
# 189 "../include/linux/kcsan-checks.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_check_access(const volatile void *ptr, size_t size,
     int type) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_mb(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_wmb(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_rmb(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_release(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_disable_current(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_enable_current(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_enable_current_nowarn(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_nestable_atomic_begin(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_nestable_atomic_end(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_flat_atomic_begin(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_flat_atomic_end(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_atomic_next(int n) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_set_access_mask(unsigned long mask) { }

struct kcsan_scoped_access { };

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
     struct kcsan_scoped_access *sa) { return sa; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
# 229 "../include/linux/kcsan-checks.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_check_access(const volatile void *ptr, size_t size,
          int type) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_enable_current(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kcsan_disable_current(void) { }
# 28 "../include/asm-generic/rwonce.h" 2
# 64 "../include/asm-generic/rwonce.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long __read_once_word_nocheck(const void *addr)
{
 return (*(const volatile typeof( _Generic((*(unsigned long *)addr), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*(unsigned long *)addr))) *)&(*(unsigned long *)addr));
}
# 82 "../include/asm-generic/rwonce.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long read_word_at_a_time(const void *addr)
{
 kasan_check_read(addr, 1);
 return *(unsigned long *)addr;
}
# 2 "./arch/hexagon/include/generated/asm/rwonce.h" 2
# 306 "../include/linux/compiler.h" 2
# 6 "../include/linux/build_bug.h" 2
# 23 "../include/linux/bits.h" 2
# 7 "../include/linux/bitops.h" 2
# 1 "../include/linux/typecheck.h" 1
# 8 "../include/linux/bitops.h" 2

# 1 "../include/uapi/linux/kernel.h" 1




# 1 "../include/uapi/linux/sysinfo.h" 1







struct sysinfo {
 __kernel_long_t uptime;
 __kernel_ulong_t loads[3];
 __kernel_ulong_t totalram;
 __kernel_ulong_t freeram;
 __kernel_ulong_t sharedram;
 __kernel_ulong_t bufferram;
 __kernel_ulong_t totalswap;
 __kernel_ulong_t freeswap;
 __u16 procs;
 __u16 pad;
 __kernel_ulong_t totalhigh;
 __kernel_ulong_t freehigh;
 __u32 mem_unit;
 char _f[20-2*sizeof(__kernel_ulong_t)-sizeof(__u32)];
};
# 6 "../include/uapi/linux/kernel.h" 2
# 10 "../include/linux/bitops.h" 2
# 19 "../include/linux/bitops.h"
extern unsigned int __sw_hweight8(unsigned int w);
extern unsigned int __sw_hweight16(unsigned int w);
extern unsigned int __sw_hweight32(unsigned int w);
extern unsigned long __sw_hweight64(__u64 w);






# 1 "../include/asm-generic/bitops/generic-non-atomic.h" 1






# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 1 "../include/asm-generic/barrier.h" 1
# 18 "../include/asm-generic/barrier.h"
# 1 "./arch/hexagon/include/generated/asm/rwonce.h" 1
# 19 "../include/asm-generic/barrier.h" 2
# 2 "./arch/hexagon/include/generated/asm/barrier.h" 2
# 8 "../include/asm-generic/bitops/generic-non-atomic.h" 2
# 27 "../include/asm-generic/bitops/generic-non-atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
generic___set_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);

 *p |= mask;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
generic___clear_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);

 *p &= ~mask;
}
# 54 "../include/asm-generic/bitops/generic-non-atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
generic___change_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);

 *p ^= mask;
}
# 72 "../include/asm-generic/bitops/generic-non-atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
generic___test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);
 unsigned long old = *p;

 *p = old | mask;
 return (old & mask) != 0;
}
# 92 "../include/asm-generic/bitops/generic-non-atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
generic___test_and_clear_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);
 unsigned long old = *p;

 *p = old & ~mask;
 return (old & mask) != 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
generic___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
{
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);
 unsigned long old = *p;

 *p = old ^ mask;
 return (old & mask) != 0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
generic_test_bit(unsigned long nr, const volatile unsigned long *addr)
{





 return 1UL & (addr[((nr) / 32)] >> (nr & (32 -1)));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
generic_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
{
 unsigned long *p = ((unsigned long *)addr) + ((nr) / 32);
 return 1UL & (({ typeof( _Generic((*p), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*p))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_0(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*p) == sizeof(char) || sizeof(*p) == sizeof(short) || sizeof(*p) == sizeof(int) || sizeof(*p) == sizeof(long)) || sizeof(*p) == sizeof(long long))) __compiletime_assert_0(); } while (0); (*(const volatile typeof( _Generic((*p), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*p))) *)&(*p)); }); __asm__ __volatile__("": : :"memory"); (typeof(*p))___p1; }) >> (nr & (32 -1)));
}
# 165 "../include/asm-generic/bitops/generic-non-atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
const_test_bit(unsigned long nr, const volatile unsigned long *addr)
{
 const unsigned long *p = (const unsigned long *)addr + ((nr) / 32);
 unsigned long mask = ((((1UL))) << ((nr) % 32));
 unsigned long val = *p;

 return !!(val & mask);
}
# 30 "../include/linux/bitops.h" 2
# 68 "../include/linux/bitops.h"
# 1 "../arch/hexagon/include/asm/bitops.h" 1
# 12 "../arch/hexagon/include/asm/bitops.h"
# 1 "../arch/hexagon/include/uapi/asm/byteorder.h" 1
# 27 "../arch/hexagon/include/uapi/asm/byteorder.h"
# 1 "../include/linux/byteorder/little_endian.h" 1




# 1 "../include/uapi/linux/byteorder/little_endian.h" 1
# 14 "../include/uapi/linux/byteorder/little_endian.h"
# 1 "../include/linux/swab.h" 1




# 1 "../include/uapi/linux/swab.h" 1






# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 8 "../include/uapi/linux/swab.h" 2
# 1 "../arch/hexagon/include/uapi/asm/swab.h" 1
# 9 "../include/uapi/linux/swab.h" 2
# 48 "../include/uapi/linux/swab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) __u16 __fswab16(__u16 val)
{



 return ((__u16)( (((__u16)(val) & (__u16)0x00ffU) << 8) | (((__u16)(val) & (__u16)0xff00U) >> 8)));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) __u32 __fswab32(__u32 val)
{



 return ((__u32)( (((__u32)(val) & (__u32)0x000000ffUL) << 24) | (((__u32)(val) & (__u32)0x0000ff00UL) << 8) | (((__u32)(val) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(val) & (__u32)0xff000000UL) >> 24)));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) __u64 __fswab64(__u64 val)
{



 __u32 h = val >> 32;
 __u32 l = val & ((1ULL << 32) - 1);
 return (((__u64)__fswab32(l)) << 32) | ((__u64)(__fswab32(h)));



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) __u32 __fswahw32(__u32 val)
{



 return ((__u32)( (((__u32)(val) & (__u32)0x0000ffffUL) << 16) | (((__u32)(val) & (__u32)0xffff0000UL) >> 16)));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) __u32 __fswahb32(__u32 val)
{



 return ((__u32)( (((__u32)(val) & (__u32)0x00ff00ffUL) << 8) | (((__u32)(val) & (__u32)0xff00ff00UL) >> 8)));

}
# 136 "../include/uapi/linux/swab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long __swab(const unsigned long y)
{



 return (__u32)(__builtin_constant_p(y) ? ((__u32)( (((__u32)(y) & (__u32)0x000000ffUL) << 24) | (((__u32)(y) & (__u32)0x0000ff00UL) << 8) | (((__u32)(y) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(y) & (__u32)0xff000000UL) >> 24))) : __fswab32(y));

}
# 171 "../include/uapi/linux/swab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u16 __swab16p(const __u16 *p)
{



 return (__u16)(__builtin_constant_p(*p) ? ((__u16)( (((__u16)(*p) & (__u16)0x00ffU) << 8) | (((__u16)(*p) & (__u16)0xff00U) >> 8))) : __fswab16(*p));

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u32 __swab32p(const __u32 *p)
{



 return (__u32)(__builtin_constant_p(*p) ? ((__u32)( (((__u32)(*p) & (__u32)0x000000ffUL) << 24) | (((__u32)(*p) & (__u32)0x0000ff00UL) << 8) | (((__u32)(*p) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(*p) & (__u32)0xff000000UL) >> 24))) : __fswab32(*p));

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u64 __swab64p(const __u64 *p)
{



 return (__u64)(__builtin_constant_p(*p) ? ((__u64)( (((__u64)(*p) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(*p) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(*p) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(*p) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(*p) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(*p) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(*p) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(*p) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(*p));

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 __swahw32p(const __u32 *p)
{



 return (__builtin_constant_p((__u32)(*p)) ? ((__u32)( (((__u32)(*p) & (__u32)0x0000ffffUL) << 16) | (((__u32)(*p) & (__u32)0xffff0000UL) >> 16))) : __fswahw32(*p));

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 __swahb32p(const __u32 *p)
{



 return (__builtin_constant_p((__u32)(*p)) ? ((__u32)( (((__u32)(*p) & (__u32)0x00ff00ffUL) << 8) | (((__u32)(*p) & (__u32)0xff00ff00UL) >> 8))) : __fswahb32(*p));

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __swab16s(__u16 *p)
{



 *p = __swab16p(p);

}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __swab32s(__u32 *p)
{



 *p = __swab32p(p);

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __swab64s(__u64 *p)
{



 *p = __swab64p(p);

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __swahw32s(__u32 *p)
{



 *p = __swahw32p(p);

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __swahb32s(__u32 *p)
{



 *p = __swahb32p(p);

}
# 6 "../include/linux/swab.h" 2
# 24 "../include/linux/swab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void swab16_array(u16 *buf, unsigned int words)
{
 while (words--) {
  __swab16s(buf);
  buf++;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void swab32_array(u32 *buf, unsigned int words)
{
 while (words--) {
  __swab32s(buf);
  buf++;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void swab64_array(u64 *buf, unsigned int words)
{
 while (words--) {
  __swab64s(buf);
  buf++;
 }
}
# 15 "../include/uapi/linux/byteorder/little_endian.h" 2
# 45 "../include/uapi/linux/byteorder/little_endian.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __le64 __cpu_to_le64p(const __u64 *p)
{
 return ( __le64)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u64 __le64_to_cpup(const __le64 *p)
{
 return ( __u64)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __le32 __cpu_to_le32p(const __u32 *p)
{
 return ( __le32)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u32 __le32_to_cpup(const __le32 *p)
{
 return ( __u32)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __le16 __cpu_to_le16p(const __u16 *p)
{
 return ( __le16)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u16 __le16_to_cpup(const __le16 *p)
{
 return ( __u16)*p;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __be64 __cpu_to_be64p(const __u64 *p)
{
 return ( __be64)__swab64p(p);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u64 __be64_to_cpup(const __be64 *p)
{
 return __swab64p((__u64 *)p);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __be32 __cpu_to_be32p(const __u32 *p)
{
 return ( __be32)__swab32p(p);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u32 __be32_to_cpup(const __be32 *p)
{
 return __swab32p((__u32 *)p);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __be16 __cpu_to_be16p(const __u16 *p)
{
 return ( __be16)__swab16p(p);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u16 __be16_to_cpup(const __be16 *p)
{
 return __swab16p((__u16 *)p);
}
# 6 "../include/linux/byteorder/little_endian.h" 2





# 1 "../include/linux/byteorder/generic.h" 1
# 144 "../include/linux/byteorder/generic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void le16_add_cpu(__le16 *var, u16 val)
{
 *var = (( __le16)(__u16)((( __u16)(__le16)(*var)) + val));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void le32_add_cpu(__le32 *var, u32 val)
{
 *var = (( __le32)(__u32)((( __u32)(__le32)(*var)) + val));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void le64_add_cpu(__le64 *var, u64 val)
{
 *var = (( __le64)(__u64)((( __u64)(__le64)(*var)) + val));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void le32_to_cpu_array(u32 *buf, unsigned int words)
{
 while (words--) {
  do { (void)(buf); } while (0);
  buf++;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_to_le32_array(u32 *buf, unsigned int words)
{
 while (words--) {
  do { (void)(buf); } while (0);
  buf++;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void be16_add_cpu(__be16 *var, u16 val)
{
 *var = (( __be16)(__u16)(__builtin_constant_p(((__u16)(__builtin_constant_p(( __u16)(__be16)(*var)) ? ((__u16)( (((__u16)(( __u16)(__be16)(*var)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(*var)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(*var))) + val)) ? ((__u16)( (((__u16)(((__u16)(__builtin_constant_p(( __u16)(__be16)(*var)) ? ((__u16)( (((__u16)(( __u16)(__be16)(*var)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(*var)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(*var))) + val)) & (__u16)0x00ffU) << 8) | (((__u16)(((__u16)(__builtin_constant_p(( __u16)(__be16)(*var)) ? ((__u16)( (((__u16)(( __u16)(__be16)(*var)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(*var)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(*var))) + val)) & (__u16)0xff00U) >> 8))) : __fswab16(((__u16)(__builtin_constant_p(( __u16)(__be16)(*var)) ? ((__u16)( (((__u16)(( __u16)(__be16)(*var)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(*var)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(*var))) + val))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void be32_add_cpu(__be32 *var, u32 val)
{
 *var = (( __be32)(__u32)(__builtin_constant_p(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? ((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val)) ? ((__u32)( (((__u32)(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? ((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val)) & (__u32)0x000000ffUL) << 24) | (((__u32)(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? ((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? ((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? 
((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val)) & (__u32)0xff000000UL) >> 24))) : __fswab32(((__u32)(__builtin_constant_p(( __u32)(__be32)(*var)) ? ((__u32)( (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(*var)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(*var))) + val))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void be64_add_cpu(__be64 *var, u64 val)
{
 *var = (( __be64)(__u64)(__builtin_constant_p(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) ? ((__u64)( (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? 
((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? 
((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? 
((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? 
((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(((__u64)(__builtin_constant_p(( __u64)(__be64)(*var)) ? ((__u64)( (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000000000ffULL) << 56) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000000000ff00ULL) << 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000000000ff0000ULL) << 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00000000ff000000ULL) << 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x000000ff00000000ULL) >> 8) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x0000ff0000000000ULL) >> 24) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0x00ff000000000000ULL) >> 40) | (((__u64)(( __u64)(__be64)(*var)) & (__u64)0xff00000000000000ULL) >> 56))) : __fswab64(( __u64)(__be64)(*var))) + val))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_to_be32_array(__be32 *dst, const u32 *src, size_t len)
{
 size_t i;

 for (i = 0; i < len; i++)
  dst[i] = (( __be32)(__u32)(__builtin_constant_p((src[i])) ? ((__u32)( (((__u32)((src[i])) & (__u32)0x000000ffUL) << 24) | (((__u32)((src[i])) & (__u32)0x0000ff00UL) << 8) | (((__u32)((src[i])) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((src[i])) & (__u32)0xff000000UL) >> 24))) : __fswab32((src[i]))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void be32_to_cpu_array(u32 *dst, const __be32 *src, size_t len)
{
 size_t i;

 for (i = 0; i < len; i++)
  dst[i] = (__u32)(__builtin_constant_p(( __u32)(__be32)(src[i])) ? ((__u32)( (((__u32)(( __u32)(__be32)(src[i])) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(src[i])) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(src[i])) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(src[i])) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(src[i])));
}
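The dense expressions in the four helpers above are the fully preprocessed expansions of the generic byteorder one-liners (the `__builtin_constant_p`/shift-and-mask ladder is just `__swab32()` inlined for the constant case). A portable sketch of what `be32_add_cpu()` reads like before expansion; `swab32` here is a hand-rolled stand-in, not the kernel symbol:

```c
#include <stdint.h>

/* Stand-in for the kernel's byte swap: on a little-endian host,
 * cpu_to_be32()/be32_to_cpu() are exactly this shift-and-mask swap,
 * which is what the __fswab32() expansion above computes. */
static uint32_t swab32(uint32_t x)
{
	return ((x & 0x000000ffU) << 24) |
	       ((x & 0x0000ff00U) <<  8) |
	       ((x & 0x00ff0000U) >>  8) |
	       ((x & 0xff000000U) >> 24);
}

/* Unexpanded, be32_add_cpu() is just:
 *   *var = cpu_to_be32(be32_to_cpu(*var) + val);
 * i.e. convert to CPU order, add, convert back. */
static void be32_add_cpu_sketch(uint32_t *var, uint32_t val)
{
	*var = swab32(swab32(*var) + val);
}
```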
# 12 "../include/linux/byteorder/little_endian.h" 2
# 28 "../arch/hexagon/include/uapi/asm/byteorder.h" 2
# 13 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "../arch/hexagon/include/asm/atomic.h" 1
# 12 "../arch/hexagon/include/asm/atomic.h"
# 1 "../arch/hexagon/include/asm/cmpxchg.h" 1
# 22 "../arch/hexagon/include/asm/cmpxchg.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
__arch_xchg(unsigned long x, volatile void *ptr, int size)
{
 unsigned long retval;


 if (size != 4) do { asm volatile("brkpt;\n"); } while (1);

 __asm__ __volatile__ (
 "1:	%0 = memw_locked(%1);\n"
 "	memw_locked(%1,P0) = %2;\n"
 "	if (!P0) jump 1b;\n"
 : "=&r" (retval)
 : "r" (ptr), "r" (x)
 : "memory", "p0"
 );
 return retval;
}
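The asm above is hexagon's load-locked/store-conditional retry loop: `memw_locked(%1)` loads with a reservation, the conditional store sets `P0`, and the loop jumps back to `1:` if the reservation was lost. A portable equivalent of the same exchange semantics (hypothetical helper name), using the compiler's atomic builtins:

```c
/* Same effect as the memw_locked loop: atomically swap in x and
 * return the previous value. Ordering here is relaxed, matching the
 * barrier-free asm body above. */
static unsigned int xchg_sketch(volatile unsigned int *ptr, unsigned int x)
{
	return __atomic_exchange_n(ptr, x, __ATOMIC_RELAXED);
}
```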
# 13 "../arch/hexagon/include/asm/atomic.h" 2
# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 14 "../arch/hexagon/include/asm/atomic.h" 2



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_set(atomic_t *v, int new)
{
 asm volatile(
  "1:	r6 = memw_locked(%0);\n"
  "	memw_locked(%0,p0) = %1;\n"
  "	if (!P0) jump 1b;\n"
  :
  : "r" (&v->counter), "r" (new)
  : "memory", "p0", "r6"
 );
}
# 85 "../arch/hexagon/include/asm/atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_add(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""add" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_add_return(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""add" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_add(int i, atomic_t *v) { int output, val; __asm__ __volatile__ ( "1:	%0 = memw_locked(%2);\n" "	%1 = ""add" "(%0,%3);\n" "	memw_locked(%2,P3)=%1;\n" "	if (!P3) jump 1b;\n" : "=&r" (output), "=&r" (val) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_sub(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""sub" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_sub_return(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""sub" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_sub(int i, atomic_t *v) { int output, val; __asm__ __volatile__ ( "1:	%0 = memw_locked(%2);\n" "	%1 = ""sub" "(%0,%3);\n" "	memw_locked(%2,P3)=%1;\n" "	if (!P3) jump 1b;\n" : "=&r" (output), "=&r" (val) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; }
# 96 "../arch/hexagon/include/asm/atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_and(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""and" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_and(int i, atomic_t *v) { int output, val; __asm__ __volatile__ ( "1:	%0 = memw_locked(%2);\n" "	%1 = ""and" "(%0,%3);\n" "	memw_locked(%2,P3)=%1;\n" "	if (!P3) jump 1b;\n" : "=&r" (output), "=&r" (val) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_or(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""or" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_or(int i, atomic_t *v) { int output, val; __asm__ __volatile__ ( "1:	%0 = memw_locked(%2);\n" "	%1 = ""or" "(%0,%3);\n" "	memw_locked(%2,P3)=%1;\n" "	if (!P3) jump 1b;\n" : "=&r" (output), "=&r" (val) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_atomic_xor(int i, atomic_t *v) { int output; __asm__ __volatile__ ( "1:	%0 = memw_locked(%1);\n" "	%0 = ""xor" "(%0,%2);\n" "	memw_locked(%1,P3)=%0;\n" "	if (!P3) jump 1b;\n" : "=&r" (output) : "r" (&v->counter), "r" (i) : "memory", "p3" ); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_xor(int i, atomic_t *v) { int output, val; __asm__ __volatile__ ( "1:	%0 = memw_locked(%2);\n" "	%1 = ""xor" "(%0,%3);\n" "	memw_locked(%2,P3)=%1;\n" "	if (!P3) jump 1b;\n" : "=&r" (output), "=&r" (val) : "r" (&v->counter), "r" (i) : "memory", "p3" ); return output; }
# 109 "../arch/hexagon/include/asm/atomic.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
 int __oldval;
 register int tmp;

 asm volatile(
  "1:	%0 = memw_locked(%2);"
  "	{"
  "		p3 = cmp.eq(%0, %4);"
  "		if (p3.new) jump:nt 2f;"
  "		%1 = add(%0, %3);"
  "	}"
  "	memw_locked(%2, p3) = %1;"
  "	{"
  "		if (!p3) jump 1b;"
  "	}"
  "2:"
  : "=&r" (__oldval), "=&r" (tmp)
  : "r" (v), "r" (a), "r" (u)
  : "memory", "p3"
 );
 return __oldval;
}
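`arch_atomic_fetch_add_unless()` above atomically adds `a` to `*v` unless the current value equals `u`, returning the old value either way; the asm folds the early-out (`cmp.eq` / `jump:nt 2f`) into a single LL/SC loop. A portable sketch of the same semantics with a compare-exchange loop (helper name is illustrative):

```c
/* Add @a to *@v unless it currently equals @u; return the old value. */
static int fetch_add_unless_sketch(int *v, int a, int u)
{
	int old = __atomic_load_n(v, __ATOMIC_RELAXED);

	while (old != u &&
	       !__atomic_compare_exchange_n(v, &old, old + a, 1,
					    __ATOMIC_RELAXED, __ATOMIC_RELAXED))
		; /* a failed CAS refreshed @old; retry */
	return old;
}
```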
# 14 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 15 "../arch/hexagon/include/asm/bitops.h" 2
# 31 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_clear_bit(int nr, volatile void *addr)
{
 int oldval;

 __asm__ __volatile__ (
 "	{R10 = %1; R11 = asr(%2,#5); }\n"
 "	{R10 += asl(R11,#2); R11 = and(%2,#0x1f)}\n"
 "1:	R12 = memw_locked(R10);\n"
 "	{ P0 = tstbit(R12,R11); R12 = clrbit(R12,R11); }\n"
 "	memw_locked(R10,P1) = R12;\n"
 "	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 : "=&r" (oldval)
 : "r" (addr), "r" (nr)
 : "r10", "r11", "r12", "p0", "p1", "memory"
 );

 return oldval;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_set_bit(int nr, volatile void *addr)
{
 int oldval;

 __asm__ __volatile__ (
 "	{R10 = %1; R11 = asr(%2,#5); }\n"
 "	{R10 += asl(R11,#2); R11 = and(%2,#0x1f)}\n"
 "1:	R12 = memw_locked(R10);\n"
 "	{ P0 = tstbit(R12,R11); R12 = setbit(R12,R11); }\n"
 "	memw_locked(R10,P1) = R12;\n"
 "	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 : "=&r" (oldval)
 : "r" (addr), "r" (nr)
 : "r10", "r11", "r12", "p0", "p1", "memory"
 );


 return oldval;

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_change_bit(int nr, volatile void *addr)
{
 int oldval;

 __asm__ __volatile__ (
 "	{R10 = %1; R11 = asr(%2,#5); }\n"
 "	{R10 += asl(R11,#2); R11 = and(%2,#0x1f)}\n"
 "1:	R12 = memw_locked(R10);\n"
 "	{ P0 = tstbit(R12,R11); R12 = togglebit(R12,R11); }\n"
 "	memw_locked(R10,P1) = R12;\n"
 "	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 : "=&r" (oldval)
 : "r" (addr), "r" (nr)
 : "r10", "r11", "r12", "p0", "p1", "memory"
 );

 return oldval;

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_bit(int nr, volatile void *addr)
{
 test_and_clear_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_bit(int nr, volatile void *addr)
{
 test_and_set_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void change_bit(int nr, volatile void *addr)
{
 test_and_change_bit(nr, addr);
}
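In the bitop asm above, `asr(%2,#5)` and `and(%2,#0x1f)` split the bit number into a 32-bit word index (`nr / 32`) and a bit position within that word (`nr % 32`), and the `tstbit`/`setbit` pair under `memw_locked` makes the read-modify-write atomic. A portable sketch of `test_and_set_bit()` along those lines:

```c
/* Atomically set bit @nr in the bitmap at @addr and return whether it
 * was already set. */
static int test_and_set_bit_sketch(int nr, unsigned int *addr)
{
	unsigned int *word = addr + (nr >> 5);   /* asr(nr,#5):   word index */
	unsigned int mask = 1u << (nr & 0x1f);   /* and(nr,#0x1f): bit in word */
	unsigned int old = __atomic_fetch_or(word, mask, __ATOMIC_RELAXED);

	return (old & mask) != 0;
}
```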
# 130 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
arch___clear_bit(unsigned long nr, volatile unsigned long *addr)
{
 test_and_clear_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
arch___set_bit(unsigned long nr, volatile unsigned long *addr)
{
 test_and_set_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
arch___change_bit(unsigned long nr, volatile unsigned long *addr)
{
 test_and_change_bit(nr, addr);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
arch___test_and_clear_bit(unsigned long nr, volatile unsigned long *addr)
{
 return test_and_clear_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
arch___test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
{
 return test_and_set_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
{
 return test_and_change_bit(nr, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
{
 int retval;

 asm volatile(
 "{P0 = tstbit(%1,%2); if (P0.new) %0 = #1; if (!P0.new) %0 = #0;}\n"
 : "=&r" (retval)
 : "r" (addr[((nr) / 32)]), "r" (nr % 32)
 : "p0"
 );

 return retval;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
arch_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
{
 int retval;

 asm volatile(
 "{P0 = tstbit(%1,%2); if (P0.new) %0 = #1; if (!P0.new) %0 = #0;}\n"
 : "=&r" (retval)
 : "r" (addr[((nr) / 32)]), "r" (nr % 32)
 : "p0", "memory"
 );

 return retval;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long ffz(int x)
{
 int r;

 asm("%0 = ct1(%1);\n"
  : "=&r" (r)
  : "r" (x));
 return r;
}
# 220 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fls(unsigned int x)
{
 int r;

 asm("{ %0 = cl0(%1);}\n"
  "%0 = sub(#32,%0);\n"
  : "=&r" (r)
  : "r" (x)
  : "p0");

 return r;
}
# 241 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ffs(int x)
{
 int r;

 asm("{ P0 = cmp.eq(%1,#0); %0 = ct0(%1);}\n"
  "{ if (P0) %0 = #0; if (!P0) %0 = add(%0,#1);}\n"
  : "=&r" (r)
  : "r" (x)
  : "p0");

 return r;
}
# 263 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __ffs(unsigned long word)
{
 int num;

 asm("%0 = ct0(%1);\n"
  : "=&r" (num)
  : "r" (word));

 return num;
}
# 281 "../arch/hexagon/include/asm/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __fls(unsigned long word)
{
 int num;

 asm("%0 = cl0(%1);\n"
  "%0 = sub(#31,%0);\n"
  : "=&r" (num)
  : "r" (word));

 return num;
}
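The hexagon bit-search instructions used above map onto standard builtins: `ct0` counts trailing zeros (`__ffs`), `ct1` counts trailing ones (`ffz`), and `cl0` counts leading zeros, so `fls() = 32 - cl0` and `__fls() = 31 - cl0`. Portable sketches, assuming (as the kernel helpers do for `__ffs`/`ffz`) that a suitable bit exists:

```c
static unsigned long __ffs_sketch(unsigned long word)
{
	return __builtin_ctzl(word);          /* ct0 */
}

static long ffz_sketch(unsigned int x)
{
	return __builtin_ctz(~x);             /* ct1(x) == ct0(~x) */
}

static int fls_sketch(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0; /* sub(#32, cl0); cl0(0) == 32 */
}
```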

# 1 "../include/asm-generic/bitops/lock.h" 1




# 1 "../include/linux/atomic.h" 1







# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 9 "../include/linux/atomic.h" 2
# 80 "../include/linux/atomic.h"
# 1 "../include/linux/atomic/atomic-arch-fallback.h" 1
# 103 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg64_not_implemented(void);
# 115 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg64_acquire_not_implemented(void);
# 127 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg64_release_not_implemented(void);
# 136 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg64_relaxed_not_implemented(void);
# 146 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg128_not_implemented(void);
# 158 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg128_acquire_not_implemented(void);
# 170 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg128_release_not_implemented(void);
# 179 "../include/linux/atomic/atomic-arch-fallback.h"
extern void raw_cmpxchg128_relaxed_not_implemented(void);
# 454 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_read(const atomic_t *v)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_1(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((v)->counter) == sizeof(char) || sizeof((v)->counter) == sizeof(short) || sizeof((v)->counter) == sizeof(int) || sizeof((v)->counter) == sizeof(long)) || sizeof((v)->counter) == sizeof(long long))) __compiletime_assert_1(); } while (0); (*(const volatile typeof( _Generic(((v)->counter), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((v)->counter))) *)&((v)->counter)); });
}
# 470 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_read_acquire(const atomic_t *v)
{



 int ret;

 if ((sizeof(atomic_t) == sizeof(char) || sizeof(atomic_t) == sizeof(short) || sizeof(atomic_t) == sizeof(int) || sizeof(atomic_t) == sizeof(long))) {
  ret = ({ typeof( _Generic((*&(v)->counter), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&(v)->counter))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_2(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(v)->counter) == sizeof(char) || sizeof(*&(v)->counter) == sizeof(short) || sizeof(*&(v)->counter) == sizeof(int) || sizeof(*&(v)->counter) == sizeof(long)) || sizeof(*&(v)->counter) == sizeof(long long))) __compiletime_assert_2(); } while (0); (*(const volatile typeof( _Generic((*&(v)->counter), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&(v)->counter))) *)&(*&(v)->counter)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&(v)->counter))___p1; });
 } else {
  ret = raw_atomic_read(v);
  __asm__ __volatile__("": : :"memory");
 }

 return ret;

}
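The `_Generic`/`__compiletime_assert` tangle in the two functions above is just `READ_ONCE()` after expansion: a size-checked volatile load of `v->counter`, followed in the acquire variant by a barrier (a plain compiler barrier here, since hexagon's `smp_mb()` expands to one in this config). A sketch of that fallback pattern, with a local stand-in type:

```c
typedef struct { int counter; } atomic_sketch_t;

/* READ_ONCE() boils down to a volatile load; the trailing barrier is
 * what raw_atomic_read_acquire() adds on top of raw_atomic_read(). */
static int atomic_read_acquire_sketch(const atomic_sketch_t *v)
{
	int ret = *(const volatile int *)&v->counter; /* READ_ONCE() */

	__asm__ __volatile__("" ::: "memory");        /* acquire barrier */
	return ret;
}
```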
# 500 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_set(atomic_t *v, int i)
{
 arch_atomic_set(v, i);
}
# 517 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_set_release(atomic_t *v, int i)
{

 arch_atomic_set((v), (i));
# 530 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 543 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_add(int i, atomic_t *v)
{
 arch_atomic_add(i, v);
}
# 560 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_add_return(int i, atomic_t *v)
{

 return arch_atomic_add_return(i, v);
# 574 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 587 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_add_return_acquire(int i, atomic_t *v)
{







 return arch_atomic_add_return(i, v);



}
# 614 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_add_return_release(int i, atomic_t *v)
{






 return arch_atomic_add_return(i, v);



}
# 640 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_add_return_relaxed(int i, atomic_t *v)
{



 return arch_atomic_add_return(i, v);



}
# 663 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_add(int i, atomic_t *v)
{

 return arch_atomic_fetch_add(i, v);
# 677 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 690 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{







 return arch_atomic_fetch_add(i, v);



}
# 717 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_add_release(int i, atomic_t *v)
{






 return arch_atomic_fetch_add(i, v);



}
# 743 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
{



 return arch_atomic_fetch_add(i, v);



}
# 766 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_sub(int i, atomic_t *v)
{
 arch_atomic_sub(i, v);
}
# 783 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_sub_return(int i, atomic_t *v)
{

 return arch_atomic_sub_return(i, v);
# 797 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 810 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_sub_return_acquire(int i, atomic_t *v)
{







 return arch_atomic_sub_return(i, v);



}
# 837 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_sub_return_release(int i, atomic_t *v)
{






 return arch_atomic_sub_return(i, v);



}
# 863 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_sub_return_relaxed(int i, atomic_t *v)
{



 return arch_atomic_sub_return(i, v);



}
# 886 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_sub(int i, atomic_t *v)
{

 return arch_atomic_fetch_sub(i, v);
# 900 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 913 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{







 return arch_atomic_fetch_sub(i, v);



}
# 940 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_sub_release(int i, atomic_t *v)
{






 return arch_atomic_fetch_sub(i, v);



}
# 966 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
{



 return arch_atomic_fetch_sub(i, v);



}
# 988 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_inc(atomic_t *v)
{



 raw_atomic_add(1, v);

}
# 1008 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_inc_return(atomic_t *v)
{
# 1020 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return(1, v);

}
# 1034 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_inc_return_acquire(atomic_t *v)
{
# 1046 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return_acquire(1, v);

}
# 1060 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_inc_return_release(atomic_t *v)
{
# 1071 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return_release(1, v);

}
# 1085 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_inc_return_relaxed(atomic_t *v)
{





 return raw_atomic_add_return_relaxed(1, v);

}
# 1107 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_inc(atomic_t *v)
{
# 1119 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_add(1, v);

}
# 1133 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_inc_acquire(atomic_t *v)
{
# 1145 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_add_acquire(1, v);

}
# 1159 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_inc_release(atomic_t *v)
{
# 1170 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_add_release(1, v);

}
# 1184 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_inc_relaxed(atomic_t *v)
{





 return raw_atomic_fetch_add_relaxed(1, v);

}
# 1206 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_dec(atomic_t *v)
{



 raw_atomic_sub(1, v);

}
# 1226 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_dec_return(atomic_t *v)
{
# 1238 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_sub_return(1, v);

}
# 1252 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_dec_return_acquire(atomic_t *v)
{
# 1264 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_sub_return_acquire(1, v);

}
# 1278 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_dec_return_release(atomic_t *v)
{
# 1289 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_sub_return_release(1, v);

}
# 1303 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_dec_return_relaxed(atomic_t *v)
{





 return raw_atomic_sub_return_relaxed(1, v);

}
# 1325 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_dec(atomic_t *v)
{
# 1337 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_sub(1, v);

}
# 1351 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_dec_acquire(atomic_t *v)
{
# 1363 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_sub_acquire(1, v);

}
# 1377 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_dec_release(atomic_t *v)
{
# 1388 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_sub_release(1, v);

}
# 1402 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_dec_relaxed(atomic_t *v)
{





 return raw_atomic_fetch_sub_relaxed(1, v);

}
# 1425 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_and(int i, atomic_t *v)
{
 arch_atomic_and(i, v);
}
# 1442 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_and(int i, atomic_t *v)
{

 return arch_atomic_fetch_and(i, v);
# 1456 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 1469 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{







 return arch_atomic_fetch_and(i, v);



}
# 1496 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_and_release(int i, atomic_t *v)
{






 return arch_atomic_fetch_and(i, v);



}
# 1522 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
{



 return arch_atomic_fetch_and(i, v);



}
# 1545 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_andnot(int i, atomic_t *v)
{



 raw_atomic_and(~i, v);

}
# 1566 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_andnot(int i, atomic_t *v)
{
# 1578 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_and(~i, v);

}
# 1593 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
# 1605 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_and_acquire(~i, v);

}
# 1620 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
# 1631 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_fetch_and_release(~i, v);

}
# 1646 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{





 return raw_atomic_fetch_and_relaxed(~i, v);

}
# 1669 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_or(int i, atomic_t *v)
{
 arch_atomic_or(i, v);
}
# 1686 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_or(int i, atomic_t *v)
{

 return arch_atomic_fetch_or(i, v);
# 1700 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 1713 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{







 return arch_atomic_fetch_or(i, v);



}
# 1740 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_or_release(int i, atomic_t *v)
{






 return arch_atomic_fetch_or(i, v);



}
# 1766 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
{



 return arch_atomic_fetch_or(i, v);



}
# 1789 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_xor(int i, atomic_t *v)
{
 arch_atomic_xor(i, v);
}
# 1806 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_xor(int i, atomic_t *v)
{

 return arch_atomic_fetch_xor(i, v);
# 1820 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 1833 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{







 return arch_atomic_fetch_xor(i, v);



}
# 1860 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_xor_release(int i, atomic_t *v)
{






 return arch_atomic_fetch_xor(i, v);



}
# 1886 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
{



 return arch_atomic_fetch_xor(i, v);



}
# 1909 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_xchg(atomic_t *v, int new)
{
# 1921 "../include/linux/atomic/atomic-arch-fallback.h"
 return ((__typeof__(*(&v->counter)))__arch_xchg((unsigned long)(new), (&v->counter), sizeof(*(&v->counter))));

}
# 1936 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_xchg_acquire(atomic_t *v, int new)
{
# 1948 "../include/linux/atomic/atomic-arch-fallback.h"
 return ((__typeof__(*(&v->counter)))__arch_xchg((unsigned long)(new), (&v->counter), sizeof(*(&v->counter))));

}
# 1963 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_xchg_release(atomic_t *v, int new)
{
# 1974 "../include/linux/atomic/atomic-arch-fallback.h"
 return ((__typeof__(*(&v->counter)))__arch_xchg((unsigned long)(new), (&v->counter), sizeof(*(&v->counter))));

}
# 1989 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_xchg_relaxed(atomic_t *v, int new)
{





 return ((__typeof__(*(&v->counter)))__arch_xchg((unsigned long)(new), (&v->counter), sizeof(*(&v->counter))));

}
# 2014 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
# 2026 "../include/linux/atomic/atomic-arch-fallback.h"
 return ({ __typeof__(&v->counter) __ptr = (&v->counter); __typeof__(*(&v->counter)) __old = (old); __typeof__(*(&v->counter)) __new = (new); __typeof__(*(&v->counter)) __oldval = 0; asm volatile( "1:	%0 = memw_locked(%1);\n" "	{ P0 = cmp.eq(%0,%2);\n" "	  if (!P0.new) jump:nt 2f; }\n" "	memw_locked(%1,p0) = %3;\n" "	if (!P0) jump 1b;\n" "2:\n" : "=&r" (__oldval) : "r" (__ptr), "r" (__old), "r" (__new) : "memory", "p0" ); __oldval; });

}
# 2043 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
# 2055 "../include/linux/atomic/atomic-arch-fallback.h"
 return ({ __typeof__(&v->counter) __ptr = (&v->counter); __typeof__(*(&v->counter)) __old = (old); __typeof__(*(&v->counter)) __new = (new); __typeof__(*(&v->counter)) __oldval = 0; asm volatile( "1:	%0 = memw_locked(%1);\n" "	{ P0 = cmp.eq(%0,%2);\n" "	  if (!P0.new) jump:nt 2f; }\n" "	memw_locked(%1,p0) = %3;\n" "	if (!P0) jump 1b;\n" "2:\n" : "=&r" (__oldval) : "r" (__ptr), "r" (__old), "r" (__new) : "memory", "p0" ); __oldval; });

}
# 2072 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
# 2083 "../include/linux/atomic/atomic-arch-fallback.h"
 return ({ __typeof__(&v->counter) __ptr = (&v->counter); __typeof__(*(&v->counter)) __old = (old); __typeof__(*(&v->counter)) __new = (new); __typeof__(*(&v->counter)) __oldval = 0; asm volatile( "1:	%0 = memw_locked(%1);\n" "	{ P0 = cmp.eq(%0,%2);\n" "	  if (!P0.new) jump:nt 2f; }\n" "	memw_locked(%1,p0) = %3;\n" "	if (!P0) jump 1b;\n" "2:\n" : "=&r" (__oldval) : "r" (__ptr), "r" (__old), "r" (__new) : "memory", "p0" ); __oldval; });

}
# 2100 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{





 return ({ __typeof__(&v->counter) __ptr = (&v->counter); __typeof__(*(&v->counter)) __old = (old); __typeof__(*(&v->counter)) __new = (new); __typeof__(*(&v->counter)) __oldval = 0; asm volatile( "1:	%0 = memw_locked(%1);\n" "	{ P0 = cmp.eq(%0,%2);\n" "	  if (!P0.new) jump:nt 2f; }\n" "	memw_locked(%1,p0) = %3;\n" "	if (!P0) jump 1b;\n" "2:\n" : "=&r" (__oldval) : "r" (__ptr), "r" (__old), "r" (__new) : "memory", "p0" ); __oldval; });

}
# 2126 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
# 2138 "../include/linux/atomic/atomic-arch-fallback.h"
 int r, o = *old;
 r = raw_atomic_cmpxchg(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 2160 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
# 2172 "../include/linux/atomic/atomic-arch-fallback.h"
 int r, o = *old;
 r = raw_atomic_cmpxchg_acquire(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 2194 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
# 2205 "../include/linux/atomic/atomic-arch-fallback.h"
 int r, o = *old;
 r = raw_atomic_cmpxchg_release(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 2227 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{





 int r, o = *old;
 r = raw_atomic_cmpxchg_relaxed(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 2254 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_sub_and_test(int i, atomic_t *v)
{



 return raw_atomic_sub_return(i, v) == 0;

}
# 2274 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_dec_and_test(atomic_t *v)
{



 return raw_atomic_dec_return(v) == 0;

}
# 2294 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_inc_and_test(atomic_t *v)
{



 return raw_atomic_inc_return(v) == 0;

}
# 2315 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_add_negative(int i, atomic_t *v)
{
# 2327 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return(i, v) < 0;

}
# 2342 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
# 2354 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return_acquire(i, v) < 0;

}
# 2369 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_add_negative_release(int i, atomic_t *v)
{
# 2380 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic_add_return_release(i, v) < 0;

}
# 2395 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{





 return raw_atomic_add_return_relaxed(i, v) < 0;

}
# 2420 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{

 return arch_atomic_fetch_add_unless(v, a, u);
# 2435 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 2450 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_add_unless(atomic_t *v, int a, int u)
{



 return raw_atomic_fetch_add_unless(v, a, u) != u;

}
# 2471 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_inc_not_zero(atomic_t *v)
{



 return raw_atomic_add_unless(v, 1, 0);

}
# 2492 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_inc_unless_negative(atomic_t *v)
{



 int c = raw_atomic_read(v);

 do {
  if (__builtin_expect(!!(c < 0), 0))
   return false;
 } while (!raw_atomic_try_cmpxchg(v, &c, c + 1));

 return true;

}
# 2520 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_dec_unless_positive(atomic_t *v)
{



 int c = raw_atomic_read(v);

 do {
  if (__builtin_expect(!!(c > 0), 0))
   return false;
 } while (!raw_atomic_try_cmpxchg(v, &c, c - 1));

 return true;

}
# 2548 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_atomic_dec_if_positive(atomic_t *v)
{



 int dec, c = raw_atomic_read(v);

 do {
  dec = c - 1;
  if (__builtin_expect(!!(dec < 0), 0))
   break;
 } while (!raw_atomic_try_cmpxchg(v, &c, dec));

 return dec;

}


# 1 "../include/asm-generic/atomic64.h" 1
# 12 "../include/asm-generic/atomic64.h"
typedef struct {
 s64 counter;
} atomic64_t;



extern s64 generic_atomic64_read(const atomic64_t *v);
extern void generic_atomic64_set(atomic64_t *v, s64 i);
# 32 "../include/asm-generic/atomic64.h"
extern void generic_atomic64_add(s64 a, atomic64_t *v); extern s64 generic_atomic64_add_return(s64 a, atomic64_t *v); extern s64 generic_atomic64_fetch_add(s64 a, atomic64_t *v);
extern void generic_atomic64_sub(s64 a, atomic64_t *v); extern s64 generic_atomic64_sub_return(s64 a, atomic64_t *v); extern s64 generic_atomic64_fetch_sub(s64 a, atomic64_t *v);




extern void generic_atomic64_and(s64 a, atomic64_t *v); extern s64 generic_atomic64_fetch_and(s64 a, atomic64_t *v);
extern void generic_atomic64_or(s64 a, atomic64_t *v); extern s64 generic_atomic64_fetch_or(s64 a, atomic64_t *v);
extern void generic_atomic64_xor(s64 a, atomic64_t *v); extern s64 generic_atomic64_fetch_xor(s64 a, atomic64_t *v);






extern s64 generic_atomic64_dec_if_positive(atomic64_t *v);
extern s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
extern s64 generic_atomic64_xchg(atomic64_t *v, s64 new);
extern s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
# 2568 "../include/linux/atomic/atomic-arch-fallback.h" 2
# 2580 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_read(const atomic64_t *v)
{
 return generic_atomic64_read(v);
}
# 2596 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_read_acquire(const atomic64_t *v)
{



 s64 ret;

 if ((sizeof(atomic64_t) == sizeof(char) || sizeof(atomic64_t) == sizeof(short) || sizeof(atomic64_t) == sizeof(int) || sizeof(atomic64_t) == sizeof(long))) {
  ret = ({ typeof( _Generic((*&(v)->counter), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&(v)->counter))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_3(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(v)->counter) == sizeof(char) || sizeof(*&(v)->counter) == sizeof(short) || sizeof(*&(v)->counter) == sizeof(int) || sizeof(*&(v)->counter) == sizeof(long)) || sizeof(*&(v)->counter) == sizeof(long long))) __compiletime_assert_3(); } while (0); (*(const volatile typeof( _Generic((*&(v)->counter), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&(v)->counter))) *)&(*&(v)->counter)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&(v)->counter))___p1; });
 } else {
  ret = raw_atomic64_read(v);
  __asm__ __volatile__("": : :"memory");
 }

 return ret;

}
# 2626 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_set(atomic64_t *v, s64 i)
{
 generic_atomic64_set(v, i);
}
# 2643 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_set_release(atomic64_t *v, s64 i)
{

 generic_atomic64_set(v, i);
# 2656 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 2669 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_add(s64 i, atomic64_t *v)
{
 generic_atomic64_add(i, v);
}
# 2686 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_add_return(s64 i, atomic64_t *v)
{

 return generic_atomic64_add_return(i, v);
# 2700 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 2713 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_add_return(i, v);



}
# 2740 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_add_return(i, v);



}
# 2766 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_add_return(i, v);



}
# 2789 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{

 return generic_atomic64_fetch_add(i, v);
# 2803 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 2816 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_fetch_add(i, v);



}
# 2843 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_fetch_add(i, v);



}
# 2869 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_fetch_add(i, v);



}
# 2892 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_sub(s64 i, atomic64_t *v)
{
 generic_atomic64_sub(i, v);
}
# 2909 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_sub_return(s64 i, atomic64_t *v)
{

 return generic_atomic64_sub_return(i, v);
# 2923 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 2936 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_sub_return(i, v);



}
# 2963 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_sub_return(i, v);



}
# 2989 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_sub_return(i, v);



}
# 3012 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{

 return generic_atomic64_fetch_sub(i, v);
# 3026 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 3039 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_fetch_sub(i, v);



}
# 3066 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_fetch_sub(i, v);



}
# 3092 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_fetch_sub(i, v);



}
# 3114 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_inc(atomic64_t *v)
{



 raw_atomic64_add(1, v);

}
# 3134 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_inc_return(atomic64_t *v)
{
# 3146 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return(1, v);

}
# 3160 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_inc_return_acquire(atomic64_t *v)
{
# 3172 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return_acquire(1, v);

}
# 3186 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_inc_return_release(atomic64_t *v)
{
# 3197 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return_release(1, v);

}
# 3211 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_inc_return_relaxed(atomic64_t *v)
{





 return raw_atomic64_add_return_relaxed(1, v);

}
# 3233 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_inc(atomic64_t *v)
{
# 3245 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_add(1, v);

}
# 3259 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
# 3271 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_add_acquire(1, v);

}
# 3285 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_inc_release(atomic64_t *v)
{
# 3296 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_add_release(1, v);

}
# 3310 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{





 return raw_atomic64_fetch_add_relaxed(1, v);

}
# 3332 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_dec(atomic64_t *v)
{



 raw_atomic64_sub(1, v);

}
# 3352 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_dec_return(atomic64_t *v)
{
# 3364 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_sub_return(1, v);

}
# 3378 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_dec_return_acquire(atomic64_t *v)
{
# 3390 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_sub_return_acquire(1, v);

}
# 3404 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_dec_return_release(atomic64_t *v)
{
# 3415 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_sub_return_release(1, v);

}
# 3429 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_dec_return_relaxed(atomic64_t *v)
{





 return raw_atomic64_sub_return_relaxed(1, v);

}
# 3451 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_dec(atomic64_t *v)
{
# 3463 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_sub(1, v);

}
# 3477 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
# 3489 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_sub_acquire(1, v);

}
# 3503 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_dec_release(atomic64_t *v)
{
# 3514 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_sub_release(1, v);

}
# 3528 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{





 return raw_atomic64_fetch_sub_relaxed(1, v);

}
# 3551 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_and(s64 i, atomic64_t *v)
{
 generic_atomic64_and(i, v);
}
# 3568 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{

 return generic_atomic64_fetch_and(i, v);
# 3582 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 3595 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_fetch_and(i, v);



}
# 3622 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_fetch_and(i, v);



}
# 3648 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_fetch_and(i, v);



}
# 3671 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_andnot(s64 i, atomic64_t *v)
{



 raw_atomic64_and(~i, v);

}
# 3692 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
# 3704 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_and(~i, v);

}
# 3719 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
# 3731 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_and_acquire(~i, v);

}
# 3746 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
# 3757 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_fetch_and_release(~i, v);

}
# 3772 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{





 return raw_atomic64_fetch_and_relaxed(~i, v);

}
# 3795 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_or(s64 i, atomic64_t *v)
{
 generic_atomic64_or(i, v);
}
# 3812 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{

 return generic_atomic64_fetch_or(i, v);
# 3826 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 3839 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_fetch_or(i, v);



}
# 3866 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_fetch_or(i, v);



}
# 3892 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_fetch_or(i, v);



}
# 3915 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic64_xor(s64 i, atomic64_t *v)
{
 generic_atomic64_xor(i, v);
}
# 3932 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{

 return generic_atomic64_fetch_xor(i, v);
# 3946 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 3959 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{







 return generic_atomic64_fetch_xor(i, v);



}
# 3986 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{






 return generic_atomic64_fetch_xor(i, v);



}
# 4012 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{



 return generic_atomic64_fetch_xor(i, v);



}
# 4035 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_xchg(atomic64_t *v, s64 new)
{

 return generic_atomic64_xchg(v, new);
# 4049 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 4062 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{







 return generic_atomic64_xchg(v, new);



}
# 4089 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{






 return generic_atomic64_xchg(v, new);



}
# 4115 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{



 return generic_atomic64_xchg(v, new);



}
# 4140 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{

 return generic_atomic64_cmpxchg(v, old, new);
# 4154 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 4169 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{







 return generic_atomic64_cmpxchg(v, old, new);



}
# 4198 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{






 return generic_atomic64_cmpxchg(v, old, new);



}
# 4226 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{



 return generic_atomic64_cmpxchg(v, old, new);



}
# 4252 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
# 4264 "../include/linux/atomic/atomic-arch-fallback.h"
 s64 r, o = *old;
 r = raw_atomic64_cmpxchg(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 4286 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
# 4298 "../include/linux/atomic/atomic-arch-fallback.h"
 s64 r, o = *old;
 r = raw_atomic64_cmpxchg_acquire(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 4320 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
# 4331 "../include/linux/atomic/atomic-arch-fallback.h"
 s64 r, o = *old;
 r = raw_atomic64_cmpxchg_release(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 4353 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{





 s64 r, o = *old;
 r = raw_atomic64_cmpxchg_relaxed(v, o, new);
 if (__builtin_expect(!!(r != o), 0))
  *old = r;
 return __builtin_expect(!!(r == o), 1);

}
# 4380 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{



 return raw_atomic64_sub_return(i, v) == 0;

}
# 4400 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_dec_and_test(atomic64_t *v)
{



 return raw_atomic64_dec_return(v) == 0;

}
# 4420 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_inc_and_test(atomic64_t *v)
{



 return raw_atomic64_inc_return(v) == 0;

}
# 4441 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
# 4453 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return(i, v) < 0;

}
# 4468 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
# 4480 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return_acquire(i, v) < 0;

}
# 4495 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
# 4506 "../include/linux/atomic/atomic-arch-fallback.h"
 return raw_atomic64_add_return_release(i, v) < 0;

}
# 4521 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{





 return raw_atomic64_add_return_relaxed(i, v) < 0;

}
# 4546 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{

 return generic_atomic64_fetch_add_unless(v, a, u);
# 4561 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 4576 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{



 return raw_atomic64_fetch_add_unless(v, a, u) != u;

}
# 4597 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_inc_not_zero(atomic64_t *v)
{



 return raw_atomic64_add_unless(v, 1, 0);

}
# 4618 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_inc_unless_negative(atomic64_t *v)
{



 s64 c = raw_atomic64_read(v);

 do {
  if (__builtin_expect(!!(c < 0), 0))
   return false;
 } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));

 return true;

}
# 4646 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic64_dec_unless_positive(atomic64_t *v)
{



 s64 c = raw_atomic64_read(v);

 do {
  if (__builtin_expect(!!(c > 0), 0))
   return false;
 } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));

 return true;

}
# 4674 "../include/linux/atomic/atomic-arch-fallback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
raw_atomic64_dec_if_positive(atomic64_t *v)
{

 return generic_atomic64_dec_if_positive(v);
# 4690 "../include/linux/atomic/atomic-arch-fallback.h"
}
# 81 "../include/linux/atomic.h" 2
# 1 "../include/linux/atomic/atomic-long.h" 1
# 10 "../include/linux/atomic/atomic-long.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 11 "../include/linux/atomic/atomic-long.h" 2







typedef atomic_t atomic_long_t;
# 34 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_read(const atomic_long_t *v)
{



 return raw_atomic_read(v);

}
# 54 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_read_acquire(const atomic_long_t *v)
{



 return raw_atomic_read_acquire(v);

}
# 75 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_set(atomic_long_t *v, long i)
{



 raw_atomic_set(v, i);

}
# 96 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_set_release(atomic_long_t *v, long i)
{



 raw_atomic_set_release(v, i);

}
# 117 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_add(long i, atomic_long_t *v)
{



 raw_atomic_add(i, v);

}
# 138 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_add_return(long i, atomic_long_t *v)
{



 return raw_atomic_add_return(i, v);

}
# 159 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_add_return_acquire(i, v);

}
# 180 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{



 return raw_atomic_add_return_release(i, v);

}
# 201 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_add_return_relaxed(i, v);

}
# 222 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_add(i, v);

}
# 243 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_add_acquire(i, v);

}
# 264 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_add_release(i, v);

}
# 285 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_add_relaxed(i, v);

}
# 306 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_sub(long i, atomic_long_t *v)
{



 raw_atomic_sub(i, v);

}
# 327 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_sub_return(long i, atomic_long_t *v)
{



 return raw_atomic_sub_return(i, v);

}
# 348 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_sub_return_acquire(i, v);

}
# 369 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{



 return raw_atomic_sub_return_release(i, v);

}
# 390 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_sub_return_relaxed(i, v);

}
# 411 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_sub(i, v);

}
# 432 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_sub_acquire(i, v);

}
# 453 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_sub_release(i, v);

}
# 474 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_sub_relaxed(i, v);

}
# 494 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_inc(atomic_long_t *v)
{



 raw_atomic_inc(v);

}
# 514 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_inc_return(atomic_long_t *v)
{



 return raw_atomic_inc_return(v);

}
# 534 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{



 return raw_atomic_inc_return_acquire(v);

}
# 554 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_inc_return_release(atomic_long_t *v)
{



 return raw_atomic_inc_return_release(v);

}
# 574 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{



 return raw_atomic_inc_return_relaxed(v);

}
# 594 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_inc(atomic_long_t *v)
{



 return raw_atomic_fetch_inc(v);

}
# 614 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{



 return raw_atomic_fetch_inc_acquire(v);

}
# 634 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{



 return raw_atomic_fetch_inc_release(v);

}
# 654 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{



 return raw_atomic_fetch_inc_relaxed(v);

}
# 674 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_dec(atomic_long_t *v)
{



 raw_atomic_dec(v);

}
# 694 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_dec_return(atomic_long_t *v)
{



 return raw_atomic_dec_return(v);

}
# 714 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{



 return raw_atomic_dec_return_acquire(v);

}
# 734 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_dec_return_release(atomic_long_t *v)
{



 return raw_atomic_dec_return_release(v);

}
# 754 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{



 return raw_atomic_dec_return_relaxed(v);

}
# 774 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_dec(atomic_long_t *v)
{



 return raw_atomic_fetch_dec(v);

}
# 794 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{



 return raw_atomic_fetch_dec_acquire(v);

}
# 814 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{



 return raw_atomic_fetch_dec_release(v);

}
# 834 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{



 return raw_atomic_fetch_dec_relaxed(v);

}
# 855 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_and(long i, atomic_long_t *v)
{



 raw_atomic_and(i, v);

}
# 876 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_and(i, v);

}
# 897 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_and_acquire(i, v);

}
# 918 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_and_release(i, v);

}
# 939 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_and_relaxed(i, v);

}
# 960 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_andnot(long i, atomic_long_t *v)
{



 raw_atomic_andnot(i, v);

}
# 981 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_andnot(i, v);

}
# 1002 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_andnot_acquire(i, v);

}
# 1023 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_andnot_release(i, v);

}
# 1044 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_andnot_relaxed(i, v);

}
# 1065 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_or(long i, atomic_long_t *v)
{



 raw_atomic_or(i, v);

}
# 1086 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_or(i, v);

}
# 1107 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_or_acquire(i, v);

}
# 1128 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_or_release(i, v);

}
# 1149 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_or_relaxed(i, v);

}
# 1170 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
raw_atomic_long_xor(long i, atomic_long_t *v)
{



 raw_atomic_xor(i, v);

}
# 1191 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_xor(i, v);

}
# 1212 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_xor_acquire(i, v);

}
# 1233 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_xor_release(i, v);

}
# 1254 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_fetch_xor_relaxed(i, v);

}
# 1275 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_xchg(atomic_long_t *v, long new)
{



 return raw_atomic_xchg(v, new);

}
# 1296 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
{



 return raw_atomic_xchg_acquire(v, new);

}
# 1317 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_xchg_release(atomic_long_t *v, long new)
{



 return raw_atomic_xchg_release(v, new);

}
# 1338 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{



 return raw_atomic_xchg_relaxed(v, new);

}
# 1361 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{



 return raw_atomic_cmpxchg(v, old, new);

}
# 1384 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{



 return raw_atomic_cmpxchg_acquire(v, old, new);

}
# 1407 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{



 return raw_atomic_cmpxchg_release(v, old, new);

}
# 1430 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{



 return raw_atomic_cmpxchg_relaxed(v, old, new);

}
# 1454 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{



 return raw_atomic_try_cmpxchg(v, (int *)old, new);

}
# 1478 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{



 return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);

}
# 1502 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{



 return raw_atomic_try_cmpxchg_release(v, (int *)old, new);

}
# 1526 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{



 return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);

}
# 1547 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{



 return raw_atomic_sub_and_test(i, v);

}
# 1567 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_dec_and_test(atomic_long_t *v)
{



 return raw_atomic_dec_and_test(v);

}
# 1587 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_inc_and_test(atomic_long_t *v)
{



 return raw_atomic_inc_and_test(v);

}
# 1608 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_add_negative(long i, atomic_long_t *v)
{



 return raw_atomic_add_negative(i, v);

}
# 1629 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{



 return raw_atomic_add_negative_acquire(i, v);

}
# 1650 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{



 return raw_atomic_add_negative_release(i, v);

}
# 1671 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{



 return raw_atomic_add_negative_relaxed(i, v);

}
# 1694 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{



 return raw_atomic_fetch_add_unless(v, a, u);

}
# 1717 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{



 return raw_atomic_add_unless(v, a, u);

}
# 1738 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_inc_not_zero(atomic_long_t *v)
{



 return raw_atomic_inc_not_zero(v);

}
# 1759 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{



 return raw_atomic_inc_unless_negative(v);

}
# 1780 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{



 return raw_atomic_dec_unless_positive(v);

}
# 1801 "../include/linux/atomic/atomic-long.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
raw_atomic_long_dec_if_positive(atomic_long_t *v)
{



 return raw_atomic_dec_if_positive(v);

}
# 82 "../include/linux/atomic.h" 2
# 1 "../include/linux/atomic/atomic-instrumented.h" 1
# 17 "../include/linux/atomic/atomic-instrumented.h"
# 1 "../include/linux/instrumented.h" 1
# 13 "../include/linux/instrumented.h"
# 1 "../include/linux/kmsan-checks.h" 1
# 77 "../include/linux/kmsan-checks.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_poison_memory(const void *address, size_t size,
           gfp_t flags)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_unpoison_memory(const void *address, size_t size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_check_memory(const void *address, size_t size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_copy_to_user(void *to, const void *from,
          size_t to_copy, size_t left)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_memmove(void *to, const void *from, size_t to_copy)
{
}
# 14 "../include/linux/instrumented.h" 2
# 24 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_read(const volatile void *v, size_t size)
{
 kasan_check_read(v, size);
 kcsan_check_access(v, size, 0);
}
# 38 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_write(const volatile void *v, size_t size)
{
 kasan_check_write(v, size);
 kcsan_check_access(v, size, (1 << 0));
}
# 52 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_read_write(const volatile void *v, size_t size)
{
 kasan_check_write(v, size);
 kcsan_check_access(v, size, (1 << 1) | (1 << 0));
}
# 66 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_atomic_read(const volatile void *v, size_t size)
{
 kasan_check_read(v, size);
 kcsan_check_access(v, size, (1 << 2));
}
# 80 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_atomic_write(const volatile void *v, size_t size)
{
 kasan_check_write(v, size);
 kcsan_check_access(v, size, (1 << 2) | (1 << 0));
}
# 94 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_atomic_read_write(const volatile void *v, size_t size)
{
 kasan_check_write(v, size);
 kcsan_check_access(v, size, (1 << 2) | (1 << 0) | (1 << 1));
}
# 109 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
instrument_copy_to_user(void *to, const void *from, unsigned long n)
{
 kasan_check_read(from, n);
 kcsan_check_access(from, n, 0);
 kmsan_copy_to_user(to, from, n, 0);
}
# 126 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
instrument_copy_from_user_before(const void *to, const void *from, unsigned long n)
{
 kasan_check_write(to, n);
 kcsan_check_access(to, n, (1 << 0));
}
# 143 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
instrument_copy_from_user_after(const void *to, const void *from,
    unsigned long n, unsigned long left)
{
 kmsan_unpoison_memory(to, n - left);
}
# 159 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_memcpy_before(void *to, const void *from,
           unsigned long n)
{
 kasan_check_write(to, n);
 kasan_check_read(from, n);
 kcsan_check_access(to, n, (1 << 0));
 kcsan_check_access(from, n, 0);
}
# 178 "../include/linux/instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void instrument_memcpy_after(void *to, const void *from,
          unsigned long n,
          unsigned long left)
{
 kmsan_memmove(to, from, n - left);
}
# 18 "../include/linux/atomic/atomic-instrumented.h" 2
# 29 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_read(const atomic_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic_read(v);
}
# 46 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_read_acquire(const atomic_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic_read_acquire(v);
}
# 64 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_set(atomic_t *v, int i)
{
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic_set(v, i);
}
# 82 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_set_release(atomic_t *v, int i)
{
 do { } while (0);
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic_set_release(v, i);
}
# 101 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_add(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_add(i, v);
}
# 119 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_add_return(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_return(i, v);
}
# 138 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_add_return_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_return_acquire(i, v);
}
# 156 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_add_return_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_return_release(i, v);
}
# 175 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_add_return_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_return_relaxed(i, v);
}
# 193 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_add(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_add(i, v);
}
# 212 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_add_acquire(i, v);
}
# 230 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_add_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_add_release(i, v);
}
# 249 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_add_relaxed(i, v);
}
# 267 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_sub(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_sub(i, v);
}
# 285 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_sub_return(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_sub_return(i, v);
}
# 304 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_sub_return_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_sub_return_acquire(i, v);
}
# 322 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_sub_return_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_sub_return_release(i, v);
}
# 341 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_sub_return_relaxed(i, v);
}
# 359 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_sub(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_sub(i, v);
}
# 378 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_sub_acquire(i, v);
}
# 396 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_sub_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_sub_release(i, v);
}
# 415 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_sub_relaxed(i, v);
}
# 432 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_inc(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_inc(v);
}
# 449 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_inc_return(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_return(v);
}
# 467 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_inc_return_acquire(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_return_acquire(v);
}
# 484 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_inc_return_release(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_return_release(v);
}
# 502 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_inc_return_relaxed(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_return_relaxed(v);
}
# 519 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_inc(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_inc(v);
}
# 537 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_inc_acquire(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_inc_acquire(v);
}
# 554 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_inc_release(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_inc_release(v);
}
# 572 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_inc_relaxed(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_inc_relaxed(v);
}
# 589 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_dec(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_dec(v);
}
# 606 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_dec_return(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_return(v);
}
# 624 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_dec_return_acquire(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_return_acquire(v);
}
# 641 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_dec_return_release(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_return_release(v);
}
# 659 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_dec_return_relaxed(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_return_relaxed(v);
}
# 676 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_dec(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_dec(v);
}
# 694 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_dec_acquire(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_dec_acquire(v);
}
# 711 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_dec_release(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_dec_release(v);
}
# 729 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_dec_relaxed(atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_dec_relaxed(v);
}
# 747 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_and(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_and(i, v);
}
# 765 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_and(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_and(i, v);
}
# 784 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_and_acquire(i, v);
}
# 802 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_and_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_and_release(i, v);
}
# 821 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_and_relaxed(i, v);
}
# 839 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_andnot(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_andnot(i, v);
}
# 857 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_andnot(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_andnot(i, v);
}
# 876 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_andnot_acquire(i, v);
}
# 894 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_andnot_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_andnot_release(i, v);
}
# 913 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_andnot_relaxed(i, v);
}
# 931 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_or(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_or(i, v);
}
# 949 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_or(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_or(i, v);
}
# 968 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_or_acquire(i, v);
}
# 986 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_or_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_or_release(i, v);
}
# 1005 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_or_relaxed(i, v);
}
# 1023 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_xor(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_xor(i, v);
}
# 1041 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_xor(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_xor(i, v);
}
# 1060 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_xor_acquire(i, v);
}
# 1078 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_xor_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_xor_release(i, v);
}
# 1097 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_xor_relaxed(i, v);
}
# 1115 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_xchg(atomic_t *v, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_xchg(v, new);
}
# 1134 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_xchg_acquire(atomic_t *v, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_xchg_acquire(v, new);
}
# 1152 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_xchg_release(atomic_t *v, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_xchg_release(v, new);
}
# 1171 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_xchg_relaxed(atomic_t *v, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_xchg_relaxed(v, new);
}
# 1191 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_cmpxchg(v, old, new);
}
# 1212 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_cmpxchg_acquire(v, old, new);
}
# 1232 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_cmpxchg_release(v, old, new);
}
# 1253 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_cmpxchg_relaxed(v, old, new);
}
# 1274 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_try_cmpxchg(v, old, new);
}
# 1297 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_try_cmpxchg_acquire(v, old, new);
}
# 1319 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_try_cmpxchg_release(v, old, new);
}
# 1342 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_try_cmpxchg_relaxed(v, old, new);
}
# 1361 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_sub_and_test(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_sub_and_test(i, v);
}
# 1379 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_dec_and_test(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_and_test(v);
}
# 1397 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_inc_and_test(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_and_test(v);
}
# 1416 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_add_negative(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_negative(i, v);
}
# 1435 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_add_negative_acquire(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_negative_acquire(i, v);
}
# 1453 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_add_negative_release(int i, atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_negative_release(i, v);
}
# 1472 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_add_negative_relaxed(int i, atomic_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_negative_relaxed(i, v);
}
# 1492 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_fetch_add_unless(v, a, u);
}
# 1513 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_add_unless(atomic_t *v, int a, int u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_add_unless(v, a, u);
}
# 1532 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_inc_not_zero(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_not_zero(v);
}
# 1551 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_inc_unless_negative(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_inc_unless_negative(v);
}
# 1570 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_dec_unless_positive(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_unless_positive(v);
}
# 1589 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
atomic_dec_if_positive(atomic_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_dec_if_positive(v);
}
# 1607 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_read(const atomic64_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic64_read(v);
}
# 1624 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_read_acquire(const atomic64_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic64_read_acquire(v);
}
# 1642 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_set(atomic64_t *v, s64 i)
{
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic64_set(v, i);
}
# 1660 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_set_release(atomic64_t *v, s64 i)
{
 do { } while (0);
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic64_set_release(v, i);
}
# 1679 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_add(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_add(i, v);
}
# 1697 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_add_return(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_return(i, v);
}
# 1716 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_return_acquire(i, v);
}
# 1734 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_add_return_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_return_release(i, v);
}
# 1753 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_return_relaxed(i, v);
}
# 1771 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_add(i, v);
}
# 1790 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_add_acquire(i, v);
}
# 1808 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_add_release(i, v);
}
# 1827 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_add_relaxed(i, v);
}
# 1845 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_sub(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_sub(i, v);
}
# 1863 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_sub_return(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_sub_return(i, v);
}
# 1882 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_sub_return_acquire(i, v);
}
# 1900 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_sub_return_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_sub_return_release(i, v);
}
# 1919 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_sub_return_relaxed(i, v);
}
# 1937 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_sub(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_sub(i, v);
}
# 1956 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_sub_acquire(i, v);
}
# 1974 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_sub_release(i, v);
}
# 1993 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_sub_relaxed(i, v);
}
# 2010 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_inc(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_inc(v);
}
# 2027 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_inc_return(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_return(v);
}
# 2045 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_inc_return_acquire(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_return_acquire(v);
}
# 2062 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_inc_return_release(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_return_release(v);
}
# 2080 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_return_relaxed(v);
}
# 2097 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_inc(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_inc(v);
}
# 2115 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_inc_acquire(v);
}
# 2132 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_inc_release(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_inc_release(v);
}
# 2150 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_inc_relaxed(v);
}
# 2167 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_dec(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_dec(v);
}
# 2184 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_dec_return(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_return(v);
}
# 2202 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_dec_return_acquire(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_return_acquire(v);
}
# 2219 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_dec_return_release(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_return_release(v);
}
# 2237 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_return_relaxed(v);
}
# 2254 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_dec(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_dec(v);
}
# 2272 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_dec_acquire(v);
}
# 2289 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_dec_release(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_dec_release(v);
}
# 2307 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_dec_relaxed(v);
}
# 2325 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_and(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_and(i, v);
}
# 2343 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_and(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_and(i, v);
}
# 2362 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_and_acquire(i, v);
}
# 2380 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_and_release(i, v);
}
# 2399 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_and_relaxed(i, v);
}
# 2417 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_andnot(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_andnot(i, v);
}
# 2435 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_andnot(i, v);
}
# 2454 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_andnot_acquire(i, v);
}
# 2472 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_andnot_release(i, v);
}
# 2491 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_andnot_relaxed(i, v);
}
# 2509 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_or(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_or(i, v);
}
# 2527 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_or(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_or(i, v);
}
# 2546 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_or_acquire(i, v);
}
# 2564 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_or_release(i, v);
}
# 2583 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_or_relaxed(i, v);
}
# 2601 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic64_xor(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic64_xor(i, v);
}
# 2619 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_xor(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_xor(i, v);
}
# 2638 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_xor_acquire(i, v);
}
# 2656 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_xor_release(i, v);
}
# 2675 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_xor_relaxed(i, v);
}
# 2693 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_xchg(atomic64_t *v, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_xchg(v, new);
}
# 2712 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_xchg_acquire(v, new);
}
# 2730 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_xchg_release(atomic64_t *v, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_xchg_release(v, new);
}
# 2749 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_xchg_relaxed(v, new);
}
# 2769 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_cmpxchg(v, old, new);
}
# 2790 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_cmpxchg_acquire(v, old, new);
}
# 2810 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_cmpxchg_release(v, old, new);
}
# 2831 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
# 2852 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic64_try_cmpxchg(v, old, new);
}
# 2875 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic64_try_cmpxchg_acquire(v, old, new);
}
# 2897 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic64_try_cmpxchg_release(v, old, new);
}
# 2920 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
}
# 2939 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_sub_and_test(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_sub_and_test(i, v);
}
# 2957 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_dec_and_test(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_and_test(v);
}
# 2975 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_inc_and_test(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_and_test(v);
}
# 2994 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_add_negative(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_negative(i, v);
}
# 3013 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_negative_acquire(i, v);
}
# 3031 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_add_negative_release(s64 i, atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_negative_release(i, v);
}
# 3050 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_negative_relaxed(i, v);
}
# 3070 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_fetch_add_unless(v, a, u);
}
# 3091 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_add_unless(v, a, u);
}
# 3110 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_inc_not_zero(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_not_zero(v);
}
# 3129 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_inc_unless_negative(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_inc_unless_negative(v);
}
# 3148 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic64_dec_unless_positive(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_unless_positive(v);
}
# 3167 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) s64
atomic64_dec_if_positive(atomic64_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic64_dec_if_positive(v);
}
# 3185 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_read(const atomic_long_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic_long_read(v);
}
# 3202 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_read_acquire(const atomic_long_t *v)
{
 instrument_atomic_read(v, sizeof(*v));
 return raw_atomic_long_read_acquire(v);
}
# 3220 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_set(atomic_long_t *v, long i)
{
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic_long_set(v, i);
}
# 3238 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_set_release(atomic_long_t *v, long i)
{
 do { } while (0);
 instrument_atomic_write(v, sizeof(*v));
 raw_atomic_long_set_release(v, i);
}
# 3257 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_add(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_add(i, v);
}
# 3275 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_add_return(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_return(i, v);
}
# 3294 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_return_acquire(i, v);
}
# 3312 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_add_return_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_return_release(i, v);
}
# 3331 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_return_relaxed(i, v);
}
# 3349 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_add(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_add(i, v);
}
# 3368 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_add_acquire(i, v);
}
# 3386 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_add_release(i, v);
}
# 3405 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_add_relaxed(i, v);
}
# 3423 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_sub(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_sub(i, v);
}
# 3441 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_sub_return(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_sub_return(i, v);
}
# 3460 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_sub_return_acquire(i, v);
}
# 3478 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_sub_return_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_sub_return_release(i, v);
}
# 3497 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_sub_return_relaxed(i, v);
}
# 3515 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_sub(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_sub(i, v);
}
# 3534 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_sub_acquire(i, v);
}
# 3552 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_sub_release(i, v);
}
# 3571 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_sub_relaxed(i, v);
}
# 3588 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_inc(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_inc(v);
}
# 3605 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_inc_return(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_return(v);
}
# 3623 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_return_acquire(v);
}
# 3640 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_inc_return_release(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_return_release(v);
}
# 3658 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_return_relaxed(v);
}
# 3675 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_inc(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_inc(v);
}
# 3693 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_inc_acquire(v);
}
# 3710 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_inc_release(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_inc_release(v);
}
# 3728 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_inc_relaxed(v);
}
# 3745 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_dec(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_dec(v);
}
# 3762 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_dec_return(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_return(v);
}
# 3780 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_return_acquire(v);
}
# 3797 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_dec_return_release(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_return_release(v);
}
# 3815 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_return_relaxed(v);
}
# 3832 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_dec(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_dec(v);
}
# 3850 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_dec_acquire(v);
}
# 3867 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_dec_release(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_dec_release(v);
}
# 3885 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_dec_relaxed(v);
}
# 3903 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_and(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_and(i, v);
}
# 3921 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_and(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_and(i, v);
}
# 3940 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_and_acquire(i, v);
}
# 3958 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_and_release(i, v);
}
# 3977 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_and_relaxed(i, v);
}
# 3995 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_andnot(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_andnot(i, v);
}
# 4013 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_andnot(i, v);
}
# 4032 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_andnot_acquire(i, v);
}
# 4050 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_andnot_release(i, v);
}
# 4069 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_andnot_relaxed(i, v);
}
# 4087 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_or(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_or(i, v);
}
# 4105 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_or(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_or(i, v);
}
# 4124 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_or_acquire(i, v);
}
# 4142 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_or_release(i, v);
}
# 4161 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_or_relaxed(i, v);
}
# 4179 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
atomic_long_xor(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 raw_atomic_long_xor(i, v);
}
# 4197 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_xor(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_xor(i, v);
}
# 4216 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_xor_acquire(i, v);
}
# 4234 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_xor_release(i, v);
}
# 4253 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_xor_relaxed(i, v);
}
# 4271 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_xchg(atomic_long_t *v, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_xchg(v, new);
}
# 4290 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_xchg_acquire(v, new);
}
# 4308 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_xchg_release(atomic_long_t *v, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_xchg_release(v, new);
}
# 4327 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_xchg_relaxed(v, new);
}
# 4347 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_cmpxchg(v, old, new);
}
# 4368 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_cmpxchg_acquire(v, old, new);
}
# 4388 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_cmpxchg_release(v, old, new);
}
# 4409 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_cmpxchg_relaxed(v, old, new);
}
# 4430 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_long_try_cmpxchg(v, old, new);
}
# 4453 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
}
# 4475 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_long_try_cmpxchg_release(v, old, new);
}
# 4498 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
 instrument_atomic_read_write(v, sizeof(*v));
 instrument_atomic_read_write(old, sizeof(*old));
 return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
}
# 4517 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_sub_and_test(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_sub_and_test(i, v);
}
# 4535 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_dec_and_test(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_and_test(v);
}
# 4553 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_inc_and_test(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_and_test(v);
}
# 4572 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_add_negative(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_negative(i, v);
}
# 4591 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_negative_acquire(i, v);
}
# 4609 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_add_negative_release(long i, atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_negative_release(i, v);
}
# 4628 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_negative_relaxed(i, v);
}
# 4648 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_fetch_add_unless(v, a, u);
}
# 4669 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_add_unless(v, a, u);
}
# 4688 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_inc_not_zero(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_not_zero(v);
}
# 4707 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_inc_unless_negative(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_inc_unless_negative(v);
}
# 4726 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool
atomic_long_dec_unless_positive(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_unless_positive(v);
}
# 4745 "../include/linux/atomic/atomic-instrumented.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
atomic_long_dec_if_positive(atomic_long_t *v)
{
 do { } while (0);
 instrument_atomic_read_write(v, sizeof(*v));
 return raw_atomic_long_dec_if_positive(v);
}
# 83 "../include/linux/atomic.h" 2
# 6 "../include/asm-generic/bitops/lock.h" 2

# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 8 "../include/asm-generic/bitops/lock.h" 2
# 18 "../include/asm-generic/bitops/lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p)
{
 long old;
 unsigned long mask = ((((1UL))) << ((nr) % 32));

 p += ((nr) / 32);
 if (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_4(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*p) == sizeof(char) || sizeof(*p) == sizeof(short) || sizeof(*p) == sizeof(int) || sizeof(*p) == sizeof(long)) || sizeof(*p) == sizeof(long long))) __compiletime_assert_4(); } while (0); (*(const volatile typeof( _Generic((*p), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*p))) *)&(*p)); }) & mask)
  return 1;

 old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
 return !!(old & mask);
}
# 40 "../include/asm-generic/bitops/lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
{
 p += ((nr) / 32);
 raw_atomic_long_fetch_andnot_release(((((1UL))) << ((nr) % 32)), (atomic_long_t *)p);
}
# 58 "../include/asm-generic/bitops/lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
{
 unsigned long old;

 p += ((nr) / 32);
 old = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_5(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*p) == sizeof(char) || sizeof(*p) == sizeof(short) || sizeof(*p) == sizeof(int) || sizeof(*p) == sizeof(long)) || sizeof(*p) == sizeof(long long))) __compiletime_assert_5(); } while (0); (*(const volatile typeof( _Generic((*p), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*p))) *)&(*p)); });
 old &= ~((((1UL))) << ((nr) % 32));
 raw_atomic_long_set_release((atomic_long_t *)p, old);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_xor_unlock_is_negative_byte(unsigned long mask,
  volatile unsigned long *p)
{
 long old;

 old = raw_atomic_long_fetch_xor_release(mask, (atomic_long_t *)p);
 return !!(old & ((((1UL))) << (7)));
}


# 1 "../include/asm-generic/bitops/instrumented-lock.h" 1
# 23 "../include/asm-generic/bitops/instrumented-lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_bit_unlock(long nr, volatile unsigned long *addr)
{
 do { } while (0);
 instrument_atomic_write(addr + ((nr) / 32), sizeof(long));
 arch_clear_bit_unlock(nr, addr);
}
# 39 "../include/asm-generic/bitops/instrumented-lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __clear_bit_unlock(long nr, volatile unsigned long *addr)
{
 do { } while (0);
 instrument_write(addr + ((nr) / 32), sizeof(long));
 arch___clear_bit_unlock(nr, addr);
}
# 55 "../include/asm-generic/bitops/instrumented-lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
{
 instrument_atomic_read_write(addr + ((nr) / 32), sizeof(long));
 return arch_test_and_set_bit_lock(nr, addr);
}
# 75 "../include/asm-generic/bitops/instrumented-lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xor_unlock_is_negative_byte(unsigned long mask,
  volatile unsigned long *addr)
{
 do { } while (0);
 instrument_atomic_write(addr, sizeof(long));
 return arch_xor_unlock_is_negative_byte(mask, addr);
}
# 81 "../include/asm-generic/bitops/lock.h" 2
# 294 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "../include/asm-generic/bitops/non-instrumented-non-atomic.h" 1
# 295 "../arch/hexagon/include/asm/bitops.h" 2

# 1 "../include/asm-generic/bitops/fls64.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 6 "../include/asm-generic/bitops/fls64.h" 2
# 19 "../include/asm-generic/bitops/fls64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int fls64(__u64 x)
{
 __u32 h = x >> 32;
 if (h)
  return fls(h) + 32;
 return fls(x);
}
# 297 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "../include/asm-generic/bitops/sched.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 7 "../include/asm-generic/bitops/sched.h" 2






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sched_find_first_bit(const unsigned long *b)
{





 if (b[0])
  return __ffs(b[0]);
 if (b[1])
  return __ffs(b[1]) + 32;
 if (b[2])
  return __ffs(b[2]) + 64;
 return __ffs(b[3]) + 96;



}
# 298 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "../include/asm-generic/bitops/hweight.h" 1




# 1 "../include/asm-generic/bitops/arch_hweight.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 6 "../include/asm-generic/bitops/arch_hweight.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __arch_hweight32(unsigned int w)
{
 return __sw_hweight32(w);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __arch_hweight16(unsigned int w)
{
 return __sw_hweight16(w);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __arch_hweight8(unsigned int w)
{
 return __sw_hweight8(w);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __arch_hweight64(__u64 w)
{
 return __sw_hweight64(w);
}
# 6 "../include/asm-generic/bitops/hweight.h" 2
# 1 "../include/asm-generic/bitops/const_hweight.h" 1
# 7 "../include/asm-generic/bitops/hweight.h" 2
# 299 "../arch/hexagon/include/asm/bitops.h" 2

# 1 "../include/asm-generic/bitops/le.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 6 "../include/asm-generic/bitops/le.h" 2
# 19 "../include/asm-generic/bitops/le.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_bit_le(int nr, const void *addr)
{
 return ((__builtin_constant_p(nr ^ 0) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? const_test_bit(nr ^ 0, addr) : arch_test_bit(nr ^ 0, addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_bit_le(int nr, void *addr)
{
 set_bit(nr ^ 0, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_bit_le(int nr, void *addr)
{
 clear_bit(nr ^ 0, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __set_bit_le(int nr, void *addr)
{
 ((__builtin_constant_p(nr ^ 0) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? generic___set_bit(nr ^ 0, addr) : arch___set_bit(nr ^ 0, addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __clear_bit_le(int nr, void *addr)
{
 ((__builtin_constant_p(nr ^ 0) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? generic___clear_bit(nr ^ 0, addr) : arch___clear_bit(nr ^ 0, addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_set_bit_le(int nr, void *addr)
{
 return test_and_set_bit(nr ^ 0, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_clear_bit_le(int nr, void *addr)
{
 return test_and_clear_bit(nr ^ 0, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __test_and_set_bit_le(int nr, void *addr)
{
 return ((__builtin_constant_p(nr ^ 0) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? generic___test_and_set_bit(nr ^ 0, addr) : arch___test_and_set_bit(nr ^ 0, addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __test_and_clear_bit_le(int nr, void *addr)
{
 return ((__builtin_constant_p(nr ^ 0) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? generic___test_and_clear_bit(nr ^ 0, addr) : arch___test_and_clear_bit(nr ^ 0, addr));
}
# 301 "../arch/hexagon/include/asm/bitops.h" 2
# 1 "../include/asm-generic/bitops/ext2-atomic.h" 1
# 302 "../arch/hexagon/include/asm/bitops.h" 2
# 69 "../include/linux/bitops.h" 2







_Static_assert(__builtin_types_compatible_p(typeof(arch___set_bit), typeof(generic___set_bit)) && __builtin_types_compatible_p(typeof(generic___set_bit), typeof(generic___set_bit)) && __builtin_types_compatible_p(typeof(arch___set_bit), typeof(generic___set_bit)), "__same_type(arch___set_bit, generic___set_bit) && __same_type(const___set_bit, generic___set_bit) && __same_type(___set_bit, generic___set_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch___clear_bit), typeof(generic___clear_bit)) && __builtin_types_compatible_p(typeof(generic___clear_bit), typeof(generic___clear_bit)) && __builtin_types_compatible_p(typeof(arch___clear_bit), typeof(generic___clear_bit)), "__same_type(arch___clear_bit, generic___clear_bit) && __same_type(const___clear_bit, generic___clear_bit) && __same_type(___clear_bit, generic___clear_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch___change_bit), typeof(generic___change_bit)) && __builtin_types_compatible_p(typeof(generic___change_bit), typeof(generic___change_bit)) && __builtin_types_compatible_p(typeof(arch___change_bit), typeof(generic___change_bit)), "__same_type(arch___change_bit, generic___change_bit) && __same_type(const___change_bit, generic___change_bit) && __same_type(___change_bit, generic___change_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch___test_and_set_bit), typeof(generic___test_and_set_bit)) && __builtin_types_compatible_p(typeof(generic___test_and_set_bit), typeof(generic___test_and_set_bit)) && __builtin_types_compatible_p(typeof(arch___test_and_set_bit), typeof(generic___test_and_set_bit)), "__same_type(arch___test_and_set_bit, generic___test_and_set_bit) && __same_type(const___test_and_set_bit, generic___test_and_set_bit) && __same_type(___test_and_set_bit, generic___test_and_set_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch___test_and_clear_bit), typeof(generic___test_and_clear_bit)) && __builtin_types_compatible_p(typeof(generic___test_and_clear_bit), typeof(generic___test_and_clear_bit)) && __builtin_types_compatible_p(typeof(arch___test_and_clear_bit), typeof(generic___test_and_clear_bit)), "__same_type(arch___test_and_clear_bit, generic___test_and_clear_bit) && __same_type(const___test_and_clear_bit, generic___test_and_clear_bit) && __same_type(___test_and_clear_bit, generic___test_and_clear_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch___test_and_change_bit), typeof(generic___test_and_change_bit)) && __builtin_types_compatible_p(typeof(generic___test_and_change_bit), typeof(generic___test_and_change_bit)) && __builtin_types_compatible_p(typeof(arch___test_and_change_bit), typeof(generic___test_and_change_bit)), "__same_type(arch___test_and_change_bit, generic___test_and_change_bit) && __same_type(const___test_and_change_bit, generic___test_and_change_bit) && __same_type(___test_and_change_bit, generic___test_and_change_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch_test_bit), typeof(generic_test_bit)) && __builtin_types_compatible_p(typeof(const_test_bit), typeof(generic_test_bit)) && __builtin_types_compatible_p(typeof(arch_test_bit), typeof(generic_test_bit)), "__same_type(arch_test_bit, generic_test_bit) && __same_type(const_test_bit, generic_test_bit) && __same_type(_test_bit, generic_test_bit)");
_Static_assert(__builtin_types_compatible_p(typeof(arch_test_bit_acquire), typeof(generic_test_bit_acquire)) && __builtin_types_compatible_p(typeof(generic_test_bit_acquire), typeof(generic_test_bit_acquire)) && __builtin_types_compatible_p(typeof(arch_test_bit_acquire), typeof(generic_test_bit_acquire)), "__same_type(arch_test_bit_acquire, generic_test_bit_acquire) && __same_type(const_test_bit_acquire, generic_test_bit_acquire) && __same_type(_test_bit_acquire, generic_test_bit_acquire)");



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_bitmask_order(unsigned int count)
{
 int order;

 order = fls(count);
 return order;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long hweight_long(unsigned long w)
{
 return sizeof(w) == 4 ? (__builtin_constant_p(w) ? ((((unsigned int) ((!!((w) & (1ULL << 0))) + (!!((w) & (1ULL << 1))) + (!!((w) & (1ULL << 2))) + (!!((w) & (1ULL << 3))) + (!!((w) & (1ULL << 4))) + (!!((w) & (1ULL << 5))) + (!!((w) & (1ULL << 6))) + (!!((w) & (1ULL << 7))))) + ((unsigned int) ((!!(((w) >> 8) & (1ULL << 0))) + (!!(((w) >> 8) & (1ULL << 1))) + (!!(((w) >> 8) & (1ULL << 2))) + (!!(((w) >> 8) & (1ULL << 3))) + (!!(((w) >> 8) & (1ULL << 4))) + (!!(((w) >> 8) & (1ULL << 5))) + (!!(((w) >> 8) & (1ULL << 6))) + (!!(((w) >> 8) & (1ULL << 7)))))) + (((unsigned int) ((!!(((w) >> 16) & (1ULL << 0))) + (!!(((w) >> 16) & (1ULL << 1))) + (!!(((w) >> 16) & (1ULL << 2))) + (!!(((w) >> 16) & (1ULL << 3))) + (!!(((w) >> 16) & (1ULL << 4))) + (!!(((w) >> 16) & (1ULL << 5))) + (!!(((w) >> 16) & (1ULL << 6))) + (!!(((w) >> 16) & (1ULL << 7))))) + ((unsigned int) ((!!((((w) >> 16) >> 8) & (1ULL << 0))) + (!!((((w) >> 16) >> 8) & (1ULL << 1))) + (!!((((w) >> 16) >> 8) & (1ULL << 2))) + (!!((((w) >> 16) >> 8) & (1ULL << 3))) + (!!((((w) >> 16) >> 8) & (1ULL << 4))) + (!!((((w) >> 16) >> 8) & (1ULL << 5))) + (!!((((w) >> 16) >> 8) & (1ULL << 6))) + (!!((((w) >> 16) >> 8) & (1ULL << 7))))))) : __arch_hweight32(w)) : (__builtin_constant_p((__u64)w) ? 
(((((unsigned int) ((!!(((__u64)w) & (1ULL << 0))) + (!!(((__u64)w) & (1ULL << 1))) + (!!(((__u64)w) & (1ULL << 2))) + (!!(((__u64)w) & (1ULL << 3))) + (!!(((__u64)w) & (1ULL << 4))) + (!!(((__u64)w) & (1ULL << 5))) + (!!(((__u64)w) & (1ULL << 6))) + (!!(((__u64)w) & (1ULL << 7))))) + ((unsigned int) ((!!((((__u64)w) >> 8) & (1ULL << 0))) + (!!((((__u64)w) >> 8) & (1ULL << 1))) + (!!((((__u64)w) >> 8) & (1ULL << 2))) + (!!((((__u64)w) >> 8) & (1ULL << 3))) + (!!((((__u64)w) >> 8) & (1ULL << 4))) + (!!((((__u64)w) >> 8) & (1ULL << 5))) + (!!((((__u64)w) >> 8) & (1ULL << 6))) + (!!((((__u64)w) >> 8) & (1ULL << 7)))))) + (((unsigned int) ((!!((((__u64)w) >> 16) & (1ULL << 0))) + (!!((((__u64)w) >> 16) & (1ULL << 1))) + (!!((((__u64)w) >> 16) & (1ULL << 2))) + (!!((((__u64)w) >> 16) & (1ULL << 3))) + (!!((((__u64)w) >> 16) & (1ULL << 4))) + (!!((((__u64)w) >> 16) & (1ULL << 5))) + (!!((((__u64)w) >> 16) & (1ULL << 6))) + (!!((((__u64)w) >> 16) & (1ULL << 7))))) + ((unsigned int) ((!!(((((__u64)w) >> 16) >> 8) & (1ULL << 0))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 1))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 2))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 3))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 4))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 5))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 6))) + (!!(((((__u64)w) >> 16) >> 8) & (1ULL << 7))))))) + ((((unsigned int) ((!!((((__u64)w) >> 32) & (1ULL << 0))) + (!!((((__u64)w) >> 32) & (1ULL << 1))) + (!!((((__u64)w) >> 32) & (1ULL << 2))) + (!!((((__u64)w) >> 32) & (1ULL << 3))) + (!!((((__u64)w) >> 32) & (1ULL << 4))) + (!!((((__u64)w) >> 32) & (1ULL << 5))) + (!!((((__u64)w) >> 32) & (1ULL << 6))) + (!!((((__u64)w) >> 32) & (1ULL << 7))))) + ((unsigned int) ((!!(((((__u64)w) >> 32) >> 8) & (1ULL << 0))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 1))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 2))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 3))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 4))) + 
(!!(((((__u64)w) >> 32) >> 8) & (1ULL << 5))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 6))) + (!!(((((__u64)w) >> 32) >> 8) & (1ULL << 7)))))) + (((unsigned int) ((!!(((((__u64)w) >> 32) >> 16) & (1ULL << 0))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 1))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 2))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 3))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 4))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 5))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 6))) + (!!(((((__u64)w) >> 32) >> 16) & (1ULL << 7))))) + ((unsigned int) ((!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 0))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 1))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 2))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 3))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 4))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 5))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 6))) + (!!((((((__u64)w) >> 32) >> 16) >> 8) & (1ULL << 7)))))))) : __arch_hweight64((__u64)w));
}
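The giant constant-folding expression above is the expanded `hweight_long()`; its semantics are simply a population count of the set bits in an `unsigned long`. A minimal standalone sketch of that semantics (the helper name `popcount_long` is ours, not the kernel's):

```c
#include <assert.h>
#include <limits.h>

/* Population count: number of set bits in an unsigned long.
 * Mirrors the semantics of the expanded hweight_long() above;
 * Kernighan's trick clears the lowest set bit per iteration. */
static unsigned int popcount_long(unsigned long w)
{
	unsigned int count = 0;

	while (w) {
		w &= w - 1;	/* clear the lowest set bit */
		count++;
	}
	return count;
}
```

The kernel version instead folds constant inputs at compile time and falls back to `__arch_hweight32()`/`__arch_hweight64()` for runtime values.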






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u64 rol64(__u64 word, unsigned int shift)
{
 return (word << (shift & 63)) | (word >> ((-shift) & 63));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u64 ror64(__u64 word, unsigned int shift)
{
 return (word >> (shift & 63)) | (word << ((-shift) & 63));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 rol32(__u32 word, unsigned int shift)
{
 return (word << (shift & 31)) | (word >> ((-shift) & 31));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 ror32(__u32 word, unsigned int shift)
{
 return (word >> (shift & 31)) | (word << ((-shift) & 31));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u16 rol16(__u16 word, unsigned int shift)
{
 return (word << (shift & 15)) | (word >> ((-shift) & 15));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u16 ror16(__u16 word, unsigned int shift)
{
 return (word >> (shift & 15)) | (word << ((-shift) & 15));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 rol8(__u8 word, unsigned int shift)
{
 return (word << (shift & 7)) | (word >> ((-shift) & 7));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 ror8(__u8 word, unsigned int shift)
{
 return (word >> (shift & 7)) | (word << ((-shift) & 7));
}
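The `rol64`/`ror64` down to `rol8`/`ror8` family above all share one pattern: the shift count is masked by the word width minus one, so a rotate by 0 or by the full width stays well defined. A standalone sketch of the 32-bit case:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate a 32-bit word left by 'shift' bits. Masking the shift
 * with 31, as rol32() above does, keeps both shift amounts in
 * range and so avoids undefined behaviour for shift == 0 or
 * shift >= 32 (shift & 31 == 0 makes both shifts a no-op shift
 * by 0, whose OR is just the original word). */
static uint32_t rotl32(uint32_t word, unsigned int shift)
{
	return (word << (shift & 31)) | (word >> ((-shift) & 31));
}
```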
# 187 "../include/linux/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __s32 sign_extend32(__u32 value, int index)
{
 __u8 shift = 31 - index;
 return (__s32)(value << shift) >> shift;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __s64 sign_extend64(__u64 value, int index)
{
 __u8 shift = 63 - index;
 return (__s64)(value << shift) >> shift;
}
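`sign_extend32()`/`sign_extend64()` above treat bit `index` as the sign bit: shifting the value up so that bit lands in the top position, then arithmetic-shifting back down, replicates it through the high bits. A sketch of the 32-bit case (assuming, as the kernel does, that right shift of a negative signed value is an arithmetic shift on the target compiler):

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low (index + 1) bits of value, with bit 'index'
 * as the sign bit, as sign_extend32() above does. Relies on
 * arithmetic right shift of signed values (GCC/Clang behaviour). */
static int32_t sext32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}
```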

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int fls_long(unsigned long l)
{
 if (sizeof(l) == 4)
  return fls(l);
 return fls64(l);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_count_order(unsigned int count)
{
 if (count == 0)
  return -1;

 return fls(--count);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_count_order_long(unsigned long l)
{
 if (l == 0UL)
  return -1;
 return (int)fls_long(--l);
}
# 240 "../include/linux/bitops.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __ffs64(u64 word)
{

 if (((u32)word) == 0UL)
  return __ffs((u32)(word >> 32)) + 32;



 return __ffs((unsigned long)word);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int fns(unsigned long word, unsigned int n)
{
 while (word && n--)
  word &= word - 1;

 return word ? __ffs(word) : 32;
}
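`fns()` above finds the n-th (0-based) set bit by clearing the lowest set bit n times and then taking `__ffs()` of what remains; the literal `32` is `BITS_PER_LONG` on this 32-bit hexagon config. A portable sketch of the same idea (helper name `nth_set_bit` is ours):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_ULONG (sizeof(unsigned long) * CHAR_BIT)

/* Position of the n-th (0-based) set bit in word, or BITS_PER_ULONG
 * if word has fewer than n + 1 set bits; mirrors fns() above. */
static unsigned int nth_set_bit(unsigned long word, unsigned int n)
{
	unsigned int pos;

	while (word && n--)
		word &= word - 1;	/* drop the lowest set bit */

	if (!word)
		return BITS_PER_ULONG;

	/* scan for the lowest remaining set bit (__ffs() equivalent) */
	for (pos = 0; !(word & (1UL << pos)); pos++)
		;
	return pos;
}
```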
# 9 "../include/linux/bitmap.h" 2
# 1 "../include/linux/cleanup.h" 1
# 74 "../include/linux/cleanup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
const volatile void * __must_check_fn(const volatile void *val)
{ return val; }
# 10 "../include/linux/bitmap.h" 2
# 1 "../include/linux/errno.h" 1




# 1 "../include/uapi/linux/errno.h" 1
# 1 "./arch/hexagon/include/generated/uapi/asm/errno.h" 1
# 1 "../include/uapi/asm-generic/errno.h" 1




# 1 "../include/uapi/asm-generic/errno-base.h" 1
# 6 "../include/uapi/asm-generic/errno.h" 2
# 2 "./arch/hexagon/include/generated/uapi/asm/errno.h" 2
# 2 "../include/uapi/linux/errno.h" 2
# 6 "../include/linux/errno.h" 2
# 11 "../include/linux/bitmap.h" 2
# 1 "../include/linux/find.h" 1
# 11 "../include/linux/find.h"
unsigned long _find_next_bit(const unsigned long *addr1, unsigned long nbits,
    unsigned long start);
unsigned long _find_next_and_bit(const unsigned long *addr1, const unsigned long *addr2,
     unsigned long nbits, unsigned long start);
unsigned long _find_next_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
     unsigned long nbits, unsigned long start);
unsigned long _find_next_or_bit(const unsigned long *addr1, const unsigned long *addr2,
     unsigned long nbits, unsigned long start);
unsigned long _find_next_zero_bit(const unsigned long *addr, unsigned long nbits,
      unsigned long start);
extern unsigned long _find_first_bit(const unsigned long *addr, unsigned long size);
unsigned long __find_nth_bit(const unsigned long *addr, unsigned long size, unsigned long n);
unsigned long __find_nth_and_bit(const unsigned long *addr1, const unsigned long *addr2,
    unsigned long size, unsigned long n);
unsigned long __find_nth_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
     unsigned long size, unsigned long n);
unsigned long __find_nth_and_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
     const unsigned long *addr3, unsigned long size,
     unsigned long n);
extern unsigned long _find_first_and_bit(const unsigned long *addr1,
      const unsigned long *addr2, unsigned long size);
unsigned long _find_first_and_and_bit(const unsigned long *addr1, const unsigned long *addr2,
          const unsigned long *addr3, unsigned long size);
extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
# 55 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
       unsigned long offset)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val;

  if (__builtin_expect(!!(offset >= size), 0))
   return size;

  val = *addr & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((offset) > (size - 1)) * 0l)) : (int *)8))), (offset) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (offset)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));
  return val ? __ffs(val) : size;
 }

 return _find_next_bit(addr, size, offset);
}
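The dense expression inside `find_next_bit()` above is an expanded `GENMASK()` plus a compile-time bounds check: for small constant sizes the bitmap fits one word, so the search reduces to masking off bits below `offset` and above `size - 1` and taking `__ffs()`. A loop-based sketch of that single-word fast path (valid for `size` up to the word width):

```c
#include <assert.h>
#include <limits.h>

/* Next set bit at position >= offset in a single-word bitmap of
 * 'size' bits (size <= bits per long), or 'size' if none. The real
 * find_next_bit() fast path computes the same answer with a
 * GENMASK(size - 1, offset) mask and __ffs() instead of a loop. */
static unsigned long next_bit_word(unsigned long word, unsigned long size,
				   unsigned long offset)
{
	for (; offset < size; offset++)
		if (word & (1UL << offset))
			return offset;
	return size;
}
```

The same masking pattern recurs in the `find_next_and_bit()`, `find_next_andnot_bit()`, `find_next_or_bit()` and `find_next_zero_bit()` fast paths below, differing only in how the source words are combined before masking.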
# 84 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_and_bit(const unsigned long *addr1,
  const unsigned long *addr2, unsigned long size,
  unsigned long offset)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val;

  if (__builtin_expect(!!(offset >= size), 0))
   return size;

  val = *addr1 & *addr2 & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((offset) > (size - 1)) * 0l)) : (int *)8))), (offset) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (offset)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));
  return val ? __ffs(val) : size;
 }

 return _find_next_and_bit(addr1, addr2, size, offset);
}
# 115 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_andnot_bit(const unsigned long *addr1,
  const unsigned long *addr2, unsigned long size,
  unsigned long offset)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val;

  if (__builtin_expect(!!(offset >= size), 0))
   return size;

  val = *addr1 & ~*addr2 & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((offset) > (size - 1)) * 0l)) : (int *)8))), (offset) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (offset)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));
  return val ? __ffs(val) : size;
 }

 return _find_next_andnot_bit(addr1, addr2, size, offset);
}
# 145 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_or_bit(const unsigned long *addr1,
  const unsigned long *addr2, unsigned long size,
  unsigned long offset)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val;

  if (__builtin_expect(!!(offset >= size), 0))
   return size;

  val = (*addr1 | *addr2) & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((offset) > (size - 1)) * 0l)) : (int *)8))), (offset) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (offset)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));
  return val ? __ffs(val) : size;
 }

 return _find_next_or_bit(addr1, addr2, size, offset);
}
# 174 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
     unsigned long offset)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val;

  if (__builtin_expect(!!(offset >= size), 0))
   return size;

  val = *addr | ~((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((offset) > (size - 1)) * 0l)) : (int *)8))), (offset) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (offset)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));
  return val == ~0UL ? size : ffz(val);
 }

 return _find_next_zero_bit(addr, size, offset);
}
# 201 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? __ffs(val) : size;
 }

 return _find_first_bit(addr, size);
}
# 227 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_nth_bit(const unsigned long *addr, unsigned long size, unsigned long n)
{
 if (n >= size)
  return size;

 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? fns(val, n) : size;
 }

 return __find_nth_bit(addr, size, n);
}
# 252 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_nth_and_bit(const unsigned long *addr1, const unsigned long *addr2,
    unsigned long size, unsigned long n)
{
 if (n >= size)
  return size;

 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr1 & *addr2 & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? fns(val, n) : size;
 }

 return __find_nth_and_bit(addr1, addr2, size, n);
}
# 279 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_nth_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
    unsigned long size, unsigned long n)
{
 if (n >= size)
  return size;

 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr1 & (~*addr2) & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? fns(val, n) : size;
 }

 return __find_nth_andnot_bit(addr1, addr2, size, n);
}
# 307 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long find_nth_and_andnot_bit(const unsigned long *addr1,
     const unsigned long *addr2,
     const unsigned long *addr3,
     unsigned long size, unsigned long n)
{
 if (n >= size)
  return size;

 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr1 & *addr2 & (~*addr3) & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? fns(val, n) : size;
 }

 return __find_nth_and_andnot_bit(addr1, addr2, addr3, size, n);
}
# 335 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_first_and_bit(const unsigned long *addr1,
     const unsigned long *addr2,
     unsigned long size)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr1 & *addr2 & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? __ffs(val) : size;
 }

 return _find_first_and_bit(addr1, addr2, size);
}
# 360 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_first_and_and_bit(const unsigned long *addr1,
         const unsigned long *addr2,
         const unsigned long *addr3,
         unsigned long size)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr1 & *addr2 & *addr3 & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? __ffs(val) : size;
 }

 return _find_first_and_and_bit(addr1, addr2, addr3, size);
}
# 384 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr | ~((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val == ~0UL ? size : ffz(val);
 }

 return _find_first_zero_bit(addr, size);
}
# 405 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_last_bit(const unsigned long *addr, unsigned long size)
{
 if ((__builtin_constant_p(size) && (size) <= 32 && (size) > 0)) {
  unsigned long val = *addr & ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (size - 1)) * 0l)) : (int *)8))), (0) > (size - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (size - 1)))));

  return val ? __fls(val) : size;
 }

 return _find_last_bit(addr, size);
}
# 428 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_and_bit_wrap(const unsigned long *addr1,
     const unsigned long *addr2,
     unsigned long size, unsigned long offset)
{
 unsigned long bit = find_next_and_bit(addr1, addr2, size, offset);

 if (bit < size || offset == 0)
  return bit;

 bit = find_first_and_bit(addr1, addr2, offset);
 return bit < offset ? bit : size;
}
# 451 "../include/linux/find.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long find_next_bit_wrap(const unsigned long *addr,
     unsigned long size, unsigned long offset)
{
 unsigned long bit = find_next_bit(addr, size, offset);

 if (bit < size || offset == 0)
  return bit;

 bit = find_first_bit(addr, offset);
 return bit < offset ? bit : size;
}
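`find_next_bit_wrap()` and `find_next_and_bit_wrap()` above implement a circular search: look at `[offset, size)` first, and only if nothing is found there (and `offset != 0`) retry in `[0, offset)`; `size` still means "no set bit anywhere". A single-word sketch of that contract:

```c
#include <assert.h>

/* Wrap-around search over a single-word bitmap: a set bit at
 * position >= offset wins; otherwise wrap to [0, offset).
 * Returns 'size' when no bit is set at all, matching the
 * find_next_bit_wrap() contract above. */
static unsigned long next_bit_wrap_word(unsigned long word,
					unsigned long size,
					unsigned long offset)
{
	unsigned long pos;

	for (pos = offset; pos < size; pos++)
		if (word & (1UL << pos))
			return pos;
	for (pos = 0; pos < offset; pos++)
		if (word & (1UL << pos))
			return pos;
	return size;
}
```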





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size,
     unsigned long start, unsigned long n)
{
 unsigned long bit;


 if (n > start) {

  bit = find_next_bit(bitmap, size, n);
  if (bit < size)
   return bit;


  n = 0;
 }


 bit = find_next_bit(bitmap, start, n);
 return bit < start ? bit : size;
}
# 500 "../include/linux/find.h"
extern unsigned long find_next_clump8(unsigned long *clump,
          const unsigned long *addr,
          unsigned long size, unsigned long offset);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long find_next_zero_bit_le(const void *addr,
  unsigned long size, unsigned long offset)
{
 return find_next_zero_bit(addr, size, offset);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long find_next_bit_le(const void *addr,
  unsigned long size, unsigned long offset)
{
 return find_next_bit(addr, size, offset);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long find_first_zero_bit_le(const void *addr,
  unsigned long size)
{
 return find_first_zero_bit(addr, size);
}
# 12 "../include/linux/bitmap.h" 2
# 1 "../include/linux/limits.h" 1




# 1 "../include/uapi/linux/limits.h" 1
# 6 "../include/linux/limits.h" 2

# 1 "../include/vdso/limits.h" 1
# 8 "../include/linux/limits.h" 2
# 13 "../include/linux/bitmap.h" 2
# 1 "../include/linux/string.h" 1




# 1 "../include/linux/args.h" 1
# 6 "../include/linux/string.h" 2
# 1 "../include/linux/array_size.h" 1
# 7 "../include/linux/string.h" 2



# 1 "../include/linux/err.h" 1







# 1 "./arch/hexagon/include/generated/uapi/asm/errno.h" 1
# 9 "../include/linux/err.h" 2
# 39 "../include/linux/err.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * __attribute__((__warn_unused_result__)) ERR_PTR(long error)
{
 return (void *) error;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __attribute__((__warn_unused_result__)) PTR_ERR( const void *ptr)
{
 return (long) ptr;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) IS_ERR( const void *ptr)
{
 return __builtin_expect(!!((unsigned long)(void *)((unsigned long)ptr) >= (unsigned long)-4095), 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) IS_ERR_OR_NULL( const void *ptr)
{
 return __builtin_expect(!!(!ptr), 0) || __builtin_expect(!!((unsigned long)(void *)((unsigned long)ptr) >= (unsigned long)-4095), 0);
}
# 82 "../include/linux/err.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * __attribute__((__warn_unused_result__)) ERR_CAST( const void *ptr)
{

 return (void *) ptr;
}
# 105 "../include/linux/err.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) PTR_ERR_OR_ZERO( const void *ptr)
{
 if (IS_ERR(ptr))
  return PTR_ERR(ptr);
 else
  return 0;
}
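The `ERR_PTR()`/`IS_ERR()` helpers above exploit the fact that the top `MAX_ERRNO` (4095) addresses of the address space never hold valid kernel objects, so a small negative errno can ride inside a pointer return value. A standalone sketch of that encoding (lower-case helper names are ours):

```c
#include <assert.h>

#define MAX_ERRNO 4095	/* same bound the IS_ERR() range check uses */

/* Encode a negative errno in a pointer: -err cast to a pointer
 * lands in the top MAX_ERRNO addresses, which is exactly the
 * range the is_err() check below recognises. */
static void *err_ptr(long error)
{
	return (void *)error;
}

static int is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)
{
	return (long)ptr;
}
```

NULL (and any ordinary object address) falls below the threshold, which is why `IS_ERR_OR_NULL()` above needs the extra `!ptr` test.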
# 11 "../include/linux/string.h" 2

# 1 "../include/linux/overflow.h" 1
# 51 "../include/linux/overflow.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) __must_check_overflow(bool overflow)
{
 return __builtin_expect(!!(overflow), 0);
}
# 266 "../include/linux/overflow.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t __attribute__((__warn_unused_result__)) size_mul(size_t factor1, size_t factor2)
{
 size_t bytes;

 if (__must_check_overflow(__builtin_mul_overflow(factor1, factor2, &bytes)))
  return (~(size_t)0);

 return bytes;
}
# 285 "../include/linux/overflow.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t __attribute__((__warn_unused_result__)) size_add(size_t addend1, size_t addend2)
{
 size_t bytes;

 if (__must_check_overflow(__builtin_add_overflow(addend1, addend2, &bytes)))
  return (~(size_t)0);

 return bytes;
}
# 306 "../include/linux/overflow.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t __attribute__((__warn_unused_result__)) size_sub(size_t minuend, size_t subtrahend)
{
 size_t bytes;

 if (minuend == (~(size_t)0) || subtrahend == (~(size_t)0) ||
     __must_check_overflow(__builtin_sub_overflow(minuend, subtrahend, &bytes)))
  return (~(size_t)0);

 return bytes;
}
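`size_mul()`, `size_add()` and `size_sub()` above all saturate to `SIZE_MAX` on overflow, so a later allocation of the result fails cleanly instead of being silently undersized. A sketch of the multiply case using the same GCC/Clang builtin:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Saturating size_t multiply in the style of size_mul() above:
 * __builtin_mul_overflow() reports whether the full-precision
 * product fits in 'bytes'; on overflow we collapse to SIZE_MAX
 * rather than return a wrapped (too small) value. */
static size_t sat_size_mul(size_t factor1, size_t factor2)
{
	size_t bytes;

	if (__builtin_mul_overflow(factor1, factor2, &bytes))
		return SIZE_MAX;
	return bytes;
}
```

`size_sub()` additionally treats a `SIZE_MAX` operand as already-saturated input, so saturation propagates through chained `size_*()` calls.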
# 13 "../include/linux/string.h" 2
# 1 "../include/linux/stdarg.h" 1




typedef __builtin_va_list va_list;
# 14 "../include/linux/string.h" 2
# 1 "../include/uapi/linux/string.h" 1
# 15 "../include/linux/string.h" 2

extern char *strndup_user(const char *, long);
extern void *memdup_user(const void *, size_t) __attribute__((__alloc_size__(2)));
extern void *vmemdup_user(const void *, size_t) __attribute__((__alloc_size__(2)));
extern void *memdup_user_nul(const void *, size_t);
# 30 "../include/linux/string.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(2, 3)))
void *memdup_array_user(const void *src, size_t n, size_t size)
{
 size_t nbytes;

 if (__must_check_overflow(__builtin_mul_overflow(n, size, &nbytes)))
  return ERR_PTR(-75);

 return memdup_user(src, nbytes);
}
# 50 "../include/linux/string.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(2, 3)))
void *vmemdup_array_user(const void *src, size_t n, size_t size)
{
 size_t nbytes;

 if (__must_check_overflow(__builtin_mul_overflow(n, size, &nbytes)))
  return ERR_PTR(-75);

 return vmemdup_user(src, nbytes);
}




# 1 "../arch/hexagon/include/asm/string.h" 1
# 11 "../arch/hexagon/include/asm/string.h"
extern void *memcpy(void *__to, __const__ void *__from, size_t __n);



extern void *memset(void *__to, int c, size_t __n);
# 65 "../include/linux/string.h" 2


extern char * strcpy(char *,const char *);


extern char * strncpy(char *,const char *, __kernel_size_t);

ssize_t sized_strscpy(char *, const char *, size_t);
# 147 "../include/linux/string.h"
extern char * strcat(char *, const char *);


extern char * strncat(char *, const char *, __kernel_size_t);


extern size_t strlcat(char *, const char *, __kernel_size_t);


extern int strcmp(const char *,const char *);


extern int strncmp(const char *,const char *,__kernel_size_t);


extern int strcasecmp(const char *s1, const char *s2);


extern int strncasecmp(const char *s1, const char *s2, size_t n);


extern char * strchr(const char *,int);


extern char * strchrnul(const char *,int);

extern char * strnchrnul(const char *, size_t, int);

extern char * strnchr(const char *, size_t, int);


extern char * strrchr(const char *,int);

extern char * __attribute__((__warn_unused_result__)) skip_spaces(const char *);

extern char *strim(char *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) char *strstrip(char *str)
{
 return strim(str);
}


extern char * strstr(const char *, const char *);


extern char * strnstr(const char *, const char *, size_t);


extern __kernel_size_t strlen(const char *);


extern __kernel_size_t strnlen(const char *,__kernel_size_t);


extern char * strpbrk(const char *,const char *);


extern char * strsep(char **,const char *);


extern __kernel_size_t strspn(const char *,const char *);


extern __kernel_size_t strcspn(const char *,const char *);







extern void *memset16(uint16_t *, uint16_t, __kernel_size_t);



extern void *memset32(uint32_t *, uint32_t, __kernel_size_t);



extern void *memset64(uint64_t *, uint64_t, __kernel_size_t);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *memset_l(unsigned long *p, unsigned long v,
  __kernel_size_t n)
{
 if (32 == 32)
  return memset32((uint32_t *)p, v, n);
 else
  return memset64((uint64_t *)p, v, n);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *memset_p(void **p, void *v, __kernel_size_t n)
{
 if (32 == 32)
  return memset32((uint32_t *)p, (uintptr_t)v, n);
 else
  return memset64((uint64_t *)p, (uintptr_t)v, n);
}

extern void **__memcat_p(void **a, void **b);
# 258 "../include/linux/string.h"
extern void * memmove(void *,const void *,__kernel_size_t);


extern void * memscan(void *,int,__kernel_size_t);


extern int memcmp(const void *,const void *,__kernel_size_t);


extern int bcmp(const void *,const void *,__kernel_size_t);


extern void * memchr(const void *,int,__kernel_size_t);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_flushcache(void *dst, const void *src, size_t cnt)
{
 memcpy(dst, src, cnt);
}


void *memchr_inv(const void *s, int c, size_t n);
char *strreplace(char *str, char old, char new);

extern void kfree_const(const void *x);

extern char *kstrdup(const char *s, gfp_t gfp) __attribute__((__malloc__));
extern const char *kstrdup_const(const char *s, gfp_t gfp);
extern char *kstrndup(const char *s, size_t len, gfp_t gfp);
extern void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp) __attribute__((__alloc_size__(2)));


extern void *kvmemdup(const void *src, size_t len, gfp_t gfp) __attribute__((__alloc_size__(2)));
extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
extern void *kmemdup_array(const void *src, size_t count, size_t element_size, gfp_t gfp)
  __attribute__((__alloc_size__(2, 3)));


extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
extern void argv_free(char **argv);


extern int get_option(char **str, int *pint);
extern char *get_options(const char *str, int nints, int *ints);
extern unsigned long long memparse(const char *ptr, char **retptr);
extern bool parse_option_str(const char *str, const char *option);
extern char *next_arg(char *args, char **param, char **val);

extern bool sysfs_streq(const char *s1, const char *s2);
int match_string(const char * const *array, size_t n, const char *string);
int __sysfs_match_string(const char * const *array, size_t n, const char *s);
# 320 "../include/linux/string.h"
int vbin_printf(u32 *bin_buf, size_t size, const char *fmt, va_list args);
int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf);
int bprintf(u32 *bin_buf, size_t size, const char *fmt, ...) __attribute__((__format__(printf, 3, 4)));


extern ssize_t memory_read_from_buffer(void *to, size_t count, loff_t *ppos,
           const void *from, size_t available);

int ptr_to_hashval(const void *ptr, unsigned long *hashval_out);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool strstarts(const char *str, const char *prefix)
{
 return strncmp(str, prefix, strlen(prefix)) == 0;
}

size_t memweight(const void *ptr, size_t bytes);
# 356 "../include/linux/string.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memzero_explicit(void *s, size_t count)
{
 memset(s, 0, count);
 __asm__ __volatile__("": :"r"(s) :"memory");
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *kbasename(const char *path)
{
 const char *tail = strrchr(path, '/');
 return tail ? tail + 1 : path;
}
# 381 "../include/linux/string.h"
void memcpy_and_pad(void *dest, size_t dest_len, const void *src, size_t count,
      int pad);
# 529 "../include/linux/string.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) size_t str_has_prefix(const char *str, const char *prefix)
{
 size_t len = strlen(prefix);
 return strncmp(str, prefix, len) == 0 ? len : 0;
}
# 14 "../include/linux/bitmap.h" 2

# 1 "../include/linux/bitmap-str.h" 1




int bitmap_parse_user(const char *ubuf, unsigned int ulen, unsigned long *dst, int nbits);
int bitmap_print_to_pagebuf(bool list, char *buf, const unsigned long *maskp, int nmaskbits);
extern int bitmap_print_bitmask_to_buf(char *buf, const unsigned long *maskp,
     int nmaskbits, loff_t off, size_t count);
extern int bitmap_print_list_to_buf(char *buf, const unsigned long *maskp,
     int nmaskbits, loff_t off, size_t count);
int bitmap_parse(const char *buf, unsigned int buflen, unsigned long *dst, int nbits);
int bitmap_parselist(const char *buf, unsigned long *maskp, int nmaskbits);
int bitmap_parselist_user(const char *ubuf, unsigned int ulen,
     unsigned long *dst, int nbits);
# 16 "../include/linux/bitmap.h" 2

struct device;
# 132 "../include/linux/bitmap.h"
unsigned long *bitmap_alloc(unsigned int nbits, gfp_t flags);
unsigned long *bitmap_zalloc(unsigned int nbits, gfp_t flags);
unsigned long *bitmap_alloc_node(unsigned int nbits, gfp_t flags, int node);
unsigned long *bitmap_zalloc_node(unsigned int nbits, gfp_t flags, int node);
void bitmap_free(const unsigned long *bitmap);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_bitmap(void *p) { unsigned long * _T = *(unsigned long * *)p; if (_T) bitmap_free(_T); }


unsigned long *devm_bitmap_alloc(struct device *dev,
     unsigned int nbits, gfp_t flags);
unsigned long *devm_bitmap_zalloc(struct device *dev,
      unsigned int nbits, gfp_t flags);





bool __bitmap_equal(const unsigned long *bitmap1,
      const unsigned long *bitmap2, unsigned int nbits);
bool __attribute__((__pure__)) __bitmap_or_equal(const unsigned long *src1,
         const unsigned long *src2,
         const unsigned long *src3,
         unsigned int nbits);
void __bitmap_complement(unsigned long *dst, const unsigned long *src,
    unsigned int nbits);
void __bitmap_shift_right(unsigned long *dst, const unsigned long *src,
     unsigned int shift, unsigned int nbits);
void __bitmap_shift_left(unsigned long *dst, const unsigned long *src,
    unsigned int shift, unsigned int nbits);
void bitmap_cut(unsigned long *dst, const unsigned long *src,
  unsigned int first, unsigned int cut, unsigned int nbits);
bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
   const unsigned long *bitmap2, unsigned int nbits);
void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
   const unsigned long *bitmap2, unsigned int nbits);
void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
    const unsigned long *bitmap2, unsigned int nbits);
bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
      const unsigned long *bitmap2, unsigned int nbits);
void __bitmap_replace(unsigned long *dst,
        const unsigned long *old, const unsigned long *new,
        const unsigned long *mask, unsigned int nbits);
bool __bitmap_intersects(const unsigned long *bitmap1,
    const unsigned long *bitmap2, unsigned int nbits);
bool __bitmap_subset(const unsigned long *bitmap1,
       const unsigned long *bitmap2, unsigned int nbits);
unsigned int __bitmap_weight(const unsigned long *bitmap, unsigned int nbits);
unsigned int __bitmap_weight_and(const unsigned long *bitmap1,
     const unsigned long *bitmap2, unsigned int nbits);
unsigned int __bitmap_weight_andnot(const unsigned long *bitmap1,
        const unsigned long *bitmap2, unsigned int nbits);
void __bitmap_set(unsigned long *map, unsigned int start, int len);
void __bitmap_clear(unsigned long *map, unsigned int start, int len);

unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
          unsigned long size,
          unsigned long start,
          unsigned int nr,
          unsigned long align_mask,
          unsigned long align_offset);
# 206 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
bitmap_find_next_zero_area(unsigned long *map,
      unsigned long size,
      unsigned long start,
      unsigned int nr,
      unsigned long align_mask)
{
 return bitmap_find_next_zero_area_off(map, size, start, nr,
           align_mask, 0);
}

void bitmap_remap(unsigned long *dst, const unsigned long *src,
  const unsigned long *old, const unsigned long *new, unsigned int nbits);
int bitmap_bitremap(int oldbit,
  const unsigned long *old, const unsigned long *new, int bits);
void bitmap_onto(unsigned long *dst, const unsigned long *orig,
  const unsigned long *relmap, unsigned int bits);
void bitmap_fold(unsigned long *dst, const unsigned long *orig,
  unsigned int sz, unsigned int nbits);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_zero(unsigned long *dst, unsigned int nbits)
{
 unsigned int len = (((((nbits)) + ((__typeof__((nbits)))((32)) - 1)) & ~((__typeof__((nbits)))((32)) - 1)) / 8);

 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = 0;
 else
  memset(dst, 0, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_fill(unsigned long *dst, unsigned int nbits)
{
 unsigned int len = (((((nbits)) + ((__typeof__((nbits)))((32)) - 1)) & ~((__typeof__((nbits)))((32)) - 1)) / 8);

 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = ~0UL;
 else
  memset(dst, 0xff, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_copy(unsigned long *dst, const unsigned long *src,
   unsigned int nbits)
{
 unsigned int len = (((((nbits)) + ((__typeof__((nbits)))((32)) - 1)) & ~((__typeof__((nbits)))((32)) - 1)) / 8);

 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = *src;
 else
  memcpy(dst, src, len);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_copy_clear_tail(unsigned long *dst,
  const unsigned long *src, unsigned int nbits)
{
 bitmap_copy(dst, src, nbits);
 if (nbits % 32)
  dst[nbits / 32] &= (~0UL >> (-(nbits) & (32 - 1)));
}
# 300 "../include/linux/bitmap.h"
void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits);
void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_and(unsigned long *dst, const unsigned long *src1,
   const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return (*dst = *src1 & *src2 & (~0UL >> (-(nbits) & (32 - 1)))) != 0;
 return __bitmap_and(dst, src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_or(unsigned long *dst, const unsigned long *src1,
   const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = *src1 | *src2;
 else
  __bitmap_or(dst, src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_xor(unsigned long *dst, const unsigned long *src1,
   const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = *src1 ^ *src2;
 else
  __bitmap_xor(dst, src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_andnot(unsigned long *dst, const unsigned long *src1,
   const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return (*dst = *src1 & ~(*src2) & (~0UL >> (-(nbits) & (32 - 1)))) != 0;
 return __bitmap_andnot(dst, src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_complement(unsigned long *dst, const unsigned long *src,
   unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = ~(*src);
 else
  __bitmap_complement(dst, src, nbits);
}
# 359 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_equal(const unsigned long *src1,
    const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return !((*src1 ^ *src2) & (~0UL >> (-(nbits) & (32 - 1))));
 if (__builtin_constant_p(nbits & (8 - 1)) &&
     (((nbits) & ((typeof(nbits))(8) - 1)) == 0))
  return !memcmp(src1, src2, nbits / 8);
 return __bitmap_equal(src1, src2, nbits);
}
# 379 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_or_equal(const unsigned long *src1,
       const unsigned long *src2,
       const unsigned long *src3,
       unsigned int nbits)
{
 if (!(__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return __bitmap_or_equal(src1, src2, src3, nbits);

 return !(((*src1 | *src2) ^ *src3) & (~0UL >> (-(nbits) & (32 - 1))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_intersects(const unsigned long *src1,
         const unsigned long *src2,
         unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return ((*src1 & *src2) & (~0UL >> (-(nbits) & (32 - 1)))) != 0;
 else
  return __bitmap_intersects(src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_subset(const unsigned long *src1,
     const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return ! ((*src1 & ~(*src2)) & (~0UL >> (-(nbits) & (32 - 1))));
 else
  return __bitmap_subset(src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_empty(const unsigned long *src, unsigned nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return ! (*src & (~0UL >> (-(nbits) & (32 - 1))));

 return find_first_bit(src, nbits) == nbits;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bitmap_full(const unsigned long *src, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return ! (~(*src) & (~0UL >> (-(nbits) & (32 - 1))));

 return find_first_zero_bit(src, nbits) == nbits;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned int bitmap_weight(const unsigned long *src, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return hweight_long(*src & (~0UL >> (-(nbits) & (32 - 1))));
 return __bitmap_weight(src, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long bitmap_weight_and(const unsigned long *src1,
    const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return hweight_long(*src1 & *src2 & (~0UL >> (-(nbits) & (32 - 1))));
 return __bitmap_weight_and(src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long bitmap_weight_andnot(const unsigned long *src1,
       const unsigned long *src2, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  return hweight_long(*src1 & ~(*src2) & (~0UL >> (-(nbits) & (32 - 1))));
 return __bitmap_weight_andnot(src1, src2, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void bitmap_set(unsigned long *map, unsigned int start,
  unsigned int nbits)
{
 if (__builtin_constant_p(nbits) && nbits == 1)
  ((__builtin_constant_p(start) && __builtin_constant_p((uintptr_t)(map) != (uintptr_t)((void *)0)) && (uintptr_t)(map) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(map))) ? generic___set_bit(start, map) : arch___set_bit(start, map));
 else if ((__builtin_constant_p(start + nbits) && (start + nbits) <= 32 && (start + nbits) > 0))
  *map |= ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((start) > (start + nbits - 1)) * 0l)) : (int *)8))), (start) > (start + nbits - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (start)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (start + nbits - 1)))));
 else if (__builtin_constant_p(start & (8 - 1)) &&
   (((start) & ((typeof(start))(8) - 1)) == 0) &&
   __builtin_constant_p(nbits & (8 - 1)) &&
   (((nbits) & ((typeof(nbits))(8) - 1)) == 0))
  memset((char *)map + start / 8, 0xff, nbits / 8);
 else
  __bitmap_set(map, start, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void bitmap_clear(unsigned long *map, unsigned int start,
  unsigned int nbits)
{
 if (__builtin_constant_p(nbits) && nbits == 1)
  ((__builtin_constant_p(start) && __builtin_constant_p((uintptr_t)(map) != (uintptr_t)((void *)0)) && (uintptr_t)(map) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(map))) ? generic___clear_bit(start, map) : arch___clear_bit(start, map));
 else if ((__builtin_constant_p(start + nbits) && (start + nbits) <= 32 && (start + nbits) > 0))
  *map &= ~((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((start) > (start + nbits - 1)) * 0l)) : (int *)8))), (start) > (start + nbits - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (start)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (start + nbits - 1)))));
 else if (__builtin_constant_p(start & (8 - 1)) &&
   (((start) & ((typeof(start))(8) - 1)) == 0) &&
   __builtin_constant_p(nbits & (8 - 1)) &&
   (((nbits) & ((typeof(nbits))(8) - 1)) == 0))
  memset((char *)map + start / 8, 0, nbits / 8);
 else
  __bitmap_clear(map, start, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_shift_right(unsigned long *dst, const unsigned long *src,
    unsigned int shift, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = (*src & (~0UL >> (-(nbits) & (32 - 1)))) >> shift;
 else
  __bitmap_shift_right(dst, src, shift, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_shift_left(unsigned long *dst, const unsigned long *src,
    unsigned int shift, unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = (*src << shift) & (~0UL >> (-(nbits) & (32 - 1)));
 else
  __bitmap_shift_left(dst, src, shift, nbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_replace(unsigned long *dst,
      const unsigned long *old,
      const unsigned long *new,
      const unsigned long *mask,
      unsigned int nbits)
{
 if ((__builtin_constant_p(nbits) && (nbits) <= 32 && (nbits) > 0))
  *dst = (*old & ~(*mask)) | (*new & *mask);
 else
  __bitmap_replace(dst, old, new, mask, nbits);
}
# 548 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_scatter(unsigned long *dst, const unsigned long *src,
      const unsigned long *mask, unsigned int nbits)
{
 unsigned int n = 0;
 unsigned int bit;

 bitmap_zero(dst, nbits);

 for ((bit) = 0; (bit) = find_next_bit((mask), (nbits), (bit)), (bit) < (nbits); (bit)++)
  ((((__builtin_constant_p(n++) && __builtin_constant_p((uintptr_t)(src) != (uintptr_t)((void *)0)) && (uintptr_t)(src) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(src))) ? const_test_bit(n++, src) : arch_test_bit(n++, src))) ? ((__builtin_constant_p((bit)) && __builtin_constant_p((uintptr_t)((dst)) != (uintptr_t)((void *)0)) && (uintptr_t)((dst)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((dst)))) ? generic___set_bit((bit), (dst)) : arch___set_bit((bit), (dst))) : ((__builtin_constant_p((bit)) && __builtin_constant_p((uintptr_t)((dst)) != (uintptr_t)((void *)0)) && (uintptr_t)((dst)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((dst)))) ? generic___clear_bit((bit), (dst)) : arch___clear_bit((bit), (dst))));
}
# 602 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_gather(unsigned long *dst, const unsigned long *src,
     const unsigned long *mask, unsigned int nbits)
{
 unsigned int n = 0;
 unsigned int bit;

 bitmap_zero(dst, nbits);

 for ((bit) = 0; (bit) = find_next_bit((mask), (nbits), (bit)), (bit) < (nbits); (bit)++)
  ((((__builtin_constant_p(bit) && __builtin_constant_p((uintptr_t)(src) != (uintptr_t)((void *)0)) && (uintptr_t)(src) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(src))) ? const_test_bit(bit, src) : arch_test_bit(bit, src))) ? ((__builtin_constant_p((n++)) && __builtin_constant_p((uintptr_t)((dst)) != (uintptr_t)((void *)0)) && (uintptr_t)((dst)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((dst)))) ? generic___set_bit((n++), (dst)) : arch___set_bit((n++), (dst))) : ((__builtin_constant_p((n++)) && __builtin_constant_p((uintptr_t)((dst)) != (uintptr_t)((void *)0)) && (uintptr_t)((dst)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((dst)))) ? generic___clear_bit((n++), (dst)) : arch___clear_bit((n++), (dst))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_next_set_region(unsigned long *bitmap,
       unsigned int *rs, unsigned int *re,
       unsigned int end)
{
 *rs = find_next_bit(bitmap, end, *rs);
 *re = find_next_zero_bit(bitmap, end, *rs + 1);
}
# 631 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_release_region(unsigned long *bitmap, unsigned int pos, int order)
{
 bitmap_clear(bitmap, pos, ((((1UL))) << (order)));
}
# 647 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bitmap_allocate_region(unsigned long *bitmap, unsigned int pos, int order)
{
 unsigned int len = ((((1UL))) << (order));

 if (find_next_bit(bitmap, pos + len, pos) < pos + len)
  return -16;
 bitmap_set(bitmap, pos, len);
 return 0;
}
# 671 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bitmap_find_free_region(unsigned long *bitmap, unsigned int bits, int order)
{
 unsigned int pos, end;

 for (pos = 0; (end = pos + ((((1UL))) << (order))) <= bits; pos = end) {
  if (!bitmap_allocate_region(bitmap, pos, order))
   return pos;
 }
 return -12;
}
# 725 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_from_u64(unsigned long *dst, u64 mask)
{
 bitmap_from_arr64(dst, &mask, 64);
}
# 740 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long bitmap_read(const unsigned long *map,
     unsigned long start,
     unsigned long nbits)
{
 size_t index = ((start) / 32);
 unsigned long offset = start % 32;
 unsigned long space = 32 - offset;
 unsigned long value_low, value_high;

 if (__builtin_expect(!!(!nbits || nbits > 32), 0))
  return 0;

 if (space >= nbits)
  return (map[index] >> offset) & (~0UL >> (-(nbits) & (32 - 1)));

 value_low = map[index] & (~0UL << ((start) & (32 - 1)));
 value_high = map[index + 1] & (~0UL >> (-(start + nbits) & (32 - 1)));
 return (value_low >> offset) | (value_high << space);
}
# 775 "../include/linux/bitmap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bitmap_write(unsigned long *map, unsigned long value,
    unsigned long start, unsigned long nbits)
{
 size_t index;
 unsigned long offset;
 unsigned long space;
 unsigned long mask;
 bool fit;

 if (__builtin_expect(!!(!nbits || nbits > 32), 0))
  return;

 mask = (~0UL >> (-(nbits) & (32 - 1)));
 value &= mask;
 offset = start % 32;
 space = 32 - offset;
 fit = space >= nbits;
 index = ((start) / 32);

 map[index] &= (fit ? (~(mask << offset)) : ~(~0UL << ((start) & (32 - 1))));
 map[index] |= value << offset;
 if (fit)
  return;

 map[index + 1] &= (~0UL << ((start + nbits) & (32 - 1)));
 map[index + 1] |= (value >> space);
}
# 13 "../include/linux/xarray.h" 2
# 1 "../include/linux/bug.h" 1




# 1 "./arch/hexagon/include/generated/asm/bug.h" 1
# 1 "../include/asm-generic/bug.h" 1





# 1 "../include/linux/instrumentation.h" 1
# 7 "../include/asm-generic/bug.h" 2
# 1 "../include/linux/once_lite.h" 1
# 8 "../include/asm-generic/bug.h" 2
# 21 "../include/asm-generic/bug.h"
# 1 "../include/linux/panic.h" 1







struct pt_regs;

extern long (*panic_blink)(int state);
__attribute__((__format__(printf, 1, 2)))
void panic(const char *fmt, ...) __attribute__((__noreturn__)) __attribute__((__cold__));
void nmi_panic(struct pt_regs *regs, const char *msg);
void check_panic_on_warn(const char *origin);
extern void oops_enter(void);
extern void oops_exit(void);
extern bool oops_may_print(void);

extern int panic_timeout;
extern unsigned long panic_print;
extern int panic_on_oops;
extern int panic_on_unrecovered_nmi;
extern int panic_on_io_nmi;
extern int panic_on_warn;

extern unsigned long panic_on_taint;
extern bool panic_on_taint_nousertaint;

extern int sysctl_panic_on_rcu_stall;
extern int sysctl_max_rcu_stall_to_panic;
extern int sysctl_panic_on_stackoverflow;

extern bool crash_kexec_post_notifiers;

extern void __stack_chk_fail(void);
void abort(void);






extern atomic_t panic_cpu;






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_arch_panic_timeout(int timeout, int arch_default_timeout)
{
 if (panic_timeout == arch_default_timeout)
  panic_timeout = timeout;
}
# 79 "../include/linux/panic.h"
struct taint_flag {
 char c_true;
 char c_false;
 bool module;
 const char *desc;
};

extern const struct taint_flag taint_flags[19];

enum lockdep_ok {
 LOCKDEP_STILL_OK,
 LOCKDEP_NOW_UNRELIABLE,
};

extern const char *print_tainted(void);
extern const char *print_tainted_verbose(void);
extern void add_taint(unsigned flag, enum lockdep_ok);
extern int test_taint(unsigned flag);
extern unsigned long get_taint(void);
# 22 "../include/asm-generic/bug.h" 2
# 1 "../include/linux/printk.h" 1





# 1 "../include/linux/init.h" 1






# 1 "../include/linux/stringify.h" 1
# 8 "../include/linux/init.h" 2
# 115 "../include/linux/init.h"
typedef int (*initcall_t)(void);
typedef void (*exitcall_t)(void);
# 126 "../include/linux/init.h"
typedef initcall_t initcall_entry_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) initcall_t initcall_from_entry(initcall_entry_t *entry)
{
 return *entry;
}


extern initcall_entry_t __con_initcall_start[], __con_initcall_end[];


typedef void (*ctor_fn_t)(void);

struct file_system_type;


extern int do_one_initcall(initcall_t fn);
extern char __attribute__((__section__(".init.data"))) boot_command_line[];
extern char *saved_command_line;
extern unsigned int saved_command_line_len;
extern unsigned int reset_devices;


void setup_arch(char **);
void prepare_namespace(void);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) init_rootfs(void);

void init_IRQ(void);
void time_init(void);
void poking_init(void);
void pgtable_cache_init(void);

extern initcall_entry_t __initcall_start[];
extern initcall_entry_t __initcall0_start[];
extern initcall_entry_t __initcall1_start[];
extern initcall_entry_t __initcall2_start[];
extern initcall_entry_t __initcall3_start[];
extern initcall_entry_t __initcall4_start[];
extern initcall_entry_t __initcall5_start[];
extern initcall_entry_t __initcall6_start[];
extern initcall_entry_t __initcall7_start[];
extern initcall_entry_t __initcall_end[];

extern struct file_system_type rootfs_fs_type;

extern bool rodata_enabled;
void mark_rodata_ro(void);

extern void (*late_time_init)(void);

extern bool initcall_debug;
# 323 "../include/linux/init.h"
struct obs_kernel_param {
 const char *str;
 int (*setup_func)(char *);
 int early;
};

extern const struct obs_kernel_param __setup_start[], __setup_end[];
# 381 "../include/linux/init.h"
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) parse_early_param(void);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) parse_early_options(char *cmdline);
# 7 "../include/linux/printk.h" 2
# 1 "../include/linux/kern_levels.h" 1
# 8 "../include/linux/printk.h" 2
# 1 "../include/linux/linkage.h" 1






# 1 "../include/linux/export.h" 1





# 1 "../include/linux/linkage.h" 1
# 7 "../include/linux/export.h" 2
# 8 "../include/linux/linkage.h" 2
# 1 "../arch/hexagon/include/asm/linkage.h" 1
# 9 "../include/linux/linkage.h" 2
# 9 "../include/linux/printk.h" 2
# 1 "../include/linux/ratelimit_types.h" 1





# 1 "../include/uapi/linux/param.h" 1




# 1 "../arch/hexagon/include/uapi/asm/param.h" 1
# 25 "../arch/hexagon/include/uapi/asm/param.h"
# 1 "../include/asm-generic/param.h" 1




# 1 "../include/uapi/asm-generic/param.h" 1
# 6 "../include/asm-generic/param.h" 2
# 26 "../arch/hexagon/include/uapi/asm/param.h" 2
# 6 "../include/uapi/linux/param.h" 2
# 7 "../include/linux/ratelimit_types.h" 2
# 1 "../include/linux/spinlock_types_raw.h" 1








# 1 "../include/linux/spinlock_types_up.h" 1
# 17 "../include/linux/spinlock_types_up.h"
typedef struct {
 volatile unsigned int slock;
} arch_spinlock_t;
# 31 "../include/linux/spinlock_types_up.h"
typedef struct {

} arch_rwlock_t;
# 10 "../include/linux/spinlock_types_raw.h" 2


# 1 "../include/linux/lockdep_types.h" 1
# 17 "../include/linux/lockdep_types.h"
enum lockdep_wait_type {
 LD_WAIT_INV = 0,

 LD_WAIT_FREE,
 LD_WAIT_SPIN,




 LD_WAIT_CONFIG = LD_WAIT_SPIN,

 LD_WAIT_SLEEP,

 LD_WAIT_MAX,
};

enum lockdep_lock_type {
 LD_LOCK_NORMAL = 0,
 LD_LOCK_PERCPU,
 LD_LOCK_WAIT_OVERRIDE,
 LD_LOCK_MAX,
};
# 70 "../include/linux/lockdep_types.h"
struct lockdep_subclass_key {
 char __one_byte;
} __attribute__ ((__packed__));


struct lock_class_key {
 union {
  struct hlist_node hash_entry;
  struct lockdep_subclass_key subkeys[8UL];
 };
};

extern struct lock_class_key __lockdep_no_validate__;
extern struct lock_class_key __lockdep_no_track__;

struct lock_trace;



struct lockdep_map;
typedef int (*lock_cmp_fn)(const struct lockdep_map *a,
      const struct lockdep_map *b);
typedef void (*lock_print_fn)(const struct lockdep_map *map);





struct lock_class {



 struct hlist_node hash_entry;






 struct list_head lock_entry;






 struct list_head locks_after, locks_before;

 const struct lockdep_subclass_key *key;
 lock_cmp_fn cmp_fn;
 lock_print_fn print_fn;

 unsigned int subclass;
 unsigned int dep_gen_id;




 unsigned long usage_mask;
 const struct lock_trace *usage_traces[(2*4 + 2)];

 const char *name;




 int name_version;

 u8 wait_type_inner;
 u8 wait_type_outer;
 u8 lock_type;



 unsigned long contention_point[4];
 unsigned long contending_point[4];

} ;


struct lock_time {
 s64 min;
 s64 max;
 s64 total;
 unsigned long nr;
};

enum bounce_type {
 bounce_acquired_write,
 bounce_acquired_read,
 bounce_contended_write,
 bounce_contended_read,
 nr_bounce_types,

 bounce_acquired = bounce_acquired_write,
 bounce_contended = bounce_contended_write,
};

struct lock_class_stats {
 unsigned long contention_point[4];
 unsigned long contending_point[4];
 struct lock_time read_waittime;
 struct lock_time write_waittime;
 struct lock_time read_holdtime;
 struct lock_time write_holdtime;
 unsigned long bounces[nr_bounce_types];
};

struct lock_class_stats lock_stats(struct lock_class *class);
void clear_lock_stats(struct lock_class *class);






struct lockdep_map {
 struct lock_class_key *key;
 struct lock_class *class_cache[2];
 const char *name;
 u8 wait_type_outer;
 u8 wait_type_inner;
 u8 lock_type;


 int cpu;
 unsigned long ip;

};

struct pin_cookie { unsigned int val; };





struct held_lock {
# 221 "../include/linux/lockdep_types.h"
 u64 prev_chain_key;
 unsigned long acquire_ip;
 struct lockdep_map *instance;
 struct lockdep_map *nest_lock;

 u64 waittime_stamp;
 u64 holdtime_stamp;






 unsigned int class_idx:13;
# 248 "../include/linux/lockdep_types.h"
 unsigned int irq_context:2;
 unsigned int trylock:1;

 unsigned int read:2;
 unsigned int check:1;
 unsigned int hardirqs_off:1;
 unsigned int sync:1;
 unsigned int references:11;
 unsigned int pin_count;
};
# 13 "../include/linux/spinlock_types_raw.h" 2

typedef struct raw_spinlock {
 arch_spinlock_t raw_lock;

 unsigned int magic, owner_cpu;
 void *owner;


 struct lockdep_map dep_map;

} raw_spinlock_t;
# 8 "../include/linux/ratelimit_types.h" 2







struct ratelimit_state {
 raw_spinlock_t lock;

 int interval;
 int burst;
 int printed;
 int missed;
 unsigned long begin;
 unsigned long flags;
};
# 44 "../include/linux/ratelimit_types.h"
extern int ___ratelimit(struct ratelimit_state *rs, const char *func);
# 10 "../include/linux/printk.h" 2


struct console;

extern const char linux_banner[];
extern const char linux_proc_banner[];

extern int oops_in_progress;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int printk_get_level(const char *buffer)
{
 if (buffer[0] == '\001' && buffer[1]) {
  switch (buffer[1]) {
  case '0' ... '7':
  case 'c':
   return buffer[1];
  }
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *printk_skip_level(const char *buffer)
{
 if (printk_get_level(buffer))
  return buffer + 2;

 return buffer;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *printk_skip_headers(const char *buffer)
{
 while (printk_get_level(buffer))
  buffer = printk_skip_level(buffer);

 return buffer;
}
# 65 "../include/linux/printk.h"
int match_devname_and_update_preferred_console(const char *match,
            const char *name,
            const short idx);

extern int console_printk[];






extern void console_verbose(void);



extern char devkmsg_log_str[10];
struct ctl_table;

extern int suppress_printk;

struct va_format {
 const char *fmt;
 va_list *va;
};
# 140 "../include/linux/printk.h"
extern __attribute__((__format__(printf, 1, 2)))
void early_printk(const char *fmt, ...);





struct dev_printk_info;
# 208 "../include/linux/printk.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 0)))
int vprintk(const char *s, va_list args)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 2))) __attribute__((__cold__))
int _printk(const char *s, ...)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 2))) __attribute__((__cold__))
int _printk_deferred(const char *s, ...)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void printk_deferred_enter(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void printk_deferred_exit(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int printk_ratelimit(void)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool printk_timed_ratelimit(unsigned long *caller_jiffies,
       unsigned int interval_msec)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wake_up_klogd(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char *log_buf_addr_get(void)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 log_buf_len_get(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void log_buf_vmcoreinfo_setup(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void setup_log_buf(int early)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 2))) void dump_stack_set_arch_desc(const char *fmt, ...)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dump_stack_print_info(const char *log_lvl)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void show_regs_print_info(const char *log_lvl)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dump_stack_lvl(const char *log_lvl)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dump_stack(void)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void printk_trigger_flush(void)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void console_try_replay_all(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void printk_legacy_allow_panic_sync(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool nbcon_device_try_acquire(struct console *con)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nbcon_device_release(struct console *con)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nbcon_atomic_flush_unsafe(void)
{
}



bool this_cpu_in_panic(void);
# 364 "../include/linux/printk.h"
extern int kptr_restrict;
# 383 "../include/linux/printk.h"
struct module;
# 737 "../include/linux/printk.h"
extern const struct file_operations kmsg_fops;

enum {
 DUMP_PREFIX_NONE,
 DUMP_PREFIX_ADDRESS,
 DUMP_PREFIX_OFFSET
};
extern int hex_dump_to_buffer(const void *buf, size_t len, int rowsize,
         int groupsize, char *linebuf, size_t linebuflen,
         bool ascii);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_hex_dump(const char *level, const char *prefix_str,
      int prefix_type, int rowsize, int groupsize,
      const void *buf, size_t len, bool ascii)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_hex_dump_bytes(const char *prefix_str, int prefix_type,
     const void *buf, size_t len)
{
}
# 776 "../include/linux/printk.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_hex_dump_debug(const char *prefix_str, int prefix_type,
     int rowsize, int groupsize,
     const void *buf, size_t len, bool ascii)
{
}
# 23 "../include/asm-generic/bug.h" 2

struct warn_args;
struct pt_regs;

void __warn(const char *file, int line, void *caller, unsigned taint,
     struct pt_regs *regs, struct warn_args *args);




struct bug_entry {

 unsigned long bug_addr;
# 47 "../include/asm-generic/bug.h"
 unsigned short flags;
};
# 90 "../include/asm-generic/bug.h"
extern __attribute__((__format__(printf, 4, 5)))
void warn_slowpath_fmt(const char *file, const int line, unsigned taint,
         const char *fmt, ...);
extern __attribute__((__format__(printf, 1, 2))) void __warn_printk(const char *fmt, ...);
# 2 "./arch/hexagon/include/generated/asm/bug.h" 2
# 6 "../include/linux/bug.h" 2



enum bug_trap_type {
 BUG_TRAP_TYPE_NONE = 0,
 BUG_TRAP_TYPE_WARN = 1,
 BUG_TRAP_TYPE_BUG = 2,
};

struct pt_regs;
# 34 "../include/linux/bug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_warning_bug(const struct bug_entry *bug)
{
 return bug->flags & (1 << 0);
}

void bug_get_file_line(struct bug_entry *bug, const char **file,
         unsigned int *line);

struct bug_entry *find_bug(unsigned long bugaddr);

enum bug_trap_type report_bug(unsigned long bug_addr, struct pt_regs *regs);


int is_valid_bugaddr(unsigned long addr);

void generic_bug_clear_once(void);
# 80 "../include/linux/bug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool check_data_corruption(bool v) { return v; }
# 14 "../include/linux/xarray.h" 2


# 1 "../include/linux/gfp.h" 1




# 1 "../include/linux/gfp_types.h" 1
# 26 "../include/linux/gfp_types.h"
enum {
 ___GFP_DMA_BIT,
 ___GFP_HIGHMEM_BIT,
 ___GFP_DMA32_BIT,
 ___GFP_MOVABLE_BIT,
 ___GFP_RECLAIMABLE_BIT,
 ___GFP_HIGH_BIT,
 ___GFP_IO_BIT,
 ___GFP_FS_BIT,
 ___GFP_ZERO_BIT,
 ___GFP_UNUSED_BIT,
 ___GFP_DIRECT_RECLAIM_BIT,
 ___GFP_KSWAPD_RECLAIM_BIT,
 ___GFP_WRITE_BIT,
 ___GFP_NOWARN_BIT,
 ___GFP_RETRY_MAYFAIL_BIT,
 ___GFP_NOFAIL_BIT,
 ___GFP_NORETRY_BIT,
 ___GFP_MEMALLOC_BIT,
 ___GFP_COMP_BIT,
 ___GFP_NOMEMALLOC_BIT,
 ___GFP_HARDWALL_BIT,
 ___GFP_THISNODE_BIT,
 ___GFP_ACCOUNT_BIT,
 ___GFP_ZEROTAGS_BIT,





 ___GFP_NOLOCKDEP_BIT,




 ___GFP_LAST_BIT
};
# 6 "../include/linux/gfp.h" 2

# 1 "../include/linux/mmzone.h" 1







# 1 "../include/linux/spinlock.h" 1
# 56 "../include/linux/spinlock.h"
# 1 "../include/linux/preempt.h" 1
# 79 "../include/linux/preempt.h"
# 1 "./arch/hexagon/include/generated/asm/preempt.h" 1
# 1 "../include/asm-generic/preempt.h" 1




# 1 "../include/linux/thread_info.h" 1
# 14 "../include/linux/thread_info.h"
# 1 "../include/linux/restart_block.h" 1
# 11 "../include/linux/restart_block.h"
struct __kernel_timespec;
struct timespec;
struct old_timespec32;
struct pollfd;

enum timespec_type {
 TT_NONE = 0,
 TT_NATIVE = 1,
 TT_COMPAT = 2,
};




struct restart_block {
 unsigned long arch_data;
 long (*fn)(struct restart_block *);
 union {

  struct {
   u32 *uaddr;
   u32 val;
   u32 flags;
   u32 bitset;
   u64 time;
   u32 *uaddr2;
  } futex;

  struct {
   clockid_t clockid;
   enum timespec_type type;
   union {
    struct __kernel_timespec *rmtp;
    struct old_timespec32 *compat_rmtp;
   };
   u64 expires;
  } nanosleep;

  struct {
   struct pollfd *ufds;
   int nfds;
   int has_timeout;
   unsigned long tv_sec;
   unsigned long tv_nsec;
  } poll;
 };
};

extern long do_no_restart_syscall(struct restart_block *parm);
# 15 "../include/linux/thread_info.h" 2
# 33 "../include/linux/thread_info.h"
enum {
 BAD_STACK = -1,
 NOT_STACK = 0,
 GOOD_FRAME,
 GOOD_STACK,
};
# 60 "../include/linux/thread_info.h"
# 1 "../arch/hexagon/include/asm/thread_info.h" 1
# 14 "../arch/hexagon/include/asm/thread_info.h"
# 1 "../arch/hexagon/include/asm/processor.h" 1
# 13 "../arch/hexagon/include/asm/processor.h"
# 1 "../arch/hexagon/include/asm/mem-layout.h" 1
# 29 "../arch/hexagon/include/asm/mem-layout.h"
extern unsigned long __phys_offset;
# 48 "../arch/hexagon/include/asm/mem-layout.h"
enum fixed_addresses {
 FIX_KMAP_BEGIN,
 FIX_KMAP_END,
 __end_of_fixed_addresses
};


extern int max_kernel_seg;
# 14 "../arch/hexagon/include/asm/processor.h" 2
# 1 "../arch/hexagon/include/uapi/asm/registers.h" 1
# 19 "../arch/hexagon/include/uapi/asm/registers.h"
struct hvm_event_record {
 unsigned long vmel;
 unsigned long vmest;
 unsigned long vmpsp;
 unsigned long vmbadva;
};

struct pt_regs {
 long restart_r0;
 long syscall_nr;
 union {
  struct {
   unsigned long usr;
   unsigned long preds;
  };
  long long int predsusr;
 };
 union {
  struct {
   unsigned long m0;
   unsigned long m1;
  };
  long long int m1m0;
 };
 union {
  struct {
   unsigned long sa1;
   unsigned long lc1;
  };
  long long int lc1sa1;
 };
 union {
  struct {
   unsigned long sa0;
   unsigned long lc0;
  };
  long long int lc0sa0;
 };
 union {
  struct {
   unsigned long ugp;
   unsigned long gp;
  };
  long long int gpugp;
 };
 union {
  struct {
   unsigned long cs0;
   unsigned long cs1;
  };
  long long int cs1cs0;
 };






 union {
  struct {
   unsigned long r00;
   unsigned long r01;
  };
  long long int r0100;
 };
 union {
  struct {
   unsigned long r02;
   unsigned long r03;
  };
  long long int r0302;
 };
 union {
  struct {
   unsigned long r04;
   unsigned long r05;
  };
  long long int r0504;
 };
 union {
  struct {
   unsigned long r06;
   unsigned long r07;
  };
  long long int r0706;
 };
 union {
  struct {
   unsigned long r08;
   unsigned long r09;
  };
  long long int r0908;
 };
 union {
        struct {
   unsigned long r10;
   unsigned long r11;
        };
        long long int r1110;
 };
 union {
        struct {
   unsigned long r12;
   unsigned long r13;
        };
        long long int r1312;
 };
 union {
        struct {
   unsigned long r14;
   unsigned long r15;
        };
        long long int r1514;
 };
 union {
  struct {
   unsigned long r16;
   unsigned long r17;
  };
  long long int r1716;
 };
 union {
  struct {
   unsigned long r18;
   unsigned long r19;
  };
  long long int r1918;
 };
 union {
  struct {
   unsigned long r20;
   unsigned long r21;
  };
  long long int r2120;
 };
 union {
  struct {
   unsigned long r22;
   unsigned long r23;
  };
  long long int r2322;
 };
 union {
  struct {
   unsigned long r24;
   unsigned long r25;
  };
  long long int r2524;
 };
 union {
  struct {
   unsigned long r26;
   unsigned long r27;
  };
  long long int r2726;
 };
 union {
  struct {
   unsigned long r28;
   unsigned long r29;
        };
        long long int r2928;
 };
 union {
  struct {
   unsigned long r30;
   unsigned long r31;
  };
  long long int r3130;
 };

 struct hvm_event_record hvmer;
};
# 15 "../arch/hexagon/include/asm/processor.h" 2
# 1 "../arch/hexagon/include/asm/hexagon_vm.h" 1
# 44 "../arch/hexagon/include/asm/hexagon_vm.h"
enum VM_CACHE_OPS {
 hvmc_ickill,
 hvmc_dckill,
 hvmc_l2kill,
 hvmc_dccleaninva,
 hvmc_icinva,
 hvmc_idsync,
 hvmc_fetch_cfg
};

enum VM_INT_OPS {
 hvmi_nop,
 hvmi_globen,
 hvmi_globdis,
 hvmi_locen,
 hvmi_locdis,
 hvmi_affinity,
 hvmi_get,
 hvmi_peek,
 hvmi_status,
 hvmi_post,
 hvmi_clear
};

extern void _K_VM_event_vector(void);

void __vmrte(void);
long __vmsetvec(void *);
long __vmsetie(long);
long __vmgetie(void);
long __vmintop(enum VM_INT_OPS, long, long, long, long);
long __vmclrmap(void *, unsigned long);
long __vmnewmap(void *);
long __vmcache(enum VM_CACHE_OPS op, unsigned long addr, unsigned long len);
unsigned long long __vmgettime(void);
long __vmsettime(unsigned long long);
long __vmstart(void *, void *);
void __vmstop(void);
long __vmwait(void);
void __vmyield(void);
long __vmvpid(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_ickill(void)
{
 return __vmcache(hvmc_ickill, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_dckill(void)
{
 return __vmcache(hvmc_dckill, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_l2kill(void)
{
 return __vmcache(hvmc_l2kill, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_dccleaninva(unsigned long addr, unsigned long len)
{
 return __vmcache(hvmc_dccleaninva, addr, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_icinva(unsigned long addr, unsigned long len)
{
 return __vmcache(hvmc_icinva, addr, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_idsync(unsigned long addr,
        unsigned long len)
{
 return __vmcache(hvmc_idsync, addr, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmcache_fetch_cfg(unsigned long val)
{
 return __vmcache(hvmc_fetch_cfg, val, 0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_nop(void)
{
 return __vmintop(hvmi_nop, 0, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_globen(long i)
{
 return __vmintop(hvmi_globen, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_globdis(long i)
{
 return __vmintop(hvmi_globdis, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_locen(long i)
{
 return __vmintop(hvmi_locen, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_locdis(long i)
{
 return __vmintop(hvmi_locdis, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_affinity(long i, long cpu)
{
 return __vmintop(hvmi_affinity, i, cpu, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_get(void)
{
 return __vmintop(hvmi_get, 0, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_peek(void)
{
 return __vmintop(hvmi_peek, 0, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_status(long i)
{
 return __vmintop(hvmi_status, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_post(long i)
{
 return __vmintop(hvmi_post, i, 0, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __vmintop_clear(long i)
{
 return __vmintop(hvmi_clear, i, 0, 0, 0);
}
# 16 "../arch/hexagon/include/asm/processor.h" 2


struct task_struct;

extern void start_thread(struct pt_regs *, unsigned long, unsigned long);





struct thread_struct {
 void *switch_sp;
};
# 63 "../arch/hexagon/include/asm/processor.h"
extern unsigned long __get_wchan(struct task_struct *p);
# 79 "../arch/hexagon/include/asm/processor.h"
struct hexagon_switch_stack {
 union {
  struct {
   unsigned long r16;
   unsigned long r17;
  };
  unsigned long long r1716;
 };
 union {
  struct {
   unsigned long r18;
   unsigned long r19;
  };
  unsigned long long r1918;
 };
 union {
  struct {
   unsigned long r20;
   unsigned long r21;
  };
  unsigned long long r2120;
 };
 union {
  struct {
   unsigned long r22;
   unsigned long r23;
  };
  unsigned long long r2322;
 };
 union {
  struct {
   unsigned long r24;
   unsigned long r25;
  };
  unsigned long long r2524;
 };
 union {
  struct {
   unsigned long r26;
   unsigned long r27;
  };
  unsigned long long r2726;
 };

 unsigned long fp;
 unsigned long lr;
};
# 15 "../arch/hexagon/include/asm/thread_info.h" 2

# 1 "../arch/hexagon/include/asm/page.h" 1
# 58 "../arch/hexagon/include/asm/page.h"
# 1 "../include/linux/pfn.h" 1
# 13 "../include/linux/pfn.h"
typedef struct {
 u64 val;
} pfn_t;
# 59 "../arch/hexagon/include/asm/page.h" 2






typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pgd; } pgd_t;
typedef struct { unsigned long pgprot; } pgprot_t;
typedef struct page *pgtable_t;
# 89 "../arch/hexagon/include/asm/page.h"
struct page;
# 100 "../arch/hexagon/include/asm/page.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_page(void *page)
{

 asm volatile(
  "	loop0(1f,%1);\n"
  "1:	{ dczeroa(%0);\n"
  "	  %0 = add(%0,#32); }:endloop0\n"
  : "+r" (page)
  : "r" ((1UL << 14)/32)
  : "lc0", "sa0", "memory"
 );
}
# 127 "../arch/hexagon/include/asm/page.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long virt_to_pfn(const void *kaddr)
{
 return ((unsigned long)(kaddr) - (0xc0000000UL) + __phys_offset) >> 14;
}




# 1 "../include/asm-generic/memory_model.h" 1
# 23 "../include/asm-generic/memory_model.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pfn_valid(unsigned long pfn)
{

 extern unsigned long max_mapnr;
 unsigned long pfn_offset = (__phys_offset >> 14);

 return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
}
# 136 "../arch/hexagon/include/asm/page.h" 2

# 1 "../include/asm-generic/getorder.h" 1







# 1 "../include/linux/log2.h" 1
# 21 "../include/linux/log2.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((const))
int __ilog2_u32(u32 n)
{
 return fls(n) - 1;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((const))
int __ilog2_u64(u64 n)
{
 return fls64(n) - 1;
}
# 44 "../include/linux/log2.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((const))
bool is_power_of_2(unsigned long n)
{
 return (n != 0 && ((n & (n - 1)) == 0));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((const))
unsigned long __roundup_pow_of_two(unsigned long n)
{
 return 1UL << fls_long(n - 1);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((const))
unsigned long __rounddown_pow_of_two(unsigned long n)
{
 return 1UL << (fls_long(n) - 1);
}
# 198 "../include/linux/log2.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__))
int __order_base_2(unsigned long n)
{
 return n > 1 ? ( __builtin_constant_p(n - 1) ? ((n - 1) < 2 ? 0 : 63 - __builtin_clzll(n - 1)) : (sizeof(n - 1) <= 4) ? __ilog2_u32(n - 1) : __ilog2_u64(n - 1) ) + 1 : 0;
}
# 225 "../include/linux/log2.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((const))
int __bits_per(unsigned long n)
{
 if (n < 2)
  return 1;
 if (is_power_of_2(n))
  return ( __builtin_constant_p(n) ? ( ((n) == 0 || (n) == 1) ? 0 : ( __builtin_constant_p((n) - 1) ? (((n) - 1) < 2 ? 0 : 63 - __builtin_clzll((n) - 1)) : (sizeof((n) - 1) <= 4) ? __ilog2_u32((n) - 1) : __ilog2_u64((n) - 1) ) + 1) : __order_base_2(n) ) + 1;
 return ( __builtin_constant_p(n) ? ( ((n) == 0 || (n) == 1) ? 0 : ( __builtin_constant_p((n) - 1) ? (((n) - 1) < 2 ? 0 : 63 - __builtin_clzll((n) - 1)) : (sizeof((n) - 1) <= 4) ? __ilog2_u32((n) - 1) : __ilog2_u64((n) - 1) ) + 1) : __order_base_2(n) );
}
# 9 "../include/asm-generic/getorder.h" 2
# 29 "../include/asm-generic/getorder.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__const__)) int get_order(unsigned long size)
{
 if (__builtin_constant_p(size)) {
  if (!size)
   return 32 - 14;

  if (size < (1UL << 14))
   return 0;

  return ( __builtin_constant_p((size) - 1) ? (((size) - 1) < 2 ? 0 : 63 - __builtin_clzll((size) - 1)) : (sizeof((size) - 1) <= 4) ? __ilog2_u32((size) - 1) : __ilog2_u64((size) - 1) ) - 14 + 1;
 }

 size--;
 size >>= 14;

 return fls(size);



}
# 138 "../arch/hexagon/include/asm/page.h" 2
# 17 "../arch/hexagon/include/asm/thread_info.h" 2
# 31 "../arch/hexagon/include/asm/thread_info.h"
struct thread_info {
 struct task_struct *task;
 unsigned long flags;
 __u32 cpu;
 int preempt_count;





 struct pt_regs *regs;





 unsigned long sp;
};
# 73 "../arch/hexagon/include/asm/thread_info.h"
register struct thread_info *__current_thread_info asm("r19");
# 61 "../include/linux/thread_info.h" 2







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long set_restart_fn(struct restart_block *restart,
     long (*fn)(struct restart_block *))
{
 restart->fn = fn;
 do { } while (0);
 return -516;
}
# 87 "../include/linux/thread_info.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_ti_thread_flag(struct thread_info *ti, int flag)
{
 set_bit(flag, (unsigned long *)&ti->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_ti_thread_flag(struct thread_info *ti, int flag)
{
 clear_bit(flag, (unsigned long *)&ti->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_ti_thread_flag(struct thread_info *ti, int flag,
      bool value)
{
 if (value)
  set_ti_thread_flag(ti, flag);
 else
  clear_ti_thread_flag(ti, flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_set_ti_thread_flag(struct thread_info *ti, int flag)
{
 return test_and_set_bit(flag, (unsigned long *)&ti->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_clear_ti_thread_flag(struct thread_info *ti, int flag)
{
 return test_and_clear_bit(flag, (unsigned long *)&ti->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_ti_thread_flag(struct thread_info *ti, int flag)
{
 return ((__builtin_constant_p(flag) && __builtin_constant_p((uintptr_t)((unsigned long *)&ti->flags) != (uintptr_t)((void *)0)) && (uintptr_t)((unsigned long *)&ti->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((unsigned long *)&ti->flags))) ? const_test_bit(flag, (unsigned long *)&ti->flags) : arch_test_bit(flag, (unsigned long *)&ti->flags));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long read_ti_thread_flags(struct thread_info *ti)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_6(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ti->flags) == sizeof(char) || sizeof(ti->flags) == sizeof(short) || sizeof(ti->flags) == sizeof(int) || sizeof(ti->flags) == sizeof(long)) || sizeof(ti->flags) == sizeof(long long))) __compiletime_assert_6(); } while (0); (*(const volatile typeof( _Generic((ti->flags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ti->flags))) *)&(ti->flags)); });
}
# 190 "../include/linux/thread_info.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool tif_need_resched(void)
{
 return ((__builtin_constant_p(3) && __builtin_constant_p((uintptr_t)((unsigned long *)(&__current_thread_info->flags)) != (uintptr_t)((void *)0)) && (uintptr_t)((unsigned long *)(&__current_thread_info->flags)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((unsigned long *)(&__current_thread_info->flags)))) ? const_test_bit(3, (unsigned long *)(&__current_thread_info->flags)) : arch_test_bit(3, (unsigned long *)(&__current_thread_info->flags)));

}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_within_stack_frames(const void * const stack,
        const void * const stackend,
        const void *obj, unsigned long len)
{
 return 0;
}



extern void __check_object_size(const void *ptr, unsigned long n,
     bool to_user);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void check_object_size(const void *ptr, unsigned long n,
           bool to_user)
{
 if (!__builtin_constant_p(n))
  __check_object_size(ptr, n, to_user);
}






extern void __attribute__((__error__("copy source size is too small")))
__bad_copy_from(void);
extern void __attribute__((__error__("copy destination size is too small")))
__bad_copy_to(void);

void __copy_overflow(int size, unsigned long count);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_overflow(int size, unsigned long count)
{
 if (1)
  __copy_overflow(size, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) bool
check_copy_size(const void *addr, size_t bytes, bool is_source)
{
 int sz = __builtin_object_size(addr, 0);
 if (__builtin_expect(!!(sz >= 0 && sz < bytes), 0)) {
  if (!__builtin_constant_p(bytes))
   copy_overflow(sz, bytes);
  else if (is_source)
   __bad_copy_from();
  else
   __bad_copy_to();
  return false;
 }
 if (({ bool __ret_do_once = !!(bytes > ((int)(~0U >> 1))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/thread_info.h", 249, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return false;
 check_object_size(addr, bytes, is_source);
 return true;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_setup_new_exec(void) { }


void arch_task_cache_init(void);
void arch_release_task_struct(struct task_struct *tsk);
int arch_dup_task_struct(struct task_struct *dst,
    struct task_struct *src);
# 6 "../include/asm-generic/preempt.h" 2



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int preempt_count(void)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_7(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(__current_thread_info->preempt_count) == sizeof(char) || sizeof(__current_thread_info->preempt_count) == sizeof(short) || sizeof(__current_thread_info->preempt_count) == sizeof(int) || sizeof(__current_thread_info->preempt_count) == sizeof(long)) || sizeof(__current_thread_info->preempt_count) == sizeof(long long))) __compiletime_assert_7(); } while (0); (*(const volatile typeof( _Generic((__current_thread_info->preempt_count), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (__current_thread_info->preempt_count))) *)&(__current_thread_info->preempt_count)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) volatile int *preempt_count_ptr(void)
{
 return &__current_thread_info->preempt_count;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void preempt_count_set(int pc)
{
 *preempt_count_ptr() = pc;
}
# 35 "../include/asm-generic/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void set_preempt_need_resched(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void clear_preempt_need_resched(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool test_preempt_need_resched(void)
{
 return false;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __preempt_count_add(int val)
{
 *preempt_count_ptr() += val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __preempt_count_sub(int val)
{
 *preempt_count_ptr() -= val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __preempt_count_dec_and_test(void)
{





 return !--*preempt_count_ptr() && tif_need_resched();
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool should_resched(int preempt_offset)
{
 return __builtin_expect(!!(preempt_count() == preempt_offset && tif_need_resched()), 0);

}
# 2 "./arch/hexagon/include/generated/asm/preempt.h" 2
# 80 "../include/linux/preempt.h" 2
# 90 "../include/linux/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned char interrupt_context_level(void)
{
 unsigned long pc = preempt_count();
 unsigned char level = 0;

 level += !!(pc & ((((1UL << (4))-1) << (((0 + 8) + 8) + 4))));
 level += !!(pc & ((((1UL << (4))-1) << (((0 + 8) + 8) + 4)) | (((1UL << (4))-1) << ((0 + 8) + 8))));
 level += !!(pc & ((((1UL << (4))-1) << (((0 + 8) + 8) + 4)) | (((1UL << (4))-1) << ((0 + 8) + 8)) | (1UL << (0 + 8))));

 return level;
}
# 433 "../include/linux/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void migrate_disable(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void migrate_enable(void) { }
# 474 "../include/linux/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void preempt_enable_nested(void)
{
 if (0)
  __asm__ __volatile__("": : :"memory");
}

typedef struct { void *lock; ; } class_preempt_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_preempt_destructor(class_preempt_t *_T) { if (_T->lock) { __asm__ __volatile__("": : :"memory"); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_preempt_lock_ptr(class_preempt_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_preempt_t class_preempt_constructor(void) { class_preempt_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; __asm__ __volatile__("": : :"memory"); return _t; }
typedef struct { void *lock; ; } class_preempt_notrace_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_preempt_notrace_destructor(class_preempt_notrace_t *_T) { if (_T->lock) { __asm__ __volatile__("": : :"memory"); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_preempt_notrace_lock_ptr(class_preempt_notrace_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_preempt_notrace_t class_preempt_notrace_constructor(void) { class_preempt_notrace_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; __asm__ __volatile__("": : :"memory"); return _t; }
typedef struct { void *lock; ; } class_migrate_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_migrate_destructor(class_migrate_t *_T) { if (_T->lock) { migrate_enable(); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_migrate_lock_ptr(class_migrate_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_migrate_t class_migrate_constructor(void) { class_migrate_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; migrate_disable(); return _t; }
# 492 "../include/linux/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool preempt_model_none(void)
{
 return 1;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool preempt_model_voluntary(void)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool preempt_model_full(void)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool preempt_model_rt(void)
{
 return 0;
}
# 520 "../include/linux/preempt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool preempt_model_preemptible(void)
{
 return preempt_model_full() || preempt_model_rt();
}
# 57 "../include/linux/spinlock.h" 2


# 1 "../include/linux/irqflags.h" 1
# 15 "../include/linux/irqflags.h"
# 1 "../include/linux/irqflags_types.h" 1







struct irqtrace_events {
 unsigned int irq_events;
 unsigned long hardirq_enable_ip;
 unsigned long hardirq_disable_ip;
 unsigned int hardirq_enable_event;
 unsigned int hardirq_disable_event;
 unsigned long softirq_disable_ip;
 unsigned long softirq_enable_ip;
 unsigned int softirq_disable_event;
 unsigned int softirq_enable_event;
};
# 16 "../include/linux/irqflags.h" 2


# 1 "../arch/hexagon/include/asm/irqflags.h" 1
# 14 "../arch/hexagon/include/asm/irqflags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long arch_local_save_flags(void)
{
 return __vmgetie();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long arch_local_irq_save(void)
{
 return __vmsetie(0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_irqs_disabled_flags(unsigned long flags)
{
 return !flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_irqs_disabled(void)
{
 return !__vmgetie();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_local_irq_enable(void)
{
 __vmsetie(1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_local_irq_disable(void)
{
 __vmsetie(0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_local_irq_restore(unsigned long flags)
{
 __vmsetie(flags);
}
# 19 "../include/linux/irqflags.h" 2
# 1 "./arch/hexagon/include/generated/asm/percpu.h" 1
# 1 "../include/asm-generic/percpu.h" 1





# 1 "../include/linux/threads.h" 1
# 7 "../include/asm-generic/percpu.h" 2
# 1 "../include/linux/percpu-defs.h" 1
# 308 "../include/linux/percpu-defs.h"
extern void __bad_size_call_parameter(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __this_cpu_preempt_check(const char *op) { }
# 8 "../include/asm-generic/percpu.h" 2
# 2 "./arch/hexagon/include/generated/asm/percpu.h" 2
# 20 "../include/linux/irqflags.h" 2



  extern void lockdep_softirqs_on(unsigned long ip);
  extern void lockdep_softirqs_off(unsigned long ip);
  extern void lockdep_hardirqs_on_prepare(void);
  extern void lockdep_hardirqs_on(unsigned long ip);
  extern void lockdep_hardirqs_off(unsigned long ip);
# 38 "../include/linux/irqflags.h"
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_hardirqs_enabled; extern __attribute__((section(".data" ""))) __typeof__(int) hardirqs_enabled;
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_hardirq_context; extern __attribute__((section(".data" ""))) __typeof__(int) hardirq_context;

extern void trace_hardirqs_on_prepare(void);
extern void trace_hardirqs_off_finish(void);
extern void trace_hardirqs_on(void);
extern void trace_hardirqs_off(void);
# 149 "../include/linux/irqflags.h"
extern void warn_bogus_irq_restore(void);
# 259 "../include/linux/irqflags.h"
typedef struct { void *lock; ; } class_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_irq_destructor(class_irq_t *_T) { if (_T->lock) { do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_irq_lock_ptr(class_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_irq_t class_irq_constructor(void) { class_irq_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0); return _t; }
typedef struct { void *lock; unsigned long flags; } class_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_irqsave_destructor(class_irqsave_t *_T) { if (_T->lock) { do { if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(_T->flags); } while (0); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_irqsave_lock_ptr(class_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_irqsave_t class_irqsave_constructor(void) { class_irqsave_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; do { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_off(); } while (0); return _t; }
# 60 "../include/linux/spinlock.h" 2


# 1 "../include/linux/bottom_half.h" 1




# 1 "../include/linux/instruction_pointer.h" 1
# 6 "../include/linux/bottom_half.h" 2



extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
# 18 "../include/linux/bottom_half.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_bh_disable(void)
{
 __local_bh_disable_ip(({ __label__ __here; __here: (unsigned long)&&__here; }), (2 * (1UL << (0 + 8))));
}

extern void _local_bh_enable(void);
extern void __local_bh_enable_ip(unsigned long ip, unsigned int cnt);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_bh_enable_ip(unsigned long ip)
{
 __local_bh_enable_ip(ip, (2 * (1UL << (0 + 8))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_bh_enable(void)
{
 __local_bh_enable_ip(({ __label__ __here; __here: (unsigned long)&&__here; }), (2 * (1UL << (0 + 8))));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool local_bh_blocked(void) { return false; }
# 63 "../include/linux/spinlock.h" 2
# 1 "../include/linux/lockdep.h" 1
# 14 "../include/linux/lockdep.h"
# 1 "../include/linux/smp.h" 1
# 12 "../include/linux/smp.h"
# 1 "../include/linux/list.h" 1




# 1 "../include/linux/container_of.h" 1
# 6 "../include/linux/list.h" 2


# 1 "../include/linux/poison.h" 1
# 9 "../include/linux/list.h" 2


# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 12 "../include/linux/list.h" 2
# 35 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void INIT_LIST_HEAD(struct list_head *list)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_8(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->next) == sizeof(char) || sizeof(list->next) == sizeof(short) || sizeof(list->next) == sizeof(int) || sizeof(list->next) == sizeof(long)) || sizeof(list->next) == sizeof(long long))) __compiletime_assert_8(); } while (0); do { *(volatile typeof(list->next) *)&(list->next) = (list); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_9(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->prev) == sizeof(char) || sizeof(list->prev) == sizeof(short) || sizeof(list->prev) == sizeof(int) || sizeof(list->prev) == sizeof(long)) || sizeof(list->prev) == sizeof(long long))) __compiletime_assert_9(); } while (0); do { *(volatile typeof(list->prev) *)&(list->prev) = (list); } while (0); } while (0);
}
# 128 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __list_add_valid(struct list_head *new,
    struct list_head *prev,
    struct list_head *next)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __list_del_entry_valid(struct list_head *entry)
{
 return true;
}
# 146 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_add(struct list_head *new,
         struct list_head *prev,
         struct list_head *next)
{
 if (!__list_add_valid(new, prev, next))
  return;

 next->prev = new;
 new->next = next;
 new->prev = prev;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_10(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(prev->next) == sizeof(char) || sizeof(prev->next) == sizeof(short) || sizeof(prev->next) == sizeof(int) || sizeof(prev->next) == sizeof(long)) || sizeof(prev->next) == sizeof(long long))) __compiletime_assert_10(); } while (0); do { *(volatile typeof(prev->next) *)&(prev->next) = (new); } while (0); } while (0);
}
# 167 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_add(struct list_head *new, struct list_head *head)
{
 __list_add(new, head, head->next);
}
# 181 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_add_tail(struct list_head *new, struct list_head *head)
{
 __list_add(new, head->prev, head);
}
# 193 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_del(struct list_head * prev, struct list_head * next)
{
 next->prev = prev;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_11(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(prev->next) == sizeof(char) || sizeof(prev->next) == sizeof(short) || sizeof(prev->next) == sizeof(int) || sizeof(prev->next) == sizeof(long)) || sizeof(prev->next) == sizeof(long long))) __compiletime_assert_11(); } while (0); do { *(volatile typeof(prev->next) *)&(prev->next) = (next); } while (0); } while (0);
}
# 207 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_del_clearprev(struct list_head *entry)
{
 __list_del(entry->prev, entry->next);
 entry->prev = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_del_entry(struct list_head *entry)
{
 if (!__list_del_entry_valid(entry))
  return;

 __list_del(entry->prev, entry->next);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_del(struct list_head *entry)
{
 __list_del_entry(entry);
 entry->next = ((void *) 0x100 + 0);
 entry->prev = ((void *) 0x122 + 0);
}
# 241 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_replace(struct list_head *old,
    struct list_head *new)
{
 new->next = old->next;
 new->next->prev = new;
 new->prev = old->prev;
 new->prev->next = new;
}
# 257 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_replace_init(struct list_head *old,
         struct list_head *new)
{
 list_replace(old, new);
 INIT_LIST_HEAD(old);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_swap(struct list_head *entry1,
        struct list_head *entry2)
{
 struct list_head *pos = entry2->prev;

 list_del(entry2);
 list_replace(entry1, entry2);
 if (pos == entry1)
  pos = entry2;
 list_add(entry1, pos);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_del_init(struct list_head *entry)
{
 __list_del_entry(entry);
 INIT_LIST_HEAD(entry);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_move(struct list_head *list, struct list_head *head)
{
 __list_del_entry(list);
 list_add(list, head);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_move_tail(struct list_head *list,
      struct list_head *head)
{
 __list_del_entry(list);
 list_add_tail(list, head);
}
# 323 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_bulk_move_tail(struct list_head *head,
           struct list_head *first,
           struct list_head *last)
{
 first->prev->next = last->next;
 last->next->prev = first->prev;

 head->prev->next = first;
 first->prev = head->prev;

 last->next = head;
 head->prev = last;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_is_first(const struct list_head *list, const struct list_head *head)
{
 return list->prev == head;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_is_last(const struct list_head *list, const struct list_head *head)
{
 return list->next == head;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_is_head(const struct list_head *list, const struct list_head *head)
{
 return list == head;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_empty(const struct list_head *head)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_12(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(head->next) == sizeof(char) || sizeof(head->next) == sizeof(short) || sizeof(head->next) == sizeof(int) || sizeof(head->next) == sizeof(long)) || sizeof(head->next) == sizeof(long long))) __compiletime_assert_12(); } while (0); (*(const volatile typeof( _Generic((head->next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (head->next))) *)&(head->next)); }) == head;
}
# 387 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_del_init_careful(struct list_head *entry)
{
 __list_del_entry(entry);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_13(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(entry->prev) == sizeof(char) || sizeof(entry->prev) == sizeof(short) || sizeof(entry->prev) == sizeof(int) || sizeof(entry->prev) == sizeof(long)) || sizeof(entry->prev) == sizeof(long long))) __compiletime_assert_13(); } while (0); do { *(volatile typeof(entry->prev) *)&(entry->prev) = (entry); } while (0); } while (0);
 do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_14(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&entry->next) == sizeof(char) || sizeof(*&entry->next) == sizeof(short) || sizeof(*&entry->next) == sizeof(int) || sizeof(*&entry->next) == sizeof(long)) || sizeof(*&entry->next) == sizeof(long long))) __compiletime_assert_14(); } while (0); do { *(volatile typeof(*&entry->next) *)&(*&entry->next) = (entry); } while (0); } while (0); } while (0);
}
# 407 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_empty_careful(const struct list_head *head)
{
 struct list_head *next = ({ typeof( _Generic((*&head->next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&head->next))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_15(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&head->next) == sizeof(char) || sizeof(*&head->next) == sizeof(short) || sizeof(*&head->next) == sizeof(int) || sizeof(*&head->next) == sizeof(long)) || sizeof(*&head->next) == sizeof(long long))) __compiletime_assert_15(); } while (0); (*(const volatile typeof( _Generic((*&head->next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&head->next))) *)&(*&head->next)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&head->next))___p1; });
 return list_is_head(next, head) && (next == ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_16(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(head->prev) == sizeof(char) || sizeof(head->prev) == sizeof(short) || sizeof(head->prev) == sizeof(int) || sizeof(head->prev) == sizeof(long)) || sizeof(head->prev) == sizeof(long long))) __compiletime_assert_16(); } while (0); (*(const volatile typeof( _Generic((head->prev), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (head->prev))) *)&(head->prev)); }));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_rotate_left(struct list_head *head)
{
 struct list_head *first;

 if (!list_empty(head)) {
  first = head->next;
  list_move_tail(first, head);
 }
}
# 434 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_rotate_to_front(struct list_head *list,
     struct list_head *head)
{





 list_move_tail(head, list);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int list_is_singular(const struct list_head *head)
{
 return !list_empty(head) && (head->next == head->prev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_cut_position(struct list_head *list,
  struct list_head *head, struct list_head *entry)
{
 struct list_head *new_first = entry->next;
 list->next = head->next;
 list->next->prev = list;
 list->prev = entry;
 entry->next = list;
 head->next = new_first;
 new_first->prev = head;
}
# 480 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_cut_position(struct list_head *list,
  struct list_head *head, struct list_head *entry)
{
 if (list_empty(head))
  return;
 if (list_is_singular(head) && !list_is_head(entry, head) && (entry != head->next))
  return;
 if (list_is_head(entry, head))
  INIT_LIST_HEAD(list);
 else
  __list_cut_position(list, head, entry);
}
# 507 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_cut_before(struct list_head *list,
       struct list_head *head,
       struct list_head *entry)
{
 if (head->next == entry) {
  INIT_LIST_HEAD(list);
  return;
 }
 list->next = head->next;
 list->next->prev = list;
 list->prev = entry->prev;
 list->prev->next = list;
 head->next = entry;
 entry->prev = head;
}
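The pointer surgery in list_cut_before() above is easier to follow with concrete nodes. A userspace sketch of the same logic (stand-in types, not the kernel's instrumented versions):

```c
#include <assert.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* list_cut_before(): move everything on @head that precedes @entry onto
 * @list, leaving @entry as the new first element of @head. */
static void list_cut_before(struct list_head *list, struct list_head *head,
			    struct list_head *entry)
{
	if (head->next == entry) {
		INIT_LIST_HEAD(list);	/* nothing precedes entry: empty cut */
		return;
	}
	list->next = head->next;
	list->next->prev = list;
	list->prev = entry->prev;
	list->prev->next = list;
	head->next = entry;
	entry->prev = head;
}
```

Cutting before c in [a, b, c] leaves head = [c] and list = [a, b].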

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_splice(const struct list_head *list,
     struct list_head *prev,
     struct list_head *next)
{
 struct list_head *first = list->next;
 struct list_head *last = list->prev;

 first->prev = prev;
 prev->next = first;

 last->next = next;
 next->prev = last;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice(const struct list_head *list,
    struct list_head *head)
{
 if (!list_empty(list))
  __list_splice(list, head, head->next);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice_tail(struct list_head *list,
    struct list_head *head)
{
 if (!list_empty(list))
  __list_splice(list, head->prev, head);
}
# 568 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice_init(struct list_head *list,
        struct list_head *head)
{
 if (!list_empty(list)) {
  __list_splice(list, head, head->next);
  INIT_LIST_HEAD(list);
 }
}
# 585 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice_tail_init(struct list_head *list,
      struct list_head *head)
{
 if (!list_empty(list)) {
  __list_splice(list, head->prev, head);
  INIT_LIST_HEAD(list);
 }
}
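The splice family above all funnels into __list_splice(), which stitches a donor list between two nodes of the target; the _init variants then reinitialize the donor head so its stale pointers cannot be followed. A self-contained sketch of the tail variant (simplified stand-in types):

```c
#include <assert.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* __list_splice(): stitch the nodes of @list in between @prev and @next. */
static void __list_splice(const struct list_head *list,
			  struct list_head *prev, struct list_head *next)
{
	struct list_head *first = list->next;
	struct list_head *last = list->prev;

	first->prev = prev;
	prev->next = first;
	last->next = next;
	next->prev = last;
}

/* list_splice_tail_init(): append all of @list to @head, then reinitialize
 * @list so it is empty and safe to reuse. */
static void list_splice_tail_init(struct list_head *list,
				  struct list_head *head)
{
	if (!list_empty(list)) {
		__list_splice(list, head->prev, head);
		INIT_LIST_HEAD(list);
	}
}
```

Splicing [b, c] onto the tail of [a] yields [a, b, c] and an empty donor.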
# 751 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t list_count_nodes(struct list_head *head)
{
 struct list_head *pos;
 size_t count = 0;

 for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next)
  count++;

 return count;
}
# 942 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void INIT_HLIST_NODE(struct hlist_node *h)
{
 h->next = ((void *)0);
 h->pprev = ((void *)0);
}
# 956 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_unhashed(const struct hlist_node *h)
{
 return !h->pprev;
}
# 969 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_unhashed_lockless(const struct hlist_node *h)
{
 return !({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_17(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->pprev) == sizeof(char) || sizeof(h->pprev) == sizeof(short) || sizeof(h->pprev) == sizeof(int) || sizeof(h->pprev) == sizeof(long)) || sizeof(h->pprev) == sizeof(long long))) __compiletime_assert_17(); } while (0); (*(const volatile typeof( _Generic((h->pprev), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (h->pprev))) *)&(h->pprev)); });
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_empty(const struct hlist_head *h)
{
 return !({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_18(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->first) == sizeof(char) || sizeof(h->first) == sizeof(short) || sizeof(h->first) == sizeof(int) || sizeof(h->first) == sizeof(long)) || sizeof(h->first) == sizeof(long long))) __compiletime_assert_18(); } while (0); (*(const volatile typeof( _Generic((h->first), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (h->first))) *)&(h->first)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __hlist_del(struct hlist_node *n)
{
 struct hlist_node *next = n->next;
 struct hlist_node **pprev = n->pprev;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_19(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pprev) == sizeof(char) || sizeof(*pprev) == sizeof(short) || sizeof(*pprev) == sizeof(int) || sizeof(*pprev) == sizeof(long)) || sizeof(*pprev) == sizeof(long long))) __compiletime_assert_19(); } while (0); do { *(volatile typeof(*pprev) *)&(*pprev) = (next); } while (0); } while (0);
 if (next)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_20(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->pprev) == sizeof(char) || sizeof(next->pprev) == sizeof(short) || sizeof(next->pprev) == sizeof(int) || sizeof(next->pprev) == sizeof(long)) || sizeof(next->pprev) == sizeof(long long))) __compiletime_assert_20(); } while (0); do { *(volatile typeof(next->pprev) *)&(next->pprev) = (pprev); } while (0); } while (0);
}
# 1000 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_del(struct hlist_node *n)
{
 __hlist_del(n);
 n->next = ((void *) 0x100 + 0);
 n->pprev = ((void *) 0x122 + 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_del_init(struct hlist_node *n)
{
 if (!hlist_unhashed(n)) {
  __hlist_del(n);
  INIT_HLIST_NODE(n);
 }
}
# 1029 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
 struct hlist_node *first = h->first;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_21(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next) == sizeof(char) || sizeof(n->next) == sizeof(short) || sizeof(n->next) == sizeof(int) || sizeof(n->next) == sizeof(long)) || sizeof(n->next) == sizeof(long long))) __compiletime_assert_21(); } while (0); do { *(volatile typeof(n->next) *)&(n->next) = (first); } while (0); } while (0);
 if (first)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_22(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(first->pprev) == sizeof(char) || sizeof(first->pprev) == sizeof(short) || sizeof(first->pprev) == sizeof(int) || sizeof(first->pprev) == sizeof(long)) || sizeof(first->pprev) == sizeof(long long))) __compiletime_assert_22(); } while (0); do { *(volatile typeof(first->pprev) *)&(first->pprev) = (&n->next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_23(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->first) == sizeof(char) || sizeof(h->first) == sizeof(short) || sizeof(h->first) == sizeof(int) || sizeof(h->first) == sizeof(long)) || sizeof(h->first) == sizeof(long long))) __compiletime_assert_23(); } while (0); do { *(volatile typeof(h->first) *)&(h->first) = (n); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_24(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_24(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&h->first); } while (0); } while (0);
}
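The hlist helpers above are dense because of the WRITE_ONCE() expansions, but the underlying trick is just pprev: it points at the previous node's next field (or at the head's first), so deletion needs no special case for the first node. A userspace sketch with stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's hlist types (illustrative only). */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	struct hlist_node *first = h->first;

	n->next = first;
	if (first)
		first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

static void __hlist_del(struct hlist_node *n)
{
	struct hlist_node *next = n->next;
	struct hlist_node **pprev = n->pprev;

	*pprev = next;		/* works whether n was first or mid-chain */
	if (next)
		next->pprev = pprev;
}
```

Deleting the head node goes through exactly the same two stores as deleting any interior node, which is the point of the double-pointer design.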






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_before(struct hlist_node *n,
        struct hlist_node *next)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_25(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_25(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (next->pprev); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_26(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next) == sizeof(char) || sizeof(n->next) == sizeof(short) || sizeof(n->next) == sizeof(int) || sizeof(n->next) == sizeof(long)) || sizeof(n->next) == sizeof(long long))) __compiletime_assert_26(); } while (0); do { *(volatile typeof(n->next) *)&(n->next) = (next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_27(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->pprev) == sizeof(char) || sizeof(next->pprev) == sizeof(short) || sizeof(next->pprev) == sizeof(int) || sizeof(next->pprev) == sizeof(long)) || sizeof(next->pprev) == sizeof(long long))) __compiletime_assert_27(); } while (0); do { *(volatile typeof(next->pprev) *)&(next->pprev) = (&n->next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_28(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*(n->pprev)) == sizeof(char) || sizeof(*(n->pprev)) == sizeof(short) || sizeof(*(n->pprev)) == sizeof(int) || sizeof(*(n->pprev)) == sizeof(long)) || sizeof(*(n->pprev)) == sizeof(long long))) __compiletime_assert_28(); } while (0); do { *(volatile typeof(*(n->pprev)) *)&(*(n->pprev)) = (n); } while (0); } while (0);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_behind(struct hlist_node *n,
        struct hlist_node *prev)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_29(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next) == sizeof(char) || sizeof(n->next) == sizeof(short) || sizeof(n->next) == sizeof(int) || sizeof(n->next) == sizeof(long)) || sizeof(n->next) == sizeof(long long))) __compiletime_assert_29(); } while (0); do { *(volatile typeof(n->next) *)&(n->next) = (prev->next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_30(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(prev->next) == sizeof(char) || sizeof(prev->next) == sizeof(short) || sizeof(prev->next) == sizeof(int) || sizeof(prev->next) == sizeof(long)) || sizeof(prev->next) == sizeof(long long))) __compiletime_assert_30(); } while (0); do { *(volatile typeof(prev->next) *)&(prev->next) = (n); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_31(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_31(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&prev->next); } while (0); } while (0);

 if (n->next)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_32(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next->pprev) == sizeof(char) || sizeof(n->next->pprev) == sizeof(short) || sizeof(n->next->pprev) == sizeof(int) || sizeof(n->next->pprev) == sizeof(long)) || sizeof(n->next->pprev) == sizeof(long long))) __compiletime_assert_32(); } while (0); do { *(volatile typeof(n->next->pprev) *)&(n->next->pprev) = (&n->next); } while (0); } while (0);
}
# 1077 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_fake(struct hlist_node *n)
{
 n->pprev = &n->next;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hlist_fake(struct hlist_node *h)
{
 return h->pprev == &h->next;
}
# 1099 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
{
 return !n->next && n->pprev == &h->first;
}
# 1113 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_move_list(struct hlist_head *old,
       struct hlist_head *new)
{
 new->first = old->first;
 if (new->first)
  new->first->pprev = &new->first;
 old->first = ((void *)0);
}
# 1130 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_splice_init(struct hlist_head *from,
         struct hlist_node *last,
         struct hlist_head *to)
{
 if (to->first)
  to->first->pprev = &last->next;
 last->next = to->first;
 to->first = from->first;
 from->first->pprev = &to->first;
 from->first = ((void *)0);
}
# 1202 "../include/linux/list.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t hlist_count_nodes(struct hlist_head *head)
{
 struct hlist_node *pos;
 size_t count = 0;

 for (pos = (head)->first; pos ; pos = pos->next)
  count++;

 return count;
}
# 13 "../include/linux/smp.h" 2
# 1 "../include/linux/cpumask.h" 1
# 11 "../include/linux/cpumask.h"
# 1 "../include/linux/kernel.h" 1
# 24 "../include/linux/kernel.h"
# 1 "../include/linux/hex.h" 1






extern const char hex_asc[];



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char *hex_byte_pack(char *buf, u8 byte)
{
 *buf++ = hex_asc[((byte) & 0xf0) >> 4];
 *buf++ = hex_asc[((byte) & 0x0f)];
 return buf;
}
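hex_byte_pack() returns the advanced buffer pointer so calls chain naturally when dumping byte arrays. A self-contained sketch (the digit table is the same sequence the kernel exports as hex_asc[]):

```c
#include <assert.h>
#include <string.h>

typedef unsigned char u8;

static const char hex_asc[] = "0123456789abcdef";

/* hex_byte_pack(): emit two lower-case hex digits for @byte and return the
 * advanced buffer pointer, so successive calls can be chained. */
static char *hex_byte_pack(char *buf, u8 byte)
{
	*buf++ = hex_asc[(byte & 0xf0) >> 4];
	*buf++ = hex_asc[byte & 0x0f];
	return buf;
}
```

Three chained calls on 0xde, 0xad, 0x0f produce the string "dead0f".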

extern const char hex_asc_upper[];



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char *hex_byte_pack_upper(char *buf, u8 byte)
{
 *buf++ = hex_asc_upper[((byte) & 0xf0) >> 4];
 *buf++ = hex_asc_upper[((byte) & 0x0f)];
 return buf;
}

extern int hex_to_bin(unsigned char ch);
extern int __attribute__((__warn_unused_result__)) hex2bin(u8 *dst, const char *src, size_t count);
extern char *bin2hex(char *dst, const void *src, size_t count);

bool mac_pton(const char *s, u8 *mac);
# 25 "../include/linux/kernel.h" 2
# 1 "../include/linux/kstrtox.h" 1








int __attribute__((__warn_unused_result__)) _kstrtoul(const char *s, unsigned int base, unsigned long *res);
int __attribute__((__warn_unused_result__)) _kstrtol(const char *s, unsigned int base, long *res);

int __attribute__((__warn_unused_result__)) kstrtoull(const char *s, unsigned int base, unsigned long long *res);
int __attribute__((__warn_unused_result__)) kstrtoll(const char *s, unsigned int base, long long *res);
# 30 "../include/linux/kstrtox.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtoul(const char *s, unsigned int base, unsigned long *res)
{




 if (sizeof(unsigned long) == sizeof(unsigned long long) &&
     __alignof__(unsigned long) == __alignof__(unsigned long long))
  return kstrtoull(s, base, (unsigned long long *)res);
 else
  return _kstrtoul(s, base, res);
}
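The kstrtoul() body above looks odd until you see the trick: when unsigned long and unsigned long long have identical size and alignment (as on 64-bit targets), the result pointer can be forwarded straight to the 64-bit parser, so only one out-of-line implementation exists. An illustrative userspace sketch of that dispatch; strtoull() stands in for the kernel's kstrtoull()/_kstrtoul(), and the error handling is simplified:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

static int parse_ull(const char *s, unsigned int base, unsigned long long *res)
{
	char *end;

	errno = 0;
	*res = strtoull(s, &end, base);
	return (errno || *end || end == s) ? -1 : 0;
}

static int parse_ul(const char *s, unsigned int base, unsigned long *res)
{
	/* Same size and alignment: reuse the 64-bit parser directly, as
	 * kstrtoul() does. */
	if (sizeof(unsigned long) == sizeof(unsigned long long) &&
	    __alignof__(unsigned long) == __alignof__(unsigned long long))
		return parse_ull(s, base, (unsigned long long *)res);

	/* ILP32 fallback, standing in for the kernel's separate _kstrtoul(). */
	unsigned long long tmp;

	if (parse_ull(s, base, &tmp) || tmp > ULONG_MAX)
		return -1;
	*res = (unsigned long)tmp;
	return 0;
}
```

On hexagon (32-bit), the compile-time sizeof test is false, so the fallback branch is the one that survives dead-code elimination.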
# 58 "../include/linux/kstrtox.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtol(const char *s, unsigned int base, long *res)
{




 if (sizeof(long) == sizeof(long long) &&
     __alignof__(long) == __alignof__(long long))
  return kstrtoll(s, base, (long long *)res);
 else
  return _kstrtol(s, base, res);
}

int __attribute__((__warn_unused_result__)) kstrtouint(const char *s, unsigned int base, unsigned int *res);
int __attribute__((__warn_unused_result__)) kstrtoint(const char *s, unsigned int base, int *res);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtou64(const char *s, unsigned int base, u64 *res)
{
 return kstrtoull(s, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtos64(const char *s, unsigned int base, s64 *res)
{
 return kstrtoll(s, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtou32(const char *s, unsigned int base, u32 *res)
{
 return kstrtouint(s, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtos32(const char *s, unsigned int base, s32 *res)
{
 return kstrtoint(s, base, res);
}

int __attribute__((__warn_unused_result__)) kstrtou16(const char *s, unsigned int base, u16 *res);
int __attribute__((__warn_unused_result__)) kstrtos16(const char *s, unsigned int base, s16 *res);
int __attribute__((__warn_unused_result__)) kstrtou8(const char *s, unsigned int base, u8 *res);
int __attribute__((__warn_unused_result__)) kstrtos8(const char *s, unsigned int base, s8 *res);
int __attribute__((__warn_unused_result__)) kstrtobool(const char *s, bool *res);

int __attribute__((__warn_unused_result__)) kstrtoull_from_user(const char *s, size_t count, unsigned int base, unsigned long long *res);
int __attribute__((__warn_unused_result__)) kstrtoll_from_user(const char *s, size_t count, unsigned int base, long long *res);
int __attribute__((__warn_unused_result__)) kstrtoul_from_user(const char *s, size_t count, unsigned int base, unsigned long *res);
int __attribute__((__warn_unused_result__)) kstrtol_from_user(const char *s, size_t count, unsigned int base, long *res);
int __attribute__((__warn_unused_result__)) kstrtouint_from_user(const char *s, size_t count, unsigned int base, unsigned int *res);
int __attribute__((__warn_unused_result__)) kstrtoint_from_user(const char *s, size_t count, unsigned int base, int *res);
int __attribute__((__warn_unused_result__)) kstrtou16_from_user(const char *s, size_t count, unsigned int base, u16 *res);
int __attribute__((__warn_unused_result__)) kstrtos16_from_user(const char *s, size_t count, unsigned int base, s16 *res);
int __attribute__((__warn_unused_result__)) kstrtou8_from_user(const char *s, size_t count, unsigned int base, u8 *res);
int __attribute__((__warn_unused_result__)) kstrtos8_from_user(const char *s, size_t count, unsigned int base, s8 *res);
int __attribute__((__warn_unused_result__)) kstrtobool_from_user(const char *s, size_t count, bool *res);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtou64_from_user(const char *s, size_t count, unsigned int base, u64 *res)
{
 return kstrtoull_from_user(s, count, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtos64_from_user(const char *s, size_t count, unsigned int base, s64 *res)
{
 return kstrtoll_from_user(s, count, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtou32_from_user(const char *s, size_t count, unsigned int base, u32 *res)
{
 return kstrtouint_from_user(s, count, base, res);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kstrtos32_from_user(const char *s, size_t count, unsigned int base, s32 *res)
{
 return kstrtoint_from_user(s, count, base, res);
}
# 145 "../include/linux/kstrtox.h"
extern unsigned long simple_strtoul(const char *,char **,unsigned int);
extern long simple_strtol(const char *,char **,unsigned int);
extern unsigned long long simple_strtoull(const char *,char **,unsigned int);
extern long long simple_strtoll(const char *,char **,unsigned int);
# 26 "../include/linux/kernel.h" 2

# 1 "../include/linux/math.h" 1





# 1 "./arch/hexagon/include/generated/asm/div64.h" 1
# 1 "../include/asm-generic/div64.h" 1
# 171 "../include/asm-generic/div64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uint64_t __arch_xprod_64(const uint64_t m, uint64_t n, bool bias)
{
 uint32_t m_lo = m;
 uint32_t m_hi = m >> 32;
 uint32_t n_lo = n;
 uint32_t n_hi = n >> 32;
 uint64_t res;
 uint32_t res_lo, res_hi, tmp;

 if (!bias) {
  res = ((uint64_t)m_lo * n_lo) >> 32;
 } else if (!(m & ((1ULL << 63) | (1ULL << 31)))) {

  res = (m + (uint64_t)m_lo * n_lo) >> 32;
 } else {
  res = m + (uint64_t)m_lo * n_lo;
  res_lo = res >> 32;
  res_hi = (res_lo < m_hi);
  res = res_lo | ((uint64_t)res_hi << 32);
 }

 if (!(m & ((1ULL << 63) | (1ULL << 31)))) {

  res += (uint64_t)m_lo * n_hi;
  res += (uint64_t)m_hi * n_lo;
  res >>= 32;
 } else {
  res += (uint64_t)m_lo * n_hi;
  tmp = res >> 32;
  res += (uint64_t)m_hi * n_lo;
  res_lo = res >> 32;
  res_hi = (res_lo < tmp);
  res = res_lo | ((uint64_t)res_hi << 32);
 }

 res += (uint64_t)m_hi * n_hi;

 return res;
}
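__arch_xprod_64() above computes the high 64 bits of the 128-bit product m * n, optionally adding m ("bias") into the product first; the branching on bit 63/31 of m exists so the compiler can fold the carry handling away when m is constant. The plain schoolbook form of the same arithmetic, as a portable sketch:

```c
#include <assert.h>
#include <stdint.h>

/* xprod_64_hi(): (m * n + (bias ? m : 0)) >> 64, via 32x32->64 partial
 * products, without needing a 128-bit type. */
static uint64_t xprod_64_hi(uint64_t m, uint64_t n, int bias)
{
	uint32_t m_lo = (uint32_t)m, m_hi = (uint32_t)(m >> 32);
	uint32_t n_lo = (uint32_t)n, n_hi = (uint32_t)(n >> 32);

	uint64_t lo = (uint64_t)m_lo * n_lo;
	uint64_t t  = (lo >> 32) + (uint64_t)m_hi * n_lo;  /* cannot overflow */
	uint64_t u  = (t & 0xffffffff) + (uint64_t)m_lo * n_hi;
	uint64_t hi = (uint64_t)m_hi * n_hi + (t >> 32) + (u >> 32);

	if (bias) {
		/* Add m to the low 64 bits of the product; propagate the
		 * carry into the high word. */
		uint64_t low64 = (u << 32) | (lo & 0xffffffff);

		hi += (low64 + m) < low64;
	}
	return hi;
}
```

For m = n = 2^64 - 1 the full product is 2^128 - 2^65 + 1, so the high word is 2^64 - 2; biasing by m pushes it to 2^64 - 1.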



extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor);
# 2 "./arch/hexagon/include/generated/asm/div64.h" 2
# 7 "../include/linux/math.h" 2
# 115 "../include/linux/math.h"
struct s8_fract { __s8 numerator; __s8 denominator; };
struct u8_fract { __u8 numerator; __u8 denominator; };
struct s16_fract { __s16 numerator; __s16 denominator; };
struct u16_fract { __u16 numerator; __u16 denominator; };
struct s32_fract { __s32 numerator; __s32 denominator; };
struct u32_fract { __u32 numerator; __u32 denominator; };
# 193 "../include/linux/math.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 reciprocal_scale(u32 val, u32 ep_ro)
{
 return (u32)(((u64) val * ep_ro) >> 32);
}
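reciprocal_scale() maps a full-range 32-bit value into the right-open interval [0, ep_ro) with a multiply and shift instead of a modulo, which matters on hot paths (hash-to-bucket mapping and the like). A directly runnable copy of the arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* reciprocal_scale(): take the high 32 bits of val * ep_ro, which maps
 * val in [0, 2^32) proportionally onto [0, ep_ro) without a division. */
static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
{
	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
}
```

Unlike `val % ep_ro`, this uses the high bits of val, so it pairs well with hash functions whose high bits are well mixed.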

u64 int_pow(u64 base, unsigned int exp);
unsigned long int_sqrt(unsigned long);


u32 int_sqrt64(u64 x);
# 28 "../include/linux/kernel.h" 2
# 1 "../include/linux/minmax.h" 1
# 242 "../include/linux/minmax.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool in_range64(u64 val, u64 start, u64 len)
{
 return (val - start) < len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool in_range32(u32 val, u32 start, u32 len)
{
 return (val - start) < len;
}
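Both in_range helpers above rely on unsigned wraparound: when val < start, the subtraction val - start wraps to a huge value, so a single compare checks start <= val < start + len even when start + len itself would wrap. The 32-bit form, runnable as-is:

```c
#include <assert.h>
#include <stdint.h>

/* in_range32(): one unsigned compare covers both bounds of the half-open
 * range [start, start + len), including ranges that wrap past 2^32. */
static int in_range32(uint32_t val, uint32_t start, uint32_t len)
{
	return (val - start) < len;
}
```

For example, a range starting at 0xfffffff0 of length 0x20 wraps past zero, and values on either side of the wrap test correctly.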
# 29 "../include/linux/kernel.h" 2




# 1 "../include/linux/sprintf.h" 1







int num_to_str(char *buf, int size, unsigned long long num, unsigned int width);

__attribute__((__format__(printf, 2, 3))) int sprintf(char *buf, const char * fmt, ...);
__attribute__((__format__(printf, 2, 0))) int vsprintf(char *buf, const char *, va_list);
__attribute__((__format__(printf, 3, 4))) int snprintf(char *buf, size_t size, const char *fmt, ...);
__attribute__((__format__(printf, 3, 0))) int vsnprintf(char *buf, size_t size, const char *fmt, va_list args);
__attribute__((__format__(printf, 3, 4))) int scnprintf(char *buf, size_t size, const char *fmt, ...);
__attribute__((__format__(printf, 3, 0))) int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
__attribute__((__format__(printf, 2, 3))) __attribute__((__malloc__)) char *kasprintf(gfp_t gfp, const char *fmt, ...);
__attribute__((__format__(printf, 2, 0))) __attribute__((__malloc__)) char *kvasprintf(gfp_t gfp, const char *fmt, va_list args);
__attribute__((__format__(printf, 2, 0))) const char *kvasprintf_const(gfp_t gfp, const char *fmt, va_list args);

__attribute__((__format__(scanf, 2, 3))) int sscanf(const char *, const char *, ...);
__attribute__((__format__(scanf, 2, 0))) int vsscanf(const char *, const char *, va_list);


extern bool no_hash_pointers;
int no_hash_pointers_enable(char *str);
# 34 "../include/linux/kernel.h" 2
# 1 "../include/linux/static_call_types.h" 1
# 32 "../include/linux/static_call_types.h"
struct static_call_site {
 s32 addr;
 s32 key;
};
# 94 "../include/linux/static_call_types.h"
struct static_call_key {
 void *func;
};
# 35 "../include/linux/kernel.h" 2

# 1 "../include/linux/wordpart.h" 1
# 37 "../include/linux/kernel.h" 2
# 57 "../include/linux/kernel.h"
struct completion;
struct user;
# 145 "../include/linux/kernel.h"
  static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __might_resched(const char *file, int line,
         unsigned int offsets) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __might_sleep(const char *file, int line) { }
# 161 "../include/linux/kernel.h"
void __might_fault(const char *file, int line);




void do_exit(long error_code) __attribute__((__noreturn__));

extern int core_kernel_text(unsigned long addr);
extern int __kernel_text_address(unsigned long addr);
extern int kernel_text_address(unsigned long addr);
extern int func_ptr_is_kernel_text(void *ptr);

extern void bust_spinlocks(int yes);

extern int root_mountflags;

extern bool early_boot_irqs_disabled;





extern enum system_states {
 SYSTEM_BOOTING,
 SYSTEM_SCHEDULING,
 SYSTEM_FREEING_INITMEM,
 SYSTEM_RUNNING,
 SYSTEM_HALT,
 SYSTEM_POWER_OFF,
 SYSTEM_RESTART,
 SYSTEM_SUSPEND,
} system_state;
# 214 "../include/linux/kernel.h"
enum ftrace_dump_mode {
 DUMP_NONE,
 DUMP_ALL,
 DUMP_ORIG,
 DUMP_PARAM,
};


void tracing_on(void);
void tracing_off(void);
int tracing_is_on(void);
void tracing_snapshot(void);
void tracing_snapshot_alloc(void);

extern void tracing_start(void);
extern void tracing_stop(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 2)))
void ____trace_printk_check_format(const char *fmt, ...)
{
}
# 294 "../include/linux/kernel.h"
extern __attribute__((__format__(printf, 2, 3)))
int __trace_bprintk(unsigned long ip, const char *fmt, ...);

extern __attribute__((__format__(printf, 2, 3)))
int __trace_printk(unsigned long ip, const char *fmt, ...);
# 335 "../include/linux/kernel.h"
extern int __trace_bputs(unsigned long ip, const char *str);
extern int __trace_puts(unsigned long ip, const char *str, int size);

extern void trace_dump_stack(int skip);
# 357 "../include/linux/kernel.h"
extern __attribute__((__format__(printf, 2, 0))) int
__ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap);

extern __attribute__((__format__(printf, 2, 0))) int
__ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap);

extern void ftrace_dump(enum ftrace_dump_mode oops_dump_mode);
# 12 "../include/linux/cpumask.h" 2

# 1 "../include/linux/cpumask_types.h" 1








typedef struct cpumask { unsigned long bits[(((1) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))]; } cpumask_t;
# 63 "../include/linux/cpumask_types.h"
typedef struct cpumask cpumask_var_t[1];
# 14 "../include/linux/cpumask.h" 2



# 1 "../include/linux/numa.h" 1
# 18 "../include/linux/numa.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool numa_valid_node(int nid)
{
 return nid >= 0 && nid < (1 << 0);
}
# 47 "../include/linux/numa.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int numa_nearest_node(int node, unsigned int state)
{
 return (-1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int memory_add_physaddr_to_nid(u64 start)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int phys_to_target_node(u64 start)
{
 return 0;
}
# 18 "../include/linux/cpumask.h" 2
# 33 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_nr_cpu_ids(unsigned int nr)
{

 ({ int __ret_warn_on = !!(nr != ((unsigned int)1)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/cpumask.h", 36, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });



}
# 115 "../include/linux/cpumask.h"
extern struct cpumask __cpu_possible_mask;
extern struct cpumask __cpu_online_mask;
extern struct cpumask __cpu_enabled_mask;
extern struct cpumask __cpu_present_mask;
extern struct cpumask __cpu_active_mask;
extern struct cpumask __cpu_dying_mask;







extern atomic_t __num_online_cpus;

extern cpumask_t cpus_booted_once_mask;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void cpu_max_bits_warn(unsigned int cpu, unsigned int bits)
{



}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned int cpumask_check(unsigned int cpu)
{
 cpu_max_bits_warn(cpu, ((unsigned int)1));
 return cpu;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_first(const struct cpumask *srcp)
{
 return find_first_bit(((srcp)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_first_zero(const struct cpumask *srcp)
{
 return find_first_zero_bit(((srcp)->bits), ((unsigned int)1));
}
# 175 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_first_and(const struct cpumask *srcp1, const struct cpumask *srcp2)
{
 return find_first_and_bit(((srcp1)->bits), ((srcp2)->bits), ((unsigned int)1));
}
# 189 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_first_and_and(const struct cpumask *srcp1,
       const struct cpumask *srcp2,
       const struct cpumask *srcp3)
{
 return find_first_and_and_bit(((srcp1)->bits), ((srcp2)->bits),
          ((srcp3)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_last(const struct cpumask *srcp)
{
 return find_last_bit(((srcp)->bits), ((unsigned int)1));
}
# 216 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_next(int n, const struct cpumask *srcp)
{

 if (n != -1)
  cpumask_check(n);
 return find_next_bit(((srcp)->bits), ((unsigned int)1), n + 1);
}
# 232 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
{

 if (n != -1)
  cpumask_check(n);
 return find_next_zero_bit(((srcp)->bits), ((unsigned int)1), n+1);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_local_spread(unsigned int i, int node)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
            const struct cpumask *src2p)
{
 return cpumask_first_and(src1p, src2p);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_any_distribute(const struct cpumask *srcp)
{
 return cpumask_first(srcp);
}
# 272 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
       const struct cpumask *src2p)
{

 if (n != -1)
  cpumask_check(n);
 return find_next_and_bit(((src1p)->bits), ((src2p)->bits),
  ((unsigned int)1), n + 1);
}
# 294 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
{
 cpumask_check(start);
 if (n != -1)
  cpumask_check(n);





 if (wrap && n >= 0)
  return ((unsigned int)1);

 return cpumask_first(mask);
}
# 397 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
{
 unsigned int i;

 cpumask_check(cpu);
 for ((i) = 0; (i) = find_next_bit((((mask)->bits)), (((unsigned int)1)), (i)), (i) < (((unsigned int)1)); (i)++)
  if (i != cpu)
   break;
 return i;
}
# 417 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_any_and_but(const struct cpumask *mask1,
     const struct cpumask *mask2,
     unsigned int cpu)
{
 unsigned int i;

 cpumask_check(cpu);
 i = cpumask_first_and(mask1, mask2);
 if (i != cpu)
  return i;

 return cpumask_next_and(cpu, mask1, mask2);
}
# 439 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_nth(unsigned int cpu, const struct cpumask *srcp)
{
 return find_nth_bit(((srcp)->bits), ((unsigned int)1), cpumask_check(cpu));
}
# 452 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_nth_and(unsigned int cpu, const struct cpumask *srcp1,
       const struct cpumask *srcp2)
{
 return find_nth_and_bit(((srcp1)->bits), ((srcp2)->bits),
    ((unsigned int)1), cpumask_check(cpu));
}
# 468 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned int cpumask_nth_andnot(unsigned int cpu, const struct cpumask *srcp1,
       const struct cpumask *srcp2)
{
 return find_nth_andnot_bit(((srcp1)->bits), ((srcp2)->bits),
    ((unsigned int)1), cpumask_check(cpu));
}
# 485 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned int cpumask_nth_and_andnot(unsigned int cpu, const struct cpumask *srcp1,
       const struct cpumask *srcp2,
       const struct cpumask *srcp3)
{
 return find_nth_and_andnot_bit(((srcp1)->bits),
     ((srcp2)->bits),
     ((srcp3)->bits),
     ((unsigned int)1), cpumask_check(cpu));
}
# 511 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
{
 set_bit(cpumask_check(cpu), ((dstp)->bits));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
{
 ((__builtin_constant_p(cpumask_check(cpu)) && __builtin_constant_p((uintptr_t)(((dstp)->bits)) != (uintptr_t)((void *)0)) && (uintptr_t)(((dstp)->bits)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(((dstp)->bits)))) ? generic___set_bit(cpumask_check(cpu), ((dstp)->bits)) : arch___set_bit(cpumask_check(cpu), ((dstp)->bits)));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void cpumask_clear_cpu(int cpu, struct cpumask *dstp)
{
 clear_bit(cpumask_check(cpu), ((dstp)->bits));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __cpumask_clear_cpu(int cpu, struct cpumask *dstp)
{
 ((__builtin_constant_p(cpumask_check(cpu)) && __builtin_constant_p((uintptr_t)(((dstp)->bits)) != (uintptr_t)((void *)0)) && (uintptr_t)(((dstp)->bits)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(((dstp)->bits)))) ? generic___clear_bit(cpumask_check(cpu), ((dstp)->bits)) : arch___clear_bit(cpumask_check(cpu), ((dstp)->bits)));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value)
{
 ((value) ? set_bit((cpumask_check(cpu)), (((dstp)->bits))) : clear_bit((cpumask_check(cpu)), (((dstp)->bits))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value)
{
 ((value) ? ((__builtin_constant_p((cpumask_check(cpu))) && __builtin_constant_p((uintptr_t)((((dstp)->bits))) != (uintptr_t)((void *)0)) && (uintptr_t)((((dstp)->bits))) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((((dstp)->bits))))) ? generic___set_bit((cpumask_check(cpu)), (((dstp)->bits))) : arch___set_bit((cpumask_check(cpu)), (((dstp)->bits)))) : ((__builtin_constant_p((cpumask_check(cpu))) && __builtin_constant_p((uintptr_t)((((dstp)->bits))) != (uintptr_t)((void *)0)) && (uintptr_t)((((dstp)->bits))) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((((dstp)->bits))))) ? generic___clear_bit((cpumask_check(cpu)), (((dstp)->bits))) : arch___clear_bit((cpumask_check(cpu)), (((dstp)->bits)))));
}
# 560 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
{
 return ((__builtin_constant_p(cpumask_check(cpu)) && __builtin_constant_p((uintptr_t)((((cpumask))->bits)) != (uintptr_t)((void *)0)) && (uintptr_t)((((cpumask))->bits)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)((((cpumask))->bits)))) ? const_test_bit(cpumask_check(cpu), (((cpumask))->bits)) : arch_test_bit(cpumask_check(cpu), (((cpumask))->bits)));
}
# 574 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool cpumask_test_and_set_cpu(int cpu, struct cpumask *cpumask)
{
 return test_and_set_bit(cpumask_check(cpu), ((cpumask)->bits));
}
# 588 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool cpumask_test_and_clear_cpu(int cpu, struct cpumask *cpumask)
{
 return test_and_clear_bit(cpumask_check(cpu), ((cpumask)->bits));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_setall(struct cpumask *dstp)
{
 if ((__builtin_constant_p(((unsigned int)1)) && (((unsigned int)1)) <= 32 && (((unsigned int)1)) > 0)) {
  ((dstp)->bits)[0] = (~0UL >> (-(((unsigned int)1)) & (32 - 1)));
  return;
 }
 bitmap_fill(((dstp)->bits), ((unsigned int)1));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_clear(struct cpumask *dstp)
{
 bitmap_zero(((dstp)->bits), ((unsigned int)1));
}
# 623 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_and(struct cpumask *dstp,
          const struct cpumask *src1p,
          const struct cpumask *src2p)
{
 return bitmap_and(((dstp)->bits), ((src1p)->bits),
           ((src2p)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
         const struct cpumask *src2p)
{
 bitmap_or(((dstp)->bits), ((src1p)->bits),
          ((src2p)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_xor(struct cpumask *dstp,
          const struct cpumask *src1p,
          const struct cpumask *src2p)
{
 bitmap_xor(((dstp)->bits), ((src1p)->bits),
           ((src2p)->bits), ((unsigned int)1));
}
# 666 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_andnot(struct cpumask *dstp,
      const struct cpumask *src1p,
      const struct cpumask *src2p)
{
 return bitmap_andnot(((dstp)->bits), ((src1p)->bits),
       ((src2p)->bits), ((unsigned int)1));
}
# 681 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_equal(const struct cpumask *src1p,
    const struct cpumask *src2p)
{
 return bitmap_equal(((src1p)->bits), ((src2p)->bits),
       ((unsigned int)1));
}
# 697 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_or_equal(const struct cpumask *src1p,
        const struct cpumask *src2p,
        const struct cpumask *src3p)
{
 return bitmap_or_equal(((src1p)->bits), ((src2p)->bits),
          ((src3p)->bits), ((unsigned int)1));
}
# 713 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_intersects(const struct cpumask *src1p,
         const struct cpumask *src2p)
{
 return bitmap_intersects(((src1p)->bits), ((src2p)->bits),
            ((unsigned int)1));
}
# 727 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_subset(const struct cpumask *src1p,
     const struct cpumask *src2p)
{
 return bitmap_subset(((src1p)->bits), ((src2p)->bits),
        ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_empty(const struct cpumask *srcp)
{
 return bitmap_empty(((srcp)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_full(const struct cpumask *srcp)
{
 return bitmap_full(((srcp)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_weight(const struct cpumask *srcp)
{
 return bitmap_weight(((srcp)->bits), ((unsigned int)1));
}
# 774 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_weight_and(const struct cpumask *srcp1,
      const struct cpumask *srcp2)
{
 return bitmap_weight_and(((srcp1)->bits), ((srcp2)->bits), ((unsigned int)1));
}
# 787 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_weight_andnot(const struct cpumask *srcp1,
      const struct cpumask *srcp2)
{
 return bitmap_weight_andnot(((srcp1)->bits), ((srcp2)->bits), ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_shift_right(struct cpumask *dstp,
           const struct cpumask *srcp, int n)
{
 bitmap_shift_right(((dstp)->bits), ((srcp)->bits), n,
            ((unsigned int)1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_shift_left(struct cpumask *dstp,
          const struct cpumask *srcp, int n)
{
 bitmap_shift_left(((dstp)->bits), ((srcp)->bits), n,
           ((unsigned int)1));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpumask_copy(struct cpumask *dstp,
    const struct cpumask *srcp)
{
 bitmap_copy(((dstp)->bits), ((srcp)->bits), ((unsigned int)1));
}
# 861 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpumask_parse_user(const char *buf, int len,
         struct cpumask *dstp)
{
 return bitmap_parse_user(buf, len, ((dstp)->bits), ((unsigned int)1));
}
# 875 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpumask_parselist_user(const char *buf, int len,
         struct cpumask *dstp)
{
 return bitmap_parselist_user(buf, len, ((dstp)->bits),
         ((unsigned int)1));
}
# 889 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpumask_parse(const char *buf, struct cpumask *dstp)
{
 return bitmap_parse(buf, (~0U), ((dstp)->bits), ((unsigned int)1));
}
# 901 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpulist_parse(const char *buf, struct cpumask *dstp)
{
 return bitmap_parselist(buf, ((dstp)->bits), ((unsigned int)1));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int cpumask_size(void)
{
 return (((((((unsigned int)1))) + ((__typeof__((((unsigned int)1))))((32)) - 1)) & ~((__typeof__((((unsigned int)1))))((32)) - 1)) / 8);
}
# 967 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags,
       int node)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
 cpumask_clear(*mask);
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags,
       int node)
{
 cpumask_clear(*mask);
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void alloc_bootmem_cpumask_var(cpumask_var_t *mask)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void free_cpumask_var(cpumask_var_t mask)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void free_bootmem_cpumask_var(cpumask_var_t mask)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpumask_available(cpumask_var_t mask)
{
 return true;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_free_cpumask_var(void *p) { struct cpumask * _T = *(struct cpumask * *)p; if (_T) free_cpumask_var(_T); };



extern const unsigned long cpu_all_bits[(((1) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
# 1032 "../include/linux/cpumask.h"
void init_cpu_present(const struct cpumask *src);
void init_cpu_possible(const struct cpumask *src);
void init_cpu_online(const struct cpumask *src);
# 1045 "../include/linux/cpumask.h"
void set_cpu_online(unsigned int cpu, bool online);
# 1061 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __check_is_bitmap(const unsigned long *bitmap)
{
 return 1;
}
# 1073 "../include/linux/cpumask.h"
extern const unsigned long
 cpu_bit_bitmap[32 +1][(((1) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cpumask *get_cpu_mask(unsigned int cpu)
{
 const unsigned long *p = cpu_bit_bitmap[1 + cpu % 32];
 p -= cpu / 32;
 return ((struct cpumask *)(1 ? (p) : (void *)sizeof(__check_is_bitmap(p))));
}
# 1141 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_online(unsigned int cpu)
{
 return cpu == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_possible(unsigned int cpu)
{
 return cpu == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_enabled(unsigned int cpu)
{
 return cpu == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_present(unsigned int cpu)
{
 return cpu == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_active(unsigned int cpu)
{
 return cpu == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_dying(unsigned int cpu)
{
 return false;
}
# 1200 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ssize_t
cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
{
 return bitmap_print_to_pagebuf(list, buf, ((mask)->bits),
          ((unsigned int)1));
}
# 1223 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ssize_t
cpumap_print_bitmask_to_buf(char *buf, const struct cpumask *mask,
  loff_t off, size_t count)
{
 return bitmap_print_bitmask_to_buf(buf, ((mask)->bits),
       ((unsigned int)1), off, count) - 1;
}
# 1245 "../include/linux/cpumask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ssize_t
cpumap_print_list_to_buf(char *buf, const struct cpumask *mask,
  loff_t off, size_t count)
{
 return bitmap_print_list_to_buf(buf, ((mask)->bits),
       ((unsigned int)1), off, count) - 1;
}
# 14 "../include/linux/smp.h" 2

# 1 "../include/linux/smp_types.h" 1




# 1 "../include/linux/llist.h" 1
# 56 "../include/linux/llist.h"
struct llist_head {
 struct llist_node *first;
};

struct llist_node {
 struct llist_node *next;
};
# 71 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_llist_head(struct llist_head *list)
{
 list->first = ((void *)0);
}
# 84 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_llist_node(struct llist_node *node)
{
 node->next = node;
}
# 98 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool llist_on_list(const struct llist_node *node)
{
 return node->next != node;
}
# 216 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool llist_empty(const struct llist_head *head)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_33(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(head->first) == sizeof(char) || sizeof(head->first) == sizeof(short) || sizeof(head->first) == sizeof(int) || sizeof(head->first) == sizeof(long)) || sizeof(head->first) == sizeof(long long))) __compiletime_assert_33(); } while (0); (*(const volatile typeof( _Generic((head->first), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (head->first))) *)&(head->first)); }) == ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct llist_node *llist_next(struct llist_node *node)
{
 return node->next;
}

extern bool llist_add_batch(struct llist_node *new_first,
       struct llist_node *new_last,
       struct llist_head *head);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __llist_add_batch(struct llist_node *new_first,
         struct llist_node *new_last,
         struct llist_head *head)
{
 new_last->next = head->first;
 head->first = new_first;
 return new_last->next == ((void *)0);
}
# 246 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool llist_add(struct llist_node *new, struct llist_head *head)
{
 return llist_add_batch(new, new, head);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __llist_add(struct llist_node *new, struct llist_head *head)
{
 return __llist_add_batch(new, new, head);
}
# 264 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct llist_node *llist_del_all(struct llist_head *head)
{
 return ({ typeof(&head->first) __ai_ptr = (&head->first); do { } while (0); instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); ((__typeof__(*(__ai_ptr)))__arch_xchg((unsigned long)(((void *)0)), (__ai_ptr), sizeof(*(__ai_ptr)))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct llist_node *__llist_del_all(struct llist_head *head)
{
 struct llist_node *first = head->first;

 head->first = ((void *)0);
 return first;
}

extern struct llist_node *llist_del_first(struct llist_head *head);
# 286 "../include/linux/llist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct llist_node *llist_del_first_init(struct llist_head *head)
{
 struct llist_node *n = llist_del_first(head);

 if (n)
  init_llist_node(n);
 return n;
}

extern bool llist_del_first_this(struct llist_head *head,
     struct llist_node *this);

struct llist_node *llist_reverse_order(struct llist_node *head);
# 6 "../include/linux/smp_types.h" 2

enum {
 CSD_FLAG_LOCK = 0x01,

 IRQ_WORK_PENDING = 0x01,
 IRQ_WORK_BUSY = 0x02,
 IRQ_WORK_LAZY = 0x04,
 IRQ_WORK_HARD_IRQ = 0x08,

 IRQ_WORK_CLAIMED = (IRQ_WORK_PENDING | IRQ_WORK_BUSY),

 CSD_TYPE_ASYNC = 0x00,
 CSD_TYPE_SYNC = 0x10,
 CSD_TYPE_IRQ_WORK = 0x20,
 CSD_TYPE_TTWU = 0x30,

 CSD_FLAG_TYPE_MASK = 0xF0,
};
# 58 "../include/linux/smp_types.h"
struct __call_single_node {
 struct llist_node llist;
 union {
  unsigned int u_flags;
  atomic_t a_flags;
 };



};
# 16 "../include/linux/smp.h" 2

typedef void (*smp_call_func_t)(void *info);
typedef bool (*smp_cond_func_t)(int cpu, void *info);




struct __call_single_data {
 struct __call_single_node node;
 smp_call_func_t func;
 void *info;
};





typedef struct __call_single_data call_single_data_t
 __attribute__((__aligned__(sizeof(struct __call_single_data))));
# 45 "../include/linux/smp.h"
extern void __smp_call_single_queue(int cpu, struct llist_node *node);


extern unsigned int total_cpus;

int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
        int wait);

void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
      void *info, bool wait, const struct cpumask *mask);

int smp_call_function_single_async(int cpu, call_single_data_t *csd);





void __attribute__((__noreturn__)) panic_smp_self_stop(void);
void __attribute__((__noreturn__)) nmi_panic_self_stop(struct pt_regs *regs);
void crash_smp_send_stop(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void on_each_cpu(smp_call_func_t func, void *info, int wait)
{
 on_each_cpu_cond_mask(((void *)0), func, info, wait, ((const struct cpumask *)&__cpu_online_mask));
}
# 90 "../include/linux/smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void on_each_cpu_mask(const struct cpumask *mask,
        smp_call_func_t func, void *info, bool wait)
{
 on_each_cpu_cond_mask(((void *)0), func, info, wait, mask);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void on_each_cpu_cond(smp_cond_func_t cond_func,
        smp_call_func_t func, void *info, bool wait)
{
 on_each_cpu_cond_mask(cond_func, func, info, wait, ((const struct cpumask *)&__cpu_online_mask));
}





void smp_prepare_boot_cpu(void);
# 193 "../include/linux/smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_send_stop(void) { }





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void up_smp_call_function(smp_call_func_t func, void *info)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_send_reschedule(int cpu) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void call_function_init(void) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
        void *info, int wait)
{
 return smp_call_function_single(0, func, info, wait);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kick_all_cpus_sync(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wake_up_all_idle_cpus(void) { }







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_init(void) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_boot_cpu_id(void)
{
 return 0;
}
# 282 "../include/linux/smp.h"
extern void arch_disable_smp_support(void);

extern void arch_thaw_secondary_cpus_begin(void);
extern void arch_thaw_secondary_cpus_end(void);

void smp_setup_processor_id(void);

int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par,
      bool phys);


int smpcfd_prepare_cpu(unsigned int cpu);
int smpcfd_dead_cpu(unsigned int cpu);
int smpcfd_dying_cpu(unsigned int cpu);
# 15 "../include/linux/lockdep.h" 2
# 1 "./arch/hexagon/include/generated/asm/percpu.h" 1
# 16 "../include/linux/lockdep.h" 2

struct task_struct;





# 1 "../include/linux/debug_locks.h" 1





# 1 "../include/linux/cache.h" 1





# 1 "../arch/hexagon/include/asm/cache.h" 1
# 7 "../include/linux/cache.h" 2
# 7 "../include/linux/debug_locks.h" 2

struct task_struct;

extern int debug_locks ;
extern int debug_locks_silent ;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int __debug_locks_off(void)
{
 return ({ typeof(&debug_locks) __ai_ptr = (&debug_locks); do { } while (0); instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); ((__typeof__(*(__ai_ptr)))__arch_xchg((unsigned long)(0), (__ai_ptr), sizeof(*(__ai_ptr)))); });
}




extern int debug_locks_off(void);
# 51 "../include/linux/debug_locks.h"
extern void debug_show_all_locks(void);
extern void debug_show_held_locks(struct task_struct *task);
extern void debug_check_no_locks_freed(const void *from, unsigned long len);
extern void debug_check_no_locks_held(void);
# 24 "../include/linux/lockdep.h" 2
# 1 "../include/linux/stacktrace.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/errno.h" 1
# 7 "../include/linux/stacktrace.h" 2

struct task_struct;
struct pt_regs;
# 66 "../include/linux/stacktrace.h"
void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
         int spaces);
int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
   unsigned int nr_entries, int spaces);
unsigned int stack_trace_save(unsigned long *store, unsigned int size,
         unsigned int skipnr);
unsigned int stack_trace_save_tsk(struct task_struct *task,
      unsigned long *store, unsigned int size,
      unsigned int skipnr);
unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
       unsigned int size, unsigned int skipnr);
unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);



struct stack_trace {
 unsigned int nr_entries, max_entries;
 unsigned long *entries;
 unsigned int skip;
};

extern void save_stack_trace(struct stack_trace *trace);
extern void save_stack_trace_regs(struct pt_regs *regs,
      struct stack_trace *trace);
extern void save_stack_trace_tsk(struct task_struct *tsk,
    struct stack_trace *trace);
extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
      struct stack_trace *trace);
extern void save_stack_trace_user(struct stack_trace *trace);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int stack_trace_save_tsk_reliable(struct task_struct *tsk,
      unsigned long *store,
      unsigned int size)
{
 return -38;
}
# 25 "../include/linux/lockdep.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lockdep_copy_map(struct lockdep_map *to,
        struct lockdep_map *from)
{
 int i;

 *to = *from;
# 40 "../include/linux/lockdep.h"
 for (i = 0; i < 2; i++)
  to->class_cache[i] = ((void *)0);
}





struct lock_list {
 struct list_head entry;
 struct lock_class *class;
 struct lock_class *links_to;
 const struct lock_trace *trace;
 u16 distance;

 u8 dep;

 u8 only_xr;





 struct lock_list *parent;
};
# 75 "../include/linux/lockdep.h"
struct lock_chain {

 unsigned int irq_context : 2,
     depth : 6,
     base : 24;

 struct hlist_node entry;
 u64 chain_key;
};




extern void lockdep_init(void);
extern void lockdep_reset(void);
extern void lockdep_reset_lock(struct lockdep_map *lock);
extern void lockdep_free_key_range(void *start, unsigned long size);
extern void lockdep_sys_exit(void);
extern void lockdep_set_selftest_task(struct task_struct *task);

extern void lockdep_init_task(struct task_struct *task);
# 119 "../include/linux/lockdep.h"
extern void lockdep_register_key(struct lock_class_key *key);
extern void lockdep_unregister_key(struct lock_class_key *key);







extern void lockdep_init_map_type(struct lockdep_map *lock, const char *name,
 struct lock_class_key *key, int subclass, u8 inner, u8 outer, u8 lock_type);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
         struct lock_class_key *key, int subclass, u8 inner, u8 outer)
{
 lockdep_init_map_type(lock, name, key, subclass, inner, outer, LD_LOCK_NORMAL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
lockdep_init_map_wait(struct lockdep_map *lock, const char *name,
        struct lock_class_key *key, int subclass, u8 inner)
{
 lockdep_init_map_waits(lock, name, key, subclass, inner, LD_WAIT_INV);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lockdep_init_map(struct lockdep_map *lock, const char *name,
        struct lock_class_key *key, int subclass)
{
 lockdep_init_map_wait(lock, name, key, subclass, LD_WAIT_INV);
}
# 207 "../include/linux/lockdep.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int lockdep_match_key(struct lockdep_map *lock,
        struct lock_class_key *key)
{
 return lock->key == key;
}
# 227 "../include/linux/lockdep.h"
extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
    int trylock, int read, int check,
    struct lockdep_map *nest_lock, unsigned long ip);

extern void lock_release(struct lockdep_map *lock, unsigned long ip);

extern void lock_sync(struct lockdep_map *lock, unsigned int subclass,
        int read, int check, struct lockdep_map *nest_lock,
        unsigned long ip);
# 245 "../include/linux/lockdep.h"
extern int lock_is_held_type(const struct lockdep_map *lock, int read);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int lock_is_held(const struct lockdep_map *lock)
{
 return lock_is_held_type(lock, -1);
}




extern void lock_set_class(struct lockdep_map *lock, const char *name,
      struct lock_class_key *key, unsigned int subclass,
      unsigned long ip);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lock_set_subclass(struct lockdep_map *lock,
  unsigned int subclass, unsigned long ip)
{
 lock_set_class(lock, lock->name, lock->key, subclass, ip);
}

extern void lock_downgrade(struct lockdep_map *lock, unsigned long ip);



extern struct pin_cookie lock_pin_lock(struct lockdep_map *lock);
extern void lock_repin_lock(struct lockdep_map *lock, struct pin_cookie);
extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
# 413 "../include/linux/lockdep.h"
void lockdep_set_lock_cmp_fn(struct lockdep_map *, lock_cmp_fn, lock_print_fn);






enum xhlock_context_t {
 XHLOCK_HARD,
 XHLOCK_SOFT,
 XHLOCK_CTX_NR,
};
# 433 "../include/linux/lockdep.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lockdep_invariant_state(bool force) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lockdep_free_task(struct task_struct *task) {}



extern void lock_contended(struct lockdep_map *lock, unsigned long ip);
extern void lock_acquired(struct lockdep_map *lock, unsigned long ip);
# 476 "../include/linux/lockdep.h"
extern void print_irqtrace_events(struct task_struct *curr);
# 491 "../include/linux/lockdep.h"
extern bool read_lock_is_recursive(void);
# 569 "../include/linux/lockdep.h"
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_hardirqs_enabled; extern __attribute__((section(".data" ""))) __typeof__(int) hardirqs_enabled;
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_hardirq_context; extern __attribute__((section(".data" ""))) __typeof__(int) hardirq_context;
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_lockdep_recursion; extern __attribute__((section(".data" ""))) __typeof__(unsigned int) lockdep_recursion;
# 622 "../include/linux/lockdep.h"
extern void lockdep_assert_in_softirq_func(void);
# 656 "../include/linux/lockdep.h"
void lockdep_rcu_suspicious(const char *file, const int line, const char *s);
# 64 "../include/linux/spinlock.h" 2

# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 66 "../include/linux/spinlock.h" 2
# 1 "./arch/hexagon/include/generated/asm/mmiowb.h" 1
# 1 "../include/asm-generic/mmiowb.h" 1
# 2 "./arch/hexagon/include/generated/asm/mmiowb.h" 2
# 67 "../include/linux/spinlock.h" 2
# 89 "../include/linux/spinlock.h"
# 1 "../include/linux/spinlock_types.h" 1
# 17 "../include/linux/spinlock_types.h"
typedef struct spinlock {
 union {
  struct raw_spinlock rlock;



  struct {
   u8 __padding[(__builtin_offsetof(struct raw_spinlock, dep_map))];
   struct lockdep_map dep_map;
  };

 };
} spinlock_t;
# 74 "../include/linux/spinlock_types.h"
# 1 "../include/linux/rwlock_types.h" 1
# 25 "../include/linux/rwlock_types.h"
typedef struct {
 arch_rwlock_t raw_lock;

 unsigned int magic, owner_cpu;
 void *owner;


 struct lockdep_map dep_map;

} rwlock_t;
# 75 "../include/linux/spinlock_types.h" 2
# 90 "../include/linux/spinlock.h" 2







# 1 "../include/linux/spinlock_up.h" 1








# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 10 "../include/linux/spinlock_up.h" 2
# 29 "../include/linux/spinlock_up.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_spin_lock(arch_spinlock_t *lock)
{
 lock->slock = 0;
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_spin_trylock(arch_spinlock_t *lock)
{
 char oldval = lock->slock;

 lock->slock = 0;
 __asm__ __volatile__("": : :"memory");

 return oldval > 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_spin_unlock(arch_spinlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 lock->slock = 1;
}
# 98 "../include/linux/spinlock.h" 2



  extern void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
       struct lock_class_key *key, short inner);
# 180 "../include/linux/spinlock.h"
 extern void do_raw_spin_lock(raw_spinlock_t *lock);
 extern int do_raw_spin_trylock(raw_spinlock_t *lock);
 extern void do_raw_spin_unlock(raw_spinlock_t *lock);
# 305 "../include/linux/spinlock.h"
# 1 "../include/linux/rwlock.h" 1
# 18 "../include/linux/rwlock.h"
  extern void __rwlock_init(rwlock_t *lock, const char *name,
       struct lock_class_key *key);
# 32 "../include/linux/rwlock.h"
 extern void do_raw_read_lock(rwlock_t *lock);
 extern int do_raw_read_trylock(rwlock_t *lock);
 extern void do_raw_read_unlock(rwlock_t *lock);
 extern void do_raw_write_lock(rwlock_t *lock);
 extern int do_raw_write_trylock(rwlock_t *lock);
 extern void do_raw_write_unlock(rwlock_t *lock);
# 306 "../include/linux/spinlock.h" 2






# 1 "../include/linux/spinlock_api_smp.h" 1
# 18 "../include/linux/spinlock_api_smp.h"
int in_lock_functions(unsigned long addr);



void __attribute__((__section__(".spinlock.text"))) _raw_spin_lock(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass);
void __attribute__((__section__(".spinlock.text")))
_raw_spin_lock_nest_lock(raw_spinlock_t *lock, struct lockdep_map *map);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_lock_bh(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_lock_irq(raw_spinlock_t *lock);

unsigned long __attribute__((__section__(".spinlock.text"))) _raw_spin_lock_irqsave(raw_spinlock_t *lock);
unsigned long __attribute__((__section__(".spinlock.text")))
_raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass);
int __attribute__((__section__(".spinlock.text"))) _raw_spin_trylock(raw_spinlock_t *lock);
int __attribute__((__section__(".spinlock.text"))) _raw_spin_trylock_bh(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_unlock(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_unlock_bh(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_spin_unlock_irq(raw_spinlock_t *lock);
void __attribute__((__section__(".spinlock.text")))
_raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags);
# 86 "../include/linux/spinlock_api_smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __raw_spin_trylock(raw_spinlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 if (do_raw_spin_trylock(lock)) {
  lock_acquire(&lock->dep_map, 0, 1, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
  return 1;
 }
 __asm__ __volatile__("": : :"memory");
 return 0;
}
# 104 "../include/linux/spinlock_api_smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_spin_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_spin_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_lock_irq(raw_spinlock_t *lock)
{
 do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_spin_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_spin_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_lock_bh(raw_spinlock_t *lock)
{
 __local_bh_disable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_spin_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_spin_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_lock(raw_spinlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_spin_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_spin_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_unlock(raw_spinlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_spin_unlock(lock);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock,
         unsigned long flags)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_spin_unlock(lock);
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_unlock_irq(raw_spinlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_spin_unlock(lock);
 do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_spin_unlock_bh(raw_spinlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_spin_unlock(lock);
 __local_bh_enable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __raw_spin_trylock_bh(raw_spinlock_t *lock)
{
 __local_bh_disable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
 if (do_raw_spin_trylock(lock)) {
  lock_acquire(&lock->dep_map, 0, 1, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
  return 1;
 }
 __local_bh_enable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
 return 0;
}



# 1 "../include/linux/rwlock_api_smp.h" 1
# 18 "../include/linux/rwlock_api_smp.h"
void __attribute__((__section__(".spinlock.text"))) _raw_read_lock(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_lock(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_lock_nested(rwlock_t *lock, int subclass);
void __attribute__((__section__(".spinlock.text"))) _raw_read_lock_bh(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_lock_bh(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_read_lock_irq(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_lock_irq(rwlock_t *lock);
unsigned long __attribute__((__section__(".spinlock.text"))) _raw_read_lock_irqsave(rwlock_t *lock);
unsigned long __attribute__((__section__(".spinlock.text"))) _raw_write_lock_irqsave(rwlock_t *lock);
int __attribute__((__section__(".spinlock.text"))) _raw_read_trylock(rwlock_t *lock);
int __attribute__((__section__(".spinlock.text"))) _raw_write_trylock(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_read_unlock(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_unlock(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_read_unlock_bh(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_unlock_bh(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_read_unlock_irq(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text"))) _raw_write_unlock_irq(rwlock_t *lock);
void __attribute__((__section__(".spinlock.text")))
_raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
void __attribute__((__section__(".spinlock.text")))
_raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
# 118 "../include/linux/rwlock_api_smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __raw_read_trylock(rwlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 if (do_raw_read_trylock(lock)) {
  do { if (read_lock_is_recursive()) lock_acquire(&lock->dep_map, 0, 1, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); else lock_acquire(&lock->dep_map, 0, 1, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); } while (0);
  return 1;
 }
 __asm__ __volatile__("": : :"memory");
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __raw_write_trylock(rwlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 if (do_raw_write_trylock(lock)) {
  lock_acquire(&lock->dep_map, 0, 1, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
  return 1;
 }
 __asm__ __volatile__("": : :"memory");
 return 0;
}
# 147 "../include/linux/rwlock_api_smp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_lock(rwlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 do { if (read_lock_is_recursive()) lock_acquire(&lock->dep_map, 0, 0, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); else lock_acquire(&lock->dep_map, 0, 0, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); } while (0);
 do { if (!do_raw_read_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_read_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __raw_read_lock_irqsave(rwlock_t *lock)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 do { if (read_lock_is_recursive()) lock_acquire(&lock->dep_map, 0, 0, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); else lock_acquire(&lock->dep_map, 0, 0, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); } while (0);
 do { if (!do_raw_read_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_read_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_lock_irq(rwlock_t *lock)
{
 do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 do { if (read_lock_is_recursive()) lock_acquire(&lock->dep_map, 0, 0, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); else lock_acquire(&lock->dep_map, 0, 0, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); } while (0);
 do { if (!do_raw_read_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_read_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_lock_bh(rwlock_t *lock)
{
 __local_bh_disable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
 do { if (read_lock_is_recursive()) lock_acquire(&lock->dep_map, 0, 0, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); else lock_acquire(&lock->dep_map, 0, 0, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0)); } while (0);
 do { if (!do_raw_read_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_read_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_write_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_write_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_lock_irq(rwlock_t *lock)
{
 do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0);
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_write_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_write_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_lock_bh(rwlock_t *lock)
{
 __local_bh_disable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_write_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_write_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_lock(rwlock_t *lock)
{
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_write_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_write_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_lock_nested(rwlock_t *lock, int subclass)
{
 __asm__ __volatile__("": : :"memory");
 lock_acquire(&lock->dep_map, subclass, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do { if (!do_raw_write_trylock(lock)) { lock_contended(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); do_raw_write_lock(lock); } lock_acquired(&(lock)->dep_map, (unsigned long)__builtin_return_address(0)); } while (0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_unlock(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_write_unlock(lock);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_unlock(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_read_unlock(lock);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_read_unlock(lock);
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_unlock_irq(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_read_unlock(lock);
 do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_read_unlock_bh(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_read_unlock(lock);
 __local_bh_enable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_unlock_irqrestore(rwlock_t *lock,
          unsigned long flags)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_write_unlock(lock);
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_unlock_irq(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_write_unlock(lock);
 do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __raw_write_unlock_bh(rwlock_t *lock)
{
 lock_release(&lock->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_write_unlock(lock);
 __local_bh_enable_ip((unsigned long)__builtin_return_address(0), ((2 * (1UL << (0 + 8))) + 0));
}
# 184 "../include/linux/spinlock_api_smp.h" 2
# 313 "../include/linux/spinlock.h" 2
# 324 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) raw_spinlock_t *spinlock_check(spinlock_t *lock)
{
 return &lock->rlock;
}
# 349 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_lock(spinlock_t *lock)
{
 _raw_spin_lock(&lock->rlock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_lock_bh(spinlock_t *lock)
{
 _raw_spin_lock_bh(&lock->rlock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int spin_trylock(spinlock_t *lock)
{
 return (_raw_spin_trylock(&lock->rlock));
}
# 374 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_lock_irq(spinlock_t *lock)
{
 _raw_spin_lock_irq(&lock->rlock);
}
# 389 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_unlock(spinlock_t *lock)
{
 _raw_spin_unlock(&lock->rlock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_unlock_bh(spinlock_t *lock)
{
 _raw_spin_unlock_bh(&lock->rlock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_unlock_irq(spinlock_t *lock)
{
 _raw_spin_unlock_irq(&lock->rlock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
{
 do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _raw_spin_unlock_irqrestore(&lock->rlock, flags); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int spin_trylock_bh(spinlock_t *lock)
{
 return (_raw_spin_trylock_bh(&lock->rlock));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int spin_trylock_irq(spinlock_t *lock)
{
 return ({ do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0); (_raw_spin_trylock(&lock->rlock)) ? 1 : ({ do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0); 0; }); });
}
# 442 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int spin_is_locked(spinlock_t *lock)
{
 return ((&(&lock->rlock)->raw_lock)->slock == 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int spin_is_contended(spinlock_t *lock)
{
 return (((void)(&(&lock->rlock)->raw_lock), 0));
}
# 463 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int spin_needbreak(spinlock_t *lock)
{
 if (!preempt_model_preemptible())
  return 0;

 return spin_is_contended(lock);
}
# 479 "../include/linux/spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rwlock_needbreak(rwlock_t *lock)
{
 if (!preempt_model_preemptible())
  return 0;

 return ((void)(lock), 0);
}
# 500 "../include/linux/spinlock.h"
extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);



extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
     unsigned long *flags);



extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock);



extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
     unsigned long *flags);



int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask,
        size_t max_size, unsigned int cpu_mult,
        gfp_t gfp, const char *name,
        struct lock_class_key *key);
# 533 "../include/linux/spinlock.h"
void free_bucket_spinlocks(spinlock_t *locks);

typedef struct { raw_spinlock_t *lock; ; } class_raw_spinlock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_destructor(class_raw_spinlock_t *_T) { if (_T->lock) { _raw_spin_unlock(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_raw_spinlock_lock_ptr(class_raw_spinlock_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_t class_raw_spinlock_constructor(raw_spinlock_t *l) { class_raw_spinlock_t _t = { .lock = l }, *_T = &_t; _raw_spin_lock(_T->lock); return _t; }



typedef class_raw_spinlock_t class_raw_spinlock_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_try_destructor(class_raw_spinlock_t *p){ class_raw_spinlock_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_t class_raw_spinlock_try_constructor(typeof(((class_raw_spinlock_t*)0)->lock) l) { class_raw_spinlock_t t = ({ class_raw_spinlock_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !((_raw_spin_trylock(_T->lock)))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_raw_spinlock_try_lock_ptr(class_raw_spinlock_t *_T) { return class_raw_spinlock_lock_ptr(_T); }

typedef struct { raw_spinlock_t *lock; ; } class_raw_spinlock_nested_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_nested_destructor(class_raw_spinlock_nested_t *_T) { if (_T->lock) { _raw_spin_unlock(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_raw_spinlock_nested_lock_ptr(class_raw_spinlock_nested_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_nested_t class_raw_spinlock_nested_constructor(raw_spinlock_t *l) { class_raw_spinlock_nested_t _t = { .lock = l }, *_T = &_t; _raw_spin_lock_nested(_T->lock, 1); return _t; }



typedef struct { raw_spinlock_t *lock; ; } class_raw_spinlock_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_irq_destructor(class_raw_spinlock_irq_t *_T) { if (_T->lock) { _raw_spin_unlock_irq(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_raw_spinlock_irq_lock_ptr(class_raw_spinlock_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_irq_t class_raw_spinlock_irq_constructor(raw_spinlock_t *l) { class_raw_spinlock_irq_t _t = { .lock = l }, *_T = &_t; _raw_spin_lock_irq(_T->lock); return _t; }



typedef class_raw_spinlock_irq_t class_raw_spinlock_irq_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_irq_try_destructor(class_raw_spinlock_irq_t *p){ class_raw_spinlock_irq_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_irq_t class_raw_spinlock_irq_try_constructor(typeof(((class_raw_spinlock_irq_t*)0)->lock) l) { class_raw_spinlock_irq_t t = ({ class_raw_spinlock_irq_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !(({ do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0); (_raw_spin_trylock(_T->lock)) ? 1 : ({ do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0); 0; }); }))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_raw_spinlock_irq_try_lock_ptr(class_raw_spinlock_irq_t *_T) { return class_raw_spinlock_irq_lock_ptr(_T); }

typedef struct { raw_spinlock_t *lock; unsigned long flags; } class_raw_spinlock_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_irqsave_destructor(class_raw_spinlock_irqsave_t *_T) { if (_T->lock) { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _raw_spin_unlock_irqrestore(_T->lock, _T->flags); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_raw_spinlock_irqsave_lock_ptr(class_raw_spinlock_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_irqsave_t class_raw_spinlock_irqsave_constructor(raw_spinlock_t *l) { class_raw_spinlock_irqsave_t _t = { .lock = l }, *_T = &_t; do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = _raw_spin_lock_irqsave(_T->lock); } while (0); return _t; }




typedef class_raw_spinlock_irqsave_t class_raw_spinlock_irqsave_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_raw_spinlock_irqsave_try_destructor(class_raw_spinlock_irqsave_t *p){ class_raw_spinlock_irqsave_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_raw_spinlock_irqsave_t class_raw_spinlock_irqsave_try_constructor(typeof(((class_raw_spinlock_irqsave_t*)0)->lock) l) { class_raw_spinlock_irqsave_t t = ({ class_raw_spinlock_irqsave_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !(({ do { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_off(); } while (0); (_raw_spin_trylock(_T->lock)) ? 1 : ({ do { if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(_T->flags); } while (0); } while (0); 0; }); }))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_raw_spinlock_irqsave_try_lock_ptr(class_raw_spinlock_irqsave_t *_T) { return class_raw_spinlock_irqsave_lock_ptr(_T); }


typedef struct { spinlock_t *lock; ; } class_spinlock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_destructor(class_spinlock_t *_T) { if (_T->lock) { spin_unlock(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_spinlock_lock_ptr(class_spinlock_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_t class_spinlock_constructor(spinlock_t *l) { class_spinlock_t _t = { .lock = l }, *_T = &_t; spin_lock(_T->lock); return _t; }



typedef class_spinlock_t class_spinlock_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_try_destructor(class_spinlock_t *p){ class_spinlock_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_t class_spinlock_try_constructor(typeof(((class_spinlock_t*)0)->lock) l) { class_spinlock_t t = ({ class_spinlock_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !(spin_trylock(_T->lock))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_spinlock_try_lock_ptr(class_spinlock_t *_T) { return class_spinlock_lock_ptr(_T); }

typedef struct { spinlock_t *lock; ; } class_spinlock_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_irq_destructor(class_spinlock_irq_t *_T) { if (_T->lock) { spin_unlock_irq(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_spinlock_irq_lock_ptr(class_spinlock_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_irq_t class_spinlock_irq_constructor(spinlock_t *l) { class_spinlock_irq_t _t = { .lock = l }, *_T = &_t; spin_lock_irq(_T->lock); return _t; }



typedef class_spinlock_irq_t class_spinlock_irq_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_irq_try_destructor(class_spinlock_irq_t *p){ class_spinlock_irq_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_irq_t class_spinlock_irq_try_constructor(typeof(((class_spinlock_irq_t*)0)->lock) l) { class_spinlock_irq_t t = ({ class_spinlock_irq_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !(spin_trylock_irq(_T->lock))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_spinlock_irq_try_lock_ptr(class_spinlock_irq_t *_T) { return class_spinlock_irq_lock_ptr(_T); }


typedef struct { spinlock_t *lock; unsigned long flags; } class_spinlock_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_irqsave_destructor(class_spinlock_irqsave_t *_T) { if (_T->lock) { spin_unlock_irqrestore(_T->lock, _T->flags); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_spinlock_irqsave_lock_ptr(class_spinlock_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_irqsave_t class_spinlock_irqsave_constructor(spinlock_t *l) { class_spinlock_irqsave_t _t = { .lock = l }, *_T = &_t; do { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = _raw_spin_lock_irqsave(spinlock_check(_T->lock)); } while (0); } while (0); return _t; }




typedef class_spinlock_irqsave_t class_spinlock_irqsave_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_spinlock_irqsave_try_destructor(class_spinlock_irqsave_t *p){ class_spinlock_irqsave_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_spinlock_irqsave_t class_spinlock_irqsave_try_constructor(typeof(((class_spinlock_irqsave_t*)0)->lock) l) { class_spinlock_irqsave_t t = ({ class_spinlock_irqsave_t _t = { .lock = l }, *_T = &_t; if (_T->lock && !(({ ({ do { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_off(); } while (0); (_raw_spin_trylock(spinlock_check(_T->lock))) ? 1 : ({ do { if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(_T->flags); } while (0); } while (0); 0; }); }); }))) _T->lock = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_spinlock_irqsave_try_lock_ptr(class_spinlock_irqsave_t *_T) { return class_spinlock_irqsave_lock_ptr(_T); }


typedef struct { rwlock_t *lock; ; } class_read_lock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_read_lock_destructor(class_read_lock_t *_T) { if (_T->lock) { _raw_read_unlock(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_read_lock_lock_ptr(class_read_lock_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_read_lock_t class_read_lock_constructor(rwlock_t *l) { class_read_lock_t _t = { .lock = l }, *_T = &_t; _raw_read_lock(_T->lock); return _t; }



typedef struct { rwlock_t *lock; ; } class_read_lock_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_read_lock_irq_destructor(class_read_lock_irq_t *_T) { if (_T->lock) { _raw_read_unlock_irq(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_read_lock_irq_lock_ptr(class_read_lock_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_read_lock_irq_t class_read_lock_irq_constructor(rwlock_t *l) { class_read_lock_irq_t _t = { .lock = l }, *_T = &_t; _raw_read_lock_irq(_T->lock); return _t; }



typedef struct { rwlock_t *lock; unsigned long flags; } class_read_lock_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_read_lock_irqsave_destructor(class_read_lock_irqsave_t *_T) { if (_T->lock) { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _raw_read_unlock_irqrestore(_T->lock, _T->flags); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_read_lock_irqsave_lock_ptr(class_read_lock_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_read_lock_irqsave_t class_read_lock_irqsave_constructor(rwlock_t *l) { class_read_lock_irqsave_t _t = { .lock = l }, *_T = &_t; do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = _raw_read_lock_irqsave(_T->lock); } while (0); return _t; }




typedef struct { rwlock_t *lock; ; } class_write_lock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_write_lock_destructor(class_write_lock_t *_T) { if (_T->lock) { _raw_write_unlock(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_write_lock_lock_ptr(class_write_lock_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_write_lock_t class_write_lock_constructor(rwlock_t *l) { class_write_lock_t _t = { .lock = l }, *_T = &_t; _raw_write_lock(_T->lock); return _t; }



typedef struct { rwlock_t *lock; ; } class_write_lock_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_write_lock_irq_destructor(class_write_lock_irq_t *_T) { if (_T->lock) { _raw_write_unlock_irq(_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_write_lock_irq_lock_ptr(class_write_lock_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_write_lock_irq_t class_write_lock_irq_constructor(rwlock_t *l) { class_write_lock_irq_t _t = { .lock = l }, *_T = &_t; _raw_write_lock_irq(_T->lock); return _t; }



typedef struct { rwlock_t *lock; unsigned long flags; } class_write_lock_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_write_lock_irqsave_destructor(class_write_lock_irqsave_t *_T) { if (_T->lock) { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _raw_write_unlock_irqrestore(_T->lock, _T->flags); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_write_lock_irqsave_lock_ptr(class_write_lock_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_write_lock_irqsave_t class_write_lock_irqsave_constructor(rwlock_t *l) { class_write_lock_irqsave_t _t = { .lock = l }, *_T = &_t; do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = _raw_write_lock_irqsave(_T->lock); } while (0); return _t; }
# 9 "../include/linux/mmzone.h" 2

# 1 "../include/linux/list_nulls.h" 1
# 21 "../include/linux/list_nulls.h"
struct hlist_nulls_head {
 struct hlist_nulls_node *first;
};

struct hlist_nulls_node {
 struct hlist_nulls_node *next, **pprev;
};
# 43 "../include/linux/list_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_a_nulls(const struct hlist_nulls_node *ptr)
{
 return ((unsigned long)ptr & 1);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_nulls_value(const struct hlist_nulls_node *ptr)
{
 return ((unsigned long)ptr) >> 1;
}
# 67 "../include/linux/list_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_nulls_unhashed(const struct hlist_nulls_node *h)
{
 return !h->pprev;
}
# 81 "../include/linux/list_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_nulls_unhashed_lockless(const struct hlist_nulls_node *h)
{
 return !({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_34(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->pprev) == sizeof(char) || sizeof(h->pprev) == sizeof(short) || sizeof(h->pprev) == sizeof(int) || sizeof(h->pprev) == sizeof(long)) || sizeof(h->pprev) == sizeof(long long))) __compiletime_assert_34(); } while (0); (*(const volatile typeof( _Generic((h->pprev), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (h->pprev))) *)&(h->pprev)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hlist_nulls_empty(const struct hlist_nulls_head *h)
{
 return is_a_nulls(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_35(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->first) == sizeof(char) || sizeof(h->first) == sizeof(short) || sizeof(h->first) == sizeof(int) || sizeof(h->first) == sizeof(long)) || sizeof(h->first) == sizeof(long long))) __compiletime_assert_35(); } while (0); (*(const volatile typeof( _Generic((h->first), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (h->first))) *)&(h->first)); }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_add_head(struct hlist_nulls_node *n,
     struct hlist_nulls_head *h)
{
 struct hlist_nulls_node *first = h->first;

 n->next = first;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_36(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_36(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&h->first); } while (0); } while (0);
 h->first = n;
 if (!is_a_nulls(first))
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_37(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(first->pprev) == sizeof(char) || sizeof(first->pprev) == sizeof(short) || sizeof(first->pprev) == sizeof(int) || sizeof(first->pprev) == sizeof(long)) || sizeof(first->pprev) == sizeof(long long))) __compiletime_assert_37(); } while (0); do { *(volatile typeof(first->pprev) *)&(first->pprev) = (&n->next); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __hlist_nulls_del(struct hlist_nulls_node *n)
{
 struct hlist_nulls_node *next = n->next;
 struct hlist_nulls_node **pprev = n->pprev;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_38(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pprev) == sizeof(char) || sizeof(*pprev) == sizeof(short) || sizeof(*pprev) == sizeof(int) || sizeof(*pprev) == sizeof(long)) || sizeof(*pprev) == sizeof(long long))) __compiletime_assert_38(); } while (0); do { *(volatile typeof(*pprev) *)&(*pprev) = (next); } while (0); } while (0);
 if (!is_a_nulls(next))
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_39(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->pprev) == sizeof(char) || sizeof(next->pprev) == sizeof(short) || sizeof(next->pprev) == sizeof(int) || sizeof(next->pprev) == sizeof(long)) || sizeof(next->pprev) == sizeof(long long))) __compiletime_assert_39(); } while (0); do { *(volatile typeof(next->pprev) *)&(next->pprev) = (pprev); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_del(struct hlist_nulls_node *n)
{
 __hlist_nulls_del(n);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_40(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_40(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (((void *) 0x122 + 0)); } while (0); } while (0);
}
# 11 "../include/linux/mmzone.h" 2
# 1 "../include/linux/wait.h" 1
# 11 "../include/linux/wait.h"
# 1 "./arch/hexagon/include/generated/asm/current.h" 1
# 1 "../include/asm-generic/current.h" 1
# 2 "./arch/hexagon/include/generated/asm/current.h" 2
# 12 "../include/linux/wait.h" 2

typedef struct wait_queue_entry wait_queue_entry_t;

typedef int (*wait_queue_func_t)(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);
int default_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);
# 28 "../include/linux/wait.h"
struct wait_queue_entry {
 unsigned int flags;
 void *private;
 wait_queue_func_t func;
 struct list_head entry;
};

struct wait_queue_head {
 spinlock_t lock;
 struct list_head head;
};
typedef struct wait_queue_head wait_queue_head_t;

struct task_struct;
# 62 "../include/linux/wait.h"
extern void __init_waitqueue_head(struct wait_queue_head *wq_head, const char *name, struct lock_class_key *);
# 80 "../include/linux/wait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
{
 wq_entry->flags = 0;
 wq_entry->private = p;
 wq_entry->func = default_wake_function;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
init_waitqueue_func_entry(struct wait_queue_entry *wq_entry, wait_queue_func_t func)
{
 wq_entry->flags = 0;
 wq_entry->private = ((void *)0);
 wq_entry->func = func;
}
# 125 "../include/linux/wait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int waitqueue_active(struct wait_queue_head *wq_head)
{
 return !list_empty(&wq_head->head);
}
# 138 "../include/linux/wait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool wq_has_single_sleeper(struct wait_queue_head *wq_head)
{
 return list_is_singular(&wq_head->head);
}
# 151 "../include/linux/wait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool wq_has_sleeper(struct wait_queue_head *wq_head)
{







 __asm__ __volatile__("": : :"memory");
 return waitqueue_active(wq_head);
}

extern void add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void add_wait_queue_priority(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
 struct list_head *head = &wq_head->head;
 struct wait_queue_entry *wq;

 for (wq = ({ void *__mptr = (void *)((&wq_head->head)->next); _Static_assert(__builtin_types_compatible_p(typeof(*((&wq_head->head)->next)), typeof(((typeof(*wq) *)0)->entry)) || __builtin_types_compatible_p(typeof(*((&wq_head->head)->next)), typeof(void)), "pointer type mismatch in container_of()"); ((typeof(*wq) *)(__mptr - __builtin_offsetof(typeof(*wq), entry))); }); !list_is_head(&wq->entry, (&wq_head->head)); wq = ({ void *__mptr = (void *)((wq)->entry.next); _Static_assert(__builtin_types_compatible_p(typeof(*((wq)->entry.next)), typeof(((typeof(*(wq)) *)0)->entry)) || __builtin_types_compatible_p(typeof(*((wq)->entry.next)), typeof(void)), "pointer type mismatch in container_of()"); ((typeof(*(wq)) *)(__mptr - __builtin_offsetof(typeof(*(wq)), entry))); })) {
  if (!(wq->flags & 0x10))
   break;
  head = &wq->entry;
 }
 list_add(&wq_entry->entry, head);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
 wq_entry->flags |= 0x01;
 __add_wait_queue(wq_head, wq_entry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __add_wait_queue_entry_tail(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
 list_add_tail(&wq_entry->entry, &wq_head->head);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__add_wait_queue_entry_tail_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
 wq_entry->flags |= 0x01;
 __add_wait_queue_entry_tail(wq_head, wq_entry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
 list_del(&wq_entry->entry);
}

int __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
void __wake_up_on_current_cpu(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);
void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode);
void __wake_up_pollfree(struct wait_queue_head *wq_head);
# 260 "../include/linux/wait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wake_up_pollfree(struct wait_queue_head *wq_head)
{







 if (waitqueue_active(wq_head))
  __wake_up_pollfree(wq_head);
}
# 285 "../include/linux/wait.h"
extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
# 770 "../include/linux/wait.h"
extern int do_wait_intr(wait_queue_head_t *, wait_queue_entry_t *);
extern int do_wait_intr_irq(wait_queue_head_t *, wait_queue_entry_t *);
# 1192 "../include/linux/wait.h"
void prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
bool prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout);
int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
int autoremove_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
# 1217 "../include/linux/wait.h"
typedef int (*task_call_f)(struct task_struct *p, void *arg);
extern int task_call_func(struct task_struct *p, task_call_f func, void *arg);
# 12 "../include/linux/mmzone.h" 2





# 1 "../include/linux/seqlock.h" 1
# 19 "../include/linux/seqlock.h"
# 1 "../include/linux/mutex.h" 1
# 14 "../include/linux/mutex.h"
# 1 "./arch/hexagon/include/generated/asm/current.h" 1
# 15 "../include/linux/mutex.h" 2





# 1 "../include/linux/osq_lock.h" 1
# 10 "../include/linux/osq_lock.h"
struct optimistic_spin_queue {




 atomic_t tail;
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void osq_lock_init(struct optimistic_spin_queue *lock)
{
 atomic_set(&lock->tail, (0));
}

extern bool osq_lock(struct optimistic_spin_queue *lock);
extern void osq_unlock(struct optimistic_spin_queue *lock);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool osq_is_locked(struct optimistic_spin_queue *lock)
{
 return atomic_read(&lock->tail) != (0);
}
# 21 "../include/linux/mutex.h" 2


# 1 "../include/linux/mutex_types.h" 1
# 41 "../include/linux/mutex_types.h"
struct mutex {
 atomic_long_t owner;
 raw_spinlock_t wait_lock;



 struct list_head wait_list;

 void *magic;


 struct lockdep_map dep_map;

};
# 24 "../include/linux/mutex.h" 2

struct device;
# 42 "../include/linux/mutex.h"
extern void mutex_destroy(struct mutex *lock);
# 78 "../include/linux/mutex.h"
extern void __mutex_init(struct mutex *lock, const char *name,
    struct lock_class_key *key);







extern bool mutex_is_locked(struct mutex *lock);
# 124 "../include/linux/mutex.h"
int __devm_mutex_init(struct device *dev, struct mutex *lock);
# 152 "../include/linux/mutex.h"
extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);

extern int __attribute__((__warn_unused_result__)) mutex_lock_interruptible_nested(struct mutex *lock,
     unsigned int subclass);
extern int __attribute__((__warn_unused_result__)) mutex_lock_killable_nested(struct mutex *lock,
     unsigned int subclass);
extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass);
# 191 "../include/linux/mutex.h"
extern int mutex_trylock(struct mutex *lock);
extern void mutex_unlock(struct mutex *lock);

extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);

typedef struct mutex * class_mutex_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_mutex_destructor(struct mutex * *p) { struct mutex * _T = *p; if (_T) { mutex_unlock(_T); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mutex * class_mutex_constructor(struct mutex * _T) { struct mutex * t = ({ mutex_lock_nested(_T, 0); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_mutex_lock_ptr(class_mutex_t *_T) { return *_T; }
typedef class_mutex_t class_mutex_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_mutex_try_destructor(class_mutex_t *p){ class_mutex_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_mutex_t class_mutex_try_constructor(class_mutex_t _T) { class_mutex_t t = ({ void *_t = _T; if (_T && !(mutex_trylock(_T))) _t = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_mutex_try_lock_ptr(class_mutex_t *_T) { return class_mutex_lock_ptr(_T); }
typedef class_mutex_t class_mutex_intr_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_mutex_intr_destructor(class_mutex_t *p){ class_mutex_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_mutex_t class_mutex_intr_constructor(class_mutex_t _T) { class_mutex_t t = ({ void *_t = _T; if (_T && !(mutex_lock_interruptible_nested(_T, 0) == 0)) _t = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_mutex_intr_lock_ptr(class_mutex_t *_T) { return class_mutex_lock_ptr(_T); }
# 20 "../include/linux/seqlock.h" 2

# 1 "../include/linux/seqlock_types.h" 1
# 33 "../include/linux/seqlock_types.h"
typedef struct seqcount {
 unsigned sequence;

 struct lockdep_map dep_map;

} seqcount_t;
# 68 "../include/linux/seqlock_types.h"
typedef struct seqcount_raw_spinlock { seqcount_t seqcount; raw_spinlock_t *lock; } seqcount_raw_spinlock_t;
typedef struct seqcount_spinlock { seqcount_t seqcount; spinlock_t *lock; } seqcount_spinlock_t;
typedef struct seqcount_rwlock { seqcount_t seqcount; rwlock_t *lock; } seqcount_rwlock_t;
typedef struct seqcount_mutex { seqcount_t seqcount; struct mutex *lock; } seqcount_mutex_t;
# 84 "../include/linux/seqlock_types.h"
typedef struct {




 seqcount_spinlock_t seqcount;
 spinlock_t lock;
} seqlock_t;
# 22 "../include/linux/seqlock.h" 2
# 41 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __seqcount_init(seqcount_t *s, const char *name,
       struct lock_class_key *key)
{



 lockdep_init_map(&s->dep_map, name, key, 0);
 s->sequence = 0;
}
# 66 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seqcount_lockdep_reader_access(const seqcount_t *s)
{
 seqcount_t *l = (seqcount_t *)s;
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 lock_acquire(&l->dep_map, 0, 0, 2, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 lock_release(&l->dep_map, (unsigned long)__builtin_return_address(0));
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
}
# 199 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) seqcount_t *__seqprop_ptr(seqcount_t *s)
{
 return s;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const seqcount_t *__seqprop_const_ptr(const seqcount_t *s)
{
 return s;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned __seqprop_sequence(const seqcount_t *s)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_41(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->sequence) == sizeof(char) || sizeof(s->sequence) == sizeof(short) || sizeof(s->sequence) == sizeof(int) || sizeof(s->sequence) == sizeof(long)) || sizeof(s->sequence) == sizeof(long long))) __compiletime_assert_41(); } while (0); (*(const volatile typeof( _Generic((s->sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->sequence))) *)&(s->sequence)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __seqprop_preemptible(const seqcount_t *s)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __seqprop_assert(const seqcount_t *s)
{
 do { ({ bool __ret_do_once = !!(0 && (debug_locks && !({ typeof(lockdep_recursion) pscr_ret__; do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(lockdep_recursion)) { case 1: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_42(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_42(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void 
*__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : 
:"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_43(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_43(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = 
(typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_44(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_44(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; 
(void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_45(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do 
{ const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_45(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; 
default: __bad_size_call_parameter(); break; } pscr_ret__; })) && (preempt_count() == 0 && ({ typeof(hardirqs_enabled) pscr_ret__; do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(hardirqs_enabled)) { case 1: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_46(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_46(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void 
*__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = 
({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_47(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_47(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 
0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_48(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == 
sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_48(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ 
*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_49(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_49(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; }))); if (({ static bool __attribute__((__section__(".data.once"))) 
__already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/seqlock.h", 221, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }); } while (0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) seqcount_t * __seqprop_raw_spinlock_ptr(seqcount_raw_spinlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) const seqcount_t * __seqprop_raw_spinlock_const_ptr(const seqcount_raw_spinlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned __seqprop_raw_spinlock_sequence(const seqcount_raw_spinlock_t *s) { unsigned seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_50(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_50(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); if (!0) return seq; if (false && __builtin_expect(!!(seq & 1), 0)) { _raw_spin_lock(s->lock); _raw_spin_unlock(s->lock); seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_51(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) 
== sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_51(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); } return seq; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __seqprop_raw_spinlock_preemptible(const seqcount_raw_spinlock_t *s) { if (!0) return false; return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __seqprop_raw_spinlock_assert(const seqcount_raw_spinlock_t *s) { do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(s->lock)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/seqlock.h", 226, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) seqcount_t * __seqprop_spinlock_ptr(seqcount_spinlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) const seqcount_t * __seqprop_spinlock_const_ptr(const seqcount_spinlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned __seqprop_spinlock_sequence(const seqcount_spinlock_t *s) { unsigned seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_52(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_52(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); if (!0) return seq; if (0 && __builtin_expect(!!(seq & 1), 0)) { spin_lock(s->lock); spin_unlock(s->lock); seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_53(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || 
sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_53(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); } return seq; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __seqprop_spinlock_preemptible(const seqcount_spinlock_t *s) { if (!0) return 0; return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __seqprop_spinlock_assert(const seqcount_spinlock_t *s) { do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(s->lock)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/seqlock.h", 227, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) seqcount_t * __seqprop_rwlock_ptr(seqcount_rwlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) const seqcount_t * __seqprop_rwlock_const_ptr(const seqcount_rwlock_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned __seqprop_rwlock_sequence(const seqcount_rwlock_t *s) { unsigned seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_54(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_54(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); if (!0) return seq; if (0 && __builtin_expect(!!(seq & 1), 0)) { _raw_read_lock(s->lock); _raw_read_unlock(s->lock); seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_55(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || 
sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_55(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); } return seq; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __seqprop_rwlock_preemptible(const seqcount_rwlock_t *s) { if (!0) return 0; return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __seqprop_rwlock_assert(const seqcount_rwlock_t *s) { do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(s->lock)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/seqlock.h", 228, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) seqcount_t * __seqprop_mutex_ptr(seqcount_mutex_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) const seqcount_t * __seqprop_mutex_const_ptr(const seqcount_mutex_t *s) { return &s->seqcount; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned __seqprop_mutex_sequence(const seqcount_mutex_t *s) { unsigned seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_56(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_56(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); if (!0) return seq; if (true && __builtin_expect(!!(seq & 1), 0)) { mutex_lock_nested(s->lock, 0); mutex_unlock(s->lock); seq = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_57(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || 
sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_57(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }); } return seq; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __seqprop_mutex_preemptible(const seqcount_mutex_t *s) { if (!0) return true; return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __seqprop_mutex_assert(const seqcount_mutex_t *s) { do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(s->lock)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/seqlock.h", 229, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0); }
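An editorial aside on the expanded `__seqprop_raw_spinlock_*` / `__seqprop_spinlock_*` / `__seqprop_rwlock_*` / `__seqprop_mutex_*` families above: the per-lock accessors are selected at compile time with C11 `_Generic`, keyed on the `seqcount_LOCKNAME_t` type (visible later in the dump inside `read_seqbegin()`). A stripped-down userspace model of that dispatch, with illustrative type and function names that are not the kernel's:

```c
#include <assert.h>

/* Two hypothetical seqcount flavours, standing in for
 * seqcount_t vs. seqcount_spinlock_t etc. */
typedef struct { unsigned seq; } seq_plain_t;
typedef struct { unsigned seq; int lock; } seq_locked_t;

static unsigned plain_sequence(const seq_plain_t *s)   { return s->seq; }
static unsigned locked_sequence(const seq_locked_t *s) { return s->seq; }

/* Compile-time dispatch: pick the accessor matching the type of *s,
 * exactly the shape the kernel's seqprop_sequence() macro uses. */
#define seq_sequence(s)                          \
    _Generic(*(s),                               \
             seq_plain_t:  plain_sequence,       \
             seq_locked_t: locked_sequence)(s)
```

Because `_Generic` resolves before code generation, a mismatched type is a compile error rather than a runtime bug, which is the point of the kernel's `seqcount_LOCKNAME_t` wrappers.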
# 380 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int do___read_seqcount_retry(const seqcount_t *s, unsigned start)
{
 kcsan_atomic_next(0);
 return __builtin_expect(!!(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_58(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->sequence) == sizeof(char) || sizeof(s->sequence) == sizeof(short) || sizeof(s->sequence) == sizeof(int) || sizeof(s->sequence) == sizeof(long)) || sizeof(s->sequence) == sizeof(long long))) __compiletime_assert_58(); } while (0); (*(const volatile typeof( _Generic((s->sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->sequence))) *)&(s->sequence)); }) != start), 0);
}
# 400 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int do_read_seqcount_retry(const seqcount_t *s, unsigned start)
{
 __asm__ __volatile__("": : :"memory");
 return do___read_seqcount_retry(s, start);
}
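For readers untangling the expansion: `do___read_seqcount_retry()` / `do_read_seqcount_retry()` above reduce to "re-read the sequence and compare with the snapshot". A minimal userspace model of the reader side, with hypothetical `my_` names (the real kernel code additionally carries KCSAN annotations and `smp_rmb()` placement that this sketch approximates with acquire loads):

```c
#include <assert.h>

struct my_seqcount { unsigned sequence; };

/* Snapshot the sequence; spin while a writer holds it (odd value). */
static unsigned my_read_begin(const struct my_seqcount *s)
{
    unsigned seq;
    do {
        seq = __atomic_load_n(&s->sequence, __ATOMIC_ACQUIRE);
    } while (seq & 1);              /* odd => write in progress */
    return seq;
}

/* Non-zero when the snapshot is stale and the read must be redone. */
static int my_read_retry(const struct my_seqcount *s, unsigned start)
{
    return __atomic_load_n(&s->sequence, __ATOMIC_ACQUIRE) != start;
}
```

Any writer activity between begin and retry changes the sequence, so the comparison catches both a completed write (count moved on) and an in-flight one (begin never returns an odd count).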
# 420 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_raw_write_seqcount_begin(seqcount_t *s)
{
 kcsan_nestable_atomic_begin();
 s->sequence++;
 __asm__ __volatile__("": : :"memory");
}
# 441 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_raw_write_seqcount_end(seqcount_t *s)
{
 __asm__ __volatile__("": : :"memory");
 s->sequence++;
 kcsan_nestable_atomic_end();
}
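The writer pair `do_raw_write_seqcount_begin()` / `do_raw_write_seqcount_end()` above is the classic odd/even protocol: the first increment makes the count odd (readers see a write in flight), the second makes it even again but different from any pre-write snapshot. A userspace sketch under those assumptions, with the kernel's `smp_wmb()`/KCSAN machinery reduced to compiler barriers:

```c
#include <assert.h>

struct my_seqcount { unsigned sequence; };

static void my_write_begin(struct my_seqcount *s)
{
    s->sequence++;                        /* even -> odd: readers spin */
    __asm__ __volatile__("" ::: "memory");
}

static void my_write_end(struct my_seqcount *s)
{
    __asm__ __volatile__("" ::: "memory");
    s->sequence++;                        /* odd -> even: readers retry */
}
```

Note the barrier placement mirrors the dump: after the increment in begin (data writes must not be reordered before it), before the increment in end (data writes must complete first).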
# 467 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
{
 lock_acquire(&s->dep_map, subclass, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));
 do_raw_write_seqcount_begin(s);
}
# 493 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_write_seqcount_begin(seqcount_t *s)
{
 do_write_seqcount_begin_nested(s, 0);
}
# 513 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_write_seqcount_end(seqcount_t *s)
{
 lock_release(&s->dep_map, (unsigned long)__builtin_return_address(0));
 do_raw_write_seqcount_end(s);
}
# 563 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_raw_write_seqcount_barrier(seqcount_t *s)
{
 kcsan_nestable_atomic_begin();
 s->sequence++;
 __asm__ __volatile__("": : :"memory");
 s->sequence++;
 kcsan_nestable_atomic_end();
}
# 583 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_write_seqcount_invalidate(seqcount_t *s)
{
 __asm__ __volatile__("": : :"memory");
 kcsan_nestable_atomic_begin();
 s->sequence+=2;
 kcsan_nestable_atomic_end();
}
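`do_write_seqcount_invalidate()` above is the one writer primitive that skips the odd intermediate state: `s->sequence += 2` keeps the count even (no reader ever spins) while still differing from every earlier snapshot, so all in-flight lockless readers are forced to retry. A one-function userspace model, names illustrative:

```c
#include <assert.h>

struct my_seqcount { unsigned sequence; };

/* Invalidate all concurrent read sections without ever presenting
 * an odd (writer-in-progress) count to readers. */
static void my_invalidate(struct my_seqcount *s)
{
    __asm__ __volatile__("" ::: "memory");
    s->sequence += 2;
}
```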
# 601 "../include/linux/seqlock.h"
typedef struct {
 seqcount_t seqcount;
} seqcount_latch_t;
# 630 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned raw_read_seqcount_latch(const seqcount_latch_t *s)
{




 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_59(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_59(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); });
}
# 646 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
{
 __asm__ __volatile__("": : :"memory");
 return __builtin_expect(!!(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_60(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(s->seqcount.sequence) == sizeof(char) || sizeof(s->seqcount.sequence) == sizeof(short) || sizeof(s->seqcount.sequence) == sizeof(int) || sizeof(s->seqcount.sequence) == sizeof(long)) || sizeof(s->seqcount.sequence) == sizeof(long long))) __compiletime_assert_60(); } while (0); (*(const volatile typeof( _Generic((s->seqcount.sequence), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (s->seqcount.sequence))) *)&(s->seqcount.sequence)); }) != start), 0);
}
# 734 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void raw_write_seqcount_latch(seqcount_latch_t *s)
{
 __asm__ __volatile__("": : :"memory");
 s->seqcount.sequence++;
 __asm__ __volatile__("": : :"memory");
}
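The `seqcount_latch_t` trio above (`raw_read_seqcount_latch`, `raw_read_seqcount_latch_retry`, `raw_write_seqcount_latch`) implements the two-copy latch: readers index one of two data copies by the low sequence bit and never block, while the writer flips the bit between updating each copy. A single-threaded userspace sketch of that protocol (illustrative names; the kernel interleaves `smp_wmb()` where the barriers sit here):

```c
#include <assert.h>

struct my_latch {
    unsigned sequence;
    int data[2];                   /* two copies, one always stable */
};

static void my_latch_write(struct my_latch *l, int val)
{
    l->sequence++;                 /* odd: readers now use data[1] */
    __asm__ __volatile__("" ::: "memory");
    l->data[0] = val;              /* update the idle copy */
    __asm__ __volatile__("" ::: "memory");
    l->sequence++;                 /* even: readers back on data[0] */
    __asm__ __volatile__("" ::: "memory");
    l->data[1] = val;              /* bring the other copy up to date */
}

static int my_latch_read(const struct my_latch *l)
{
    unsigned seq;
    int v;
    do {
        seq = l->sequence;
        v = l->data[seq & 1];      /* pick the copy not being written */
    } while (l->sequence != seq);  /* raced a writer: re-pick */
    return v;
}
```

The payoff over a plain seqcount is that the read side needs no spin on an odd count: there is always one consistent copy to read.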
# 770 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned read_seqbegin(const seqlock_t *sl)
{
 unsigned ret = ({ seqcount_lockdep_reader_access(_Generic(*(&sl->seqcount), seqcount_t: __seqprop_const_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_const_ptr, seqcount_spinlock_t: __seqprop_spinlock_const_ptr, seqcount_rwlock_t: __seqprop_rwlock_const_ptr, seqcount_mutex_t: __seqprop_mutex_const_ptr)(&sl->seqcount)); ({ unsigned _seq = ({ unsigned __seq; while ((__seq = _Generic(*(&sl->seqcount), seqcount_t: __seqprop_sequence, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_sequence, seqcount_spinlock_t: __seqprop_spinlock_sequence, seqcount_rwlock_t: __seqprop_rwlock_sequence, seqcount_mutex_t: __seqprop_mutex_sequence)(&sl->seqcount)) & 1) __vmyield(); kcsan_atomic_next(1000); __seq; }); __asm__ __volatile__("": : :"memory"); _seq; }); });

 kcsan_atomic_next(0);
 kcsan_flat_atomic_begin();
 return ret;
}
# 790 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned read_seqretry(const seqlock_t *sl, unsigned start)
{




 kcsan_flat_atomic_end();

 return do_read_seqcount_retry(_Generic(*(&sl->seqcount), seqcount_t: __seqprop_const_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_const_ptr, seqcount_spinlock_t: __seqprop_spinlock_const_ptr, seqcount_rwlock_t: __seqprop_rwlock_const_ptr, seqcount_mutex_t: __seqprop_mutex_const_ptr)(&sl->seqcount), start);
}
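Putting `read_seqbegin()` and `read_seqretry()` together: the caller pattern the expanded `_Generic` dispatch above serves is a retry loop that re-reads the protected fields until a consistent snapshot is obtained. A self-contained userspace model of that loop (simplified: no `_Generic` dispatch, no lockdep, hypothetical `my_` names):

```c
#include <assert.h>

struct my_seqdata {
    unsigned sequence;
    int x, y;                       /* fields updated together */
};

static unsigned my_seqbegin(const struct my_seqdata *d)
{
    unsigned seq;
    do { seq = d->sequence; } while (seq & 1);  /* wait out writers */
    return seq;
}

static int my_seqretry(const struct my_seqdata *d, unsigned start)
{
    __asm__ __volatile__("" ::: "memory");
    return d->sequence != start;
}

/* Typical reader: loop until (x, y) were read from one write epoch. */
static int my_read_sum(const struct my_seqdata *d)
{
    unsigned seq;
    int x, y;
    do {
        seq = my_seqbegin(d);
        x = d->x;
        y = d->y;
    } while (my_seqretry(d, seq));
    return x + y;
}
```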
# 820 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_seqlock(seqlock_t *sl)
{
 spin_lock(&sl->lock);
 do_write_seqcount_begin(&sl->seqcount.seqcount);
}
# 833 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_sequnlock(seqlock_t *sl)
{
 do_write_seqcount_end(&sl->seqcount.seqcount);
 spin_unlock(&sl->lock);
}
# 846 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_seqlock_bh(seqlock_t *sl)
{
 spin_lock_bh(&sl->lock);
 do_write_seqcount_begin(&sl->seqcount.seqcount);
}
# 860 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_sequnlock_bh(seqlock_t *sl)
{
 do_write_seqcount_end(&sl->seqcount.seqcount);
 spin_unlock_bh(&sl->lock);
}
# 873 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_seqlock_irq(seqlock_t *sl)
{
 spin_lock_irq(&sl->lock);
 do_write_seqcount_begin(&sl->seqcount.seqcount);
}
# 886 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_sequnlock_irq(seqlock_t *sl)
{
 do_write_seqcount_end(&sl->seqcount.seqcount);
 spin_unlock_irq(&sl->lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __write_seqlock_irqsave(seqlock_t *sl)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = _raw_spin_lock_irqsave(spinlock_check(&sl->lock)); } while (0); } while (0);
 do_write_seqcount_begin(&sl->seqcount.seqcount);
 return flags;
}
# 923 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
{
 do_write_seqcount_end(&sl->seqcount.seqcount);
 spin_unlock_irqrestore(&sl->lock, flags);
}
# 946 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_seqlock_excl(seqlock_t *sl)
{
 spin_lock(&sl->lock);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_sequnlock_excl(seqlock_t *sl)
{
 spin_unlock(&sl->lock);
}
# 969 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_seqlock_excl_bh(seqlock_t *sl)
{
 spin_lock_bh(&sl->lock);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_sequnlock_excl_bh(seqlock_t *sl)
{
 spin_unlock_bh(&sl->lock);
}
# 993 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_seqlock_excl_irq(seqlock_t *sl)
{
 spin_lock_irq(&sl->lock);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_sequnlock_excl_irq(seqlock_t *sl)
{
 spin_unlock_irq(&sl->lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = _raw_spin_lock_irqsave(spinlock_check(&sl->lock)); } while (0); } while (0);
 return flags;
}
# 1036 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
{
 spin_unlock_irqrestore(&sl->lock, flags);
}
# 1073 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
{
 if (!(*seq & 1))
  *seq = read_seqbegin(lock);
 else
  read_seqlock_excl(lock);
}
# 1088 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int need_seqretry(seqlock_t *lock, int seq)
{
 return !(seq & 1) && read_seqretry(lock, seq);
}
# 1101 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void done_seqretry(seqlock_t *lock, int seq)
{
 if (seq & 1)
  read_sequnlock_excl(lock);
}
# 1127 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
{
 unsigned long flags = 0;

 if (!(*seq & 1))
  *seq = read_seqbegin(lock);
 else
  do { flags = __read_seqlock_excl_irqsave(lock); } while (0);

 return flags;
}
# 1152 "../include/linux/seqlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
{
 if (seq & 1)
  read_sequnlock_excl_irqrestore(lock, flags);
}
# 18 "../include/linux/mmzone.h" 2
# 1 "../include/linux/nodemask.h" 1
# 96 "../include/linux/nodemask.h"
# 1 "../include/linux/nodemask_types.h" 1







typedef struct { unsigned long bits[((((1 << 0)) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))]; } nodemask_t;
# 97 "../include/linux/nodemask.h" 2

# 1 "../include/linux/random.h" 1
# 10 "../include/linux/random.h"
# 1 "../include/uapi/linux/random.h" 1
# 12 "../include/uapi/linux/random.h"
# 1 "../include/uapi/linux/ioctl.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/ioctl.h" 1
# 1 "../include/asm-generic/ioctl.h" 1




# 1 "../include/uapi/asm-generic/ioctl.h" 1
# 6 "../include/asm-generic/ioctl.h" 2





extern unsigned int __invalid_size_argument_for_IOC;
# 2 "./arch/hexagon/include/generated/uapi/asm/ioctl.h" 2
# 6 "../include/uapi/linux/ioctl.h" 2
# 13 "../include/uapi/linux/random.h" 2
# 1 "../include/linux/irqnr.h" 1




# 1 "../include/uapi/linux/irqnr.h" 1
# 6 "../include/linux/irqnr.h" 2


extern int nr_irqs;
extern struct irq_desc *irq_to_desc(unsigned int irq);
unsigned int irq_get_next_irq(unsigned int offset);
# 14 "../include/uapi/linux/random.h" 2
# 41 "../include/uapi/linux/random.h"
struct rand_pool_info {
 int entropy_count;
 int buf_size;
 __u32 buf[];
};
# 66 "../include/uapi/linux/random.h"
struct vgetrandom_opaque_params {
 __u32 size_of_opaque_state;
 __u32 mmap_prot;
 __u32 mmap_flags;
 __u32 reserved[13];
};
# 11 "../include/linux/random.h" 2

struct notifier_block;

void add_device_randomness(const void *buf, size_t len);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) add_bootloader_randomness(const void *buf, size_t len);
void add_input_randomness(unsigned int type, unsigned int code,
     unsigned int value) ;
void add_interrupt_randomness(int irq) ;
void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void add_latent_entropy(void)
{



 add_device_randomness(((void *)0), 0);

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int register_random_vmfork_notifier(struct notifier_block *nb) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unregister_random_vmfork_notifier(struct notifier_block *nb) { return 0; }


void get_random_bytes(void *buf, size_t len);
u8 get_random_u8(void);
u16 get_random_u16(void);
u32 get_random_u32(void);
u64 get_random_u64(void);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_random_long(void)
{



 return get_random_u32();

}

u32 __get_random_u32_below(u32 ceil);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 get_random_u32_below(u32 ceil)
{
 if (!__builtin_constant_p(ceil))
  return __get_random_u32_below(ceil);
# 73 "../include/linux/random.h"
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_61(void) __attribute__((__error__("get_random_u32_below() must take ceil > 0"))); if (!(!(!ceil))) __compiletime_assert_61(); } while (0);
 if (ceil <= 1)
  return 0;
 for (;;) {
  if (ceil <= 1U << 8) {
   u32 mult = ceil * get_random_u8();
   if (__builtin_expect(!!(is_power_of_2(ceil) || (u8)mult >= (1U << 8) % ceil), 1))
    return mult >> 8;
  } else if (ceil <= 1U << 16) {
   u32 mult = ceil * get_random_u16();
   if (__builtin_expect(!!(is_power_of_2(ceil) || (u16)mult >= (1U << 16) % ceil), 1))
    return mult >> 16;
  } else {
   u64 mult = (u64)ceil * get_random_u32();
   if (__builtin_expect(!!(is_power_of_2(ceil) || (u32)mult >= -ceil % ceil), 1))
    return mult >> 32;
  }
 }
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 get_random_u32_above(u32 floor)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_62(void) __attribute__((__error__("get_random_u32_above() must take floor < U32_MAX"))); if (!(!(__builtin_constant_p(floor) && floor == ((u32)~0U)))) __compiletime_assert_62(); } while (0);

 return floor + 1 + get_random_u32_below(((u32)~0U) - floor);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 get_random_u32_inclusive(u32 floor, u32 ceil)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_63(void) __attribute__((__error__("get_random_u32_inclusive() must take floor <= ceil"))); if (!(!(__builtin_constant_p(floor) && __builtin_constant_p(ceil) && (floor > ceil || ceil - floor == ((u32)~0U))))) __compiletime_assert_63(); } while (0);


 return floor + get_random_u32_below(ceil - floor + 1);
}

void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) random_init_early(const char *command_line);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) random_init(void);
bool rng_is_initialized(void);
int wait_for_random_bytes(void);
int execute_with_initialized_rng(struct notifier_block *nb);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_bytes_wait(void *buf, size_t nbytes)
{
 int ret = wait_for_random_bytes();
 get_random_bytes(buf, nbytes);
 return ret;
}
# 141 "../include/linux/random.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_u8_wait(u8 *out) { int ret = wait_for_random_bytes(); if (__builtin_expect(!!(ret), 0)) return ret; *out = get_random_u8(); return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_u16_wait(u16 *out) { int ret = wait_for_random_bytes(); if (__builtin_expect(!!(ret), 0)) return ret; *out = get_random_u16(); return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_u32_wait(u32 *out) { int ret = wait_for_random_bytes(); if (__builtin_expect(!!(ret), 0)) return ret; *out = get_random_u32(); return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_u64_wait(u64 *out) { int ret = wait_for_random_bytes(); if (__builtin_expect(!!(ret), 0)) return ret; *out = get_random_u64(); return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_random_long_wait(unsigned long *out) { int ret = wait_for_random_bytes(); if (__builtin_expect(!!(ret), 0)) return ret; *out = get_random_long(); return 0; }







# 1 "../include/linux/prandom.h" 1
# 12 "../include/linux/prandom.h"
# 1 "../include/linux/once.h" 1





# 1 "../include/linux/jump_label.h" 1
# 79 "../include/linux/jump_label.h"
extern bool static_key_initialized;





struct static_key {
 atomic_t enabled;
# 107 "../include/linux/jump_label.h"
};
# 191 "../include/linux/jump_label.h"
enum jump_label_type {
 JUMP_LABEL_NOP = 0,
 JUMP_LABEL_JMP,
};

struct module;
# 259 "../include/linux/jump_label.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int static_key_count(struct static_key *key)
{
 return raw_atomic_read(&key->enabled);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void jump_label_init(void)
{
 static_key_initialized = true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void jump_label_init_ro(void) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool static_key_false(struct static_key *key)
{
 if (__builtin_expect(!!(static_key_count(key) > 0), 0))
  return true;
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool static_key_true(struct static_key *key)
{
 if (__builtin_expect(!!(static_key_count(key) > 0), 1))
  return true;
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool static_key_fast_inc_not_disabled(struct static_key *key)
{
 int v;

 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 289, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });




 v = atomic_read(&key->enabled);
 do {
  if (v < 0 || (v + 1) < 0)
   return false;
 } while (!__builtin_expect(!!(atomic_try_cmpxchg(&key->enabled, &v, v + 1)), 1));
 return true;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void static_key_slow_dec(struct static_key *key)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 305, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 atomic_dec(&key->enabled);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int jump_label_text_reserved(void *start, void *end)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void jump_label_lock(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void jump_label_unlock(void) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void static_key_enable(struct static_key *key)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 322, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });

 if (atomic_read(&key->enabled) != 0) {
  ({ bool __ret_do_once = !!(atomic_read(&key->enabled) != 1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 325, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return;
 }
 atomic_set(&key->enabled, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void static_key_disable(struct static_key *key)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 333, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });

 if (atomic_read(&key->enabled) != 1) {
  ({ bool __ret_do_once = !!(atomic_read(&key->enabled) != 0); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label.h", 336, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return;
 }
 atomic_set(&key->enabled, 0);
}
# 362 "../include/linux/jump_label.h"
struct static_key_true {
 struct static_key key;
};

struct static_key_false {
 struct static_key key;
};
# 416 "../include/linux/jump_label.h"
extern bool ____wrong_branch_error(void);
# 7 "../include/linux/once.h" 2




bool __do_once_start(bool *done, unsigned long *flags);
void __do_once_done(bool *done, struct static_key_true *once_key,
      unsigned long *flags, struct module *mod);


bool __do_once_sleepable_start(bool *done);
void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
         struct module *mod);
# 13 "../include/linux/prandom.h" 2
# 1 "../include/linux/random.h" 1
# 14 "../include/linux/prandom.h" 2

struct rnd_state {
 __u32 s1, s2, s3, s4;
};

u32 prandom_u32_state(struct rnd_state *state);
void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
void prandom_seed_full_state(struct rnd_state *pcpu_state);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __seed(u32 x, u32 m)
{
 return (x < m) ? x + m : x;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void prandom_seed_state(struct rnd_state *state, u64 seed)
{
 u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;

 state->s1 = __seed(i, 2U);
 state->s2 = __seed(i, 8U);
 state->s3 = __seed(i, 16U);
 state->s4 = __seed(i, 128U);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 next_pseudo_random32(u32 seed)
{
 return seed * 1664525 + 1013904223;
}
# 154 "../include/linux/random.h" 2







extern const struct file_operations random_fops, urandom_fops;
# 99 "../include/linux/nodemask.h" 2

extern nodemask_t _unused_nodemask_arg_;
# 110 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __nodemask_pr_numnodes(const nodemask_t *m)
{
 return m ? (1 << 0) : 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const unsigned long *__nodemask_pr_bits(const nodemask_t *m)
{
 return m ? m->bits : ((void *)0);
}
# 129 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __node_set(int node, volatile nodemask_t *dstp)
{
 set_bit(node, dstp->bits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __node_clear(int node, volatile nodemask_t *dstp)
{
 clear_bit(node, dstp->bits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_setall(nodemask_t *dstp, unsigned int nbits)
{
 bitmap_fill(dstp->bits, nbits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_clear(nodemask_t *dstp, unsigned int nbits)
{
 bitmap_zero(dstp->bits, nbits);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __node_test_and_set(int node, nodemask_t *addr)
{
 return test_and_set_bit(node, addr->bits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_and(nodemask_t *dstp, const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_or(nodemask_t *dstp, const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 bitmap_or(dstp->bits, src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_xor(nodemask_t *dstp, const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_andnot(nodemask_t *dstp, const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_complement(nodemask_t *dstp,
     const nodemask_t *srcp, unsigned int nbits)
{
 bitmap_complement(dstp->bits, srcp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __nodes_equal(const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 return bitmap_equal(src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __nodes_intersects(const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 return bitmap_intersects(src1p->bits, src2p->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __nodes_subset(const nodemask_t *src1p,
     const nodemask_t *src2p, unsigned int nbits)
{
 return bitmap_subset(src1p->bits, src2p->bits, nbits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __nodes_empty(const nodemask_t *srcp, unsigned int nbits)
{
 return bitmap_empty(srcp->bits, nbits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __nodes_full(const nodemask_t *srcp, unsigned int nbits)
{
 return bitmap_full(srcp->bits, nbits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __nodes_weight(const nodemask_t *srcp, unsigned int nbits)
{
 return bitmap_weight(srcp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_shift_right(nodemask_t *dstp,
     const nodemask_t *srcp, int n, int nbits)
{
 bitmap_shift_right(dstp->bits, srcp->bits, n, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_shift_left(nodemask_t *dstp,
     const nodemask_t *srcp, int n, int nbits)
{
 bitmap_shift_left(dstp->bits, srcp->bits, n, nbits);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __first_node(const nodemask_t *srcp)
{
 return ({ unsigned int __UNIQUE_ID_x_64 = ((1 << 0)); unsigned int __UNIQUE_ID_y_65 = (find_first_bit(srcp->bits, (1 << 0))); ((__UNIQUE_ID_x_64) < (__UNIQUE_ID_y_65) ? (__UNIQUE_ID_x_64) : (__UNIQUE_ID_y_65)); });
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __next_node(int n, const nodemask_t *srcp)
{
 return ({ unsigned int __UNIQUE_ID_x_66 = ((1 << 0)); unsigned int __UNIQUE_ID_y_67 = (find_next_bit(srcp->bits, (1 << 0), n+1)); ((__UNIQUE_ID_x_66) < (__UNIQUE_ID_y_67) ? (__UNIQUE_ID_x_66) : (__UNIQUE_ID_y_67)); });
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __next_node_in(int node, const nodemask_t *srcp)
{
 unsigned int ret = __next_node(node, srcp);

 if (ret == (1 << 0))
  ret = __first_node(srcp);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_nodemask_of_node(nodemask_t *mask, int node)
{
 __nodes_clear(&(*mask), (1 << 0));
 __node_set((node), &(*mask));
}
# 307 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __first_unset_node(const nodemask_t *maskp)
{
 return ({ unsigned int __UNIQUE_ID_x_68 = ((1 << 0)); unsigned int __UNIQUE_ID_y_69 = (find_first_zero_bit(maskp->bits, (1 << 0))); ((__UNIQUE_ID_x_68) < (__UNIQUE_ID_y_69) ? (__UNIQUE_ID_x_68) : (__UNIQUE_ID_y_69)); });

}
# 341 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __nodemask_parse_user(const char *buf, int len,
     nodemask_t *dstp, int nbits)
{
 return bitmap_parse_user(buf, len, dstp->bits, nbits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __nodelist_parse(const char *buf, nodemask_t *dstp, int nbits)
{
 return bitmap_parselist(buf, dstp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __node_remap(int oldbit,
  const nodemask_t *oldp, const nodemask_t *newp, int nbits)
{
 return bitmap_bitremap(oldbit, oldp->bits, newp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_remap(nodemask_t *dstp, const nodemask_t *srcp,
  const nodemask_t *oldp, const nodemask_t *newp, int nbits)
{
 bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_onto(nodemask_t *dstp, const nodemask_t *origp,
  const nodemask_t *relmapp, int nbits)
{
 bitmap_onto(dstp->bits, origp->bits, relmapp->bits, nbits);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp,
  int sz, int nbits)
{
 bitmap_fold(dstp->bits, origp->bits, sz, nbits);
}
# 398 "../include/linux/nodemask.h"
enum node_states {
 N_POSSIBLE,
 N_ONLINE,
 N_NORMAL_MEMORY,



 N_HIGH_MEMORY = N_NORMAL_MEMORY,

 N_MEMORY,
 N_CPU,
 N_GENERIC_INITIATOR,
 NR_NODE_STATES
};






extern nodemask_t node_states[NR_NODE_STATES];
# 472 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int node_state(int node, enum node_states state)
{
 return node == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_set_state(int node, enum node_states state)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_clear_state(int node, enum node_states state)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int num_node_state(enum node_states state)
{
 return 1;
}
# 505 "../include/linux/nodemask.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int node_random(const nodemask_t *maskp)
{
# 524 "../include/linux/nodemask.h"
 return 0;

}
# 554 "../include/linux/nodemask.h"
struct nodemask_scratch {
 nodemask_t mask1;
 nodemask_t mask2;
};
# 19 "../include/linux/mmzone.h" 2
# 1 "../include/linux/pageblock-flags.h" 1
# 18 "../include/linux/pageblock-flags.h"
enum pageblock_bits {
 PB_migrate,
 PB_migrate_end = PB_migrate + 3 - 1,

 PB_migrate_skip,





 NR_PAGEBLOCK_BITS
};
# 66 "../include/linux/pageblock-flags.h"
struct page;

unsigned long get_pfnblock_flags_mask(const struct page *page,
    unsigned long pfn,
    unsigned long mask);

void set_pfnblock_flags_mask(struct page *page,
    unsigned long flags,
    unsigned long pfn,
    unsigned long mask);
# 20 "../include/linux/mmzone.h" 2
# 1 "../include/linux/page-flags-layout.h" 1





# 1 "./include/generated/bounds.h" 1
# 7 "../include/linux/page-flags-layout.h" 2
# 21 "../include/linux/mmzone.h" 2

# 1 "../include/linux/mm_types.h" 1




# 1 "../include/linux/mm_types_task.h" 1
# 28 "../include/linux/mm_types_task.h"
enum {
 MM_FILEPAGES,
 MM_ANONPAGES,
 MM_SWAPENTS,
 MM_SHMEMPAGES,
 NR_MM_COUNTERS
};

struct page;

struct page_frag {
 struct page *page;




 __u16 offset;
 __u16 size;

};


struct tlbflush_unmap_batch {
# 71 "../include/linux/mm_types_task.h"
};
# 6 "../include/linux/mm_types.h" 2

# 1 "../include/linux/auxvec.h" 1




# 1 "../include/uapi/linux/auxvec.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/auxvec.h" 1
# 1 "../include/uapi/asm-generic/auxvec.h" 1
# 2 "./arch/hexagon/include/generated/uapi/asm/auxvec.h" 2
# 6 "../include/uapi/linux/auxvec.h" 2
# 6 "../include/linux/auxvec.h" 2
# 8 "../include/linux/mm_types.h" 2
# 1 "../include/linux/kref.h" 1
# 17 "../include/linux/kref.h"
# 1 "../include/linux/refcount.h" 1
# 99 "../include/linux/refcount.h"
# 1 "../include/linux/refcount_types.h" 1
# 15 "../include/linux/refcount_types.h"
typedef struct refcount_struct {
 atomic_t refs;
} refcount_t;
# 100 "../include/linux/refcount.h" 2


struct mutex;





enum refcount_saturation_type {
 REFCOUNT_ADD_NOT_ZERO_OVF,
 REFCOUNT_ADD_OVF,
 REFCOUNT_ADD_UAF,
 REFCOUNT_SUB_UAF,
 REFCOUNT_DEC_LEAK,
};

void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refcount_set(refcount_t *r, int n)
{
 atomic_set(&r->refs, n);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int refcount_read(const refcount_t *r)
{
 return atomic_read(&r->refs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
{
 int old = refcount_read(r);

 do {
  if (!old)
   break;
 } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));

 if (oldp)
  *oldp = old;

 if (__builtin_expect(!!(old < 0 || old + i < 0), 0))
  refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);

 return old;
}
# 176 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool refcount_add_not_zero(int i, refcount_t *r)
{
 return __refcount_add_not_zero(i, r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void __refcount_add(int i, refcount_t *r, int *oldp)
{
 int old = atomic_fetch_add_relaxed(i, &r->refs);

 if (oldp)
  *oldp = old;

 if (__builtin_expect(!!(!old), 0))
  refcount_warn_saturate(r, REFCOUNT_ADD_UAF);
 else if (__builtin_expect(!!(old < 0 || old + i < 0), 0))
  refcount_warn_saturate(r, REFCOUNT_ADD_OVF);
}
# 211 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refcount_add(int i, refcount_t *r)
{
 __refcount_add(i, r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
{
 return __refcount_add_not_zero(1, r, oldp);
}
# 234 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool refcount_inc_not_zero(refcount_t *r)
{
 return __refcount_inc_not_zero(r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __refcount_inc(refcount_t *r, int *oldp)
{
 __refcount_add(1, r, oldp);
}
# 256 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refcount_inc(refcount_t *r)
{
 __refcount_inc(r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp)
{
 int old = atomic_fetch_sub_release(i, &r->refs);

 if (oldp)
  *oldp = old;

 if (old > 0 && old == i) {
  __asm__ __volatile__("": : :"memory");
  return true;
 }

 if (__builtin_expect(!!(old <= 0 || old - i < 0), 0))
  refcount_warn_saturate(r, REFCOUNT_SUB_UAF);

 return false;
}
# 300 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool refcount_sub_and_test(int i, refcount_t *r)
{
 return __refcount_sub_and_test(i, r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool __refcount_dec_and_test(refcount_t *r, int *oldp)
{
 return __refcount_sub_and_test(1, r, oldp);
}
# 323 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool refcount_dec_and_test(refcount_t *r)
{
 return __refcount_dec_and_test(r, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __refcount_dec(refcount_t *r, int *oldp)
{
 int old = atomic_fetch_sub_release(1, &r->refs);

 if (oldp)
  *oldp = old;

 if (__builtin_expect(!!(old <= 1), 0))
  refcount_warn_saturate(r, REFCOUNT_DEC_LEAK);
}
# 349 "../include/linux/refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refcount_dec(refcount_t *r)
{
 __refcount_dec(r, ((void *)0));
}

extern __attribute__((__warn_unused_result__)) bool refcount_dec_if_one(refcount_t *r);
extern __attribute__((__warn_unused_result__)) bool refcount_dec_not_one(refcount_t *r);
extern __attribute__((__warn_unused_result__)) bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) ;
extern __attribute__((__warn_unused_result__)) bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) ;
extern __attribute__((__warn_unused_result__)) bool refcount_dec_and_lock_irqsave(refcount_t *r,
             spinlock_t *lock,
             unsigned long *flags) ;
# 18 "../include/linux/kref.h" 2

struct kref {
 refcount_t refcount;
};







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kref_init(struct kref *kref)
{
 refcount_set(&kref->refcount, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int kref_read(const struct kref *kref)
{
 return refcount_read(&kref->refcount);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kref_get(struct kref *kref)
{
 refcount_inc(&kref->refcount);
}
# 62 "../include/linux/kref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kref_put(struct kref *kref, void (*release)(struct kref *kref))
{
 if (refcount_dec_and_test(&kref->refcount)) {
  release(kref);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kref_put_mutex(struct kref *kref,
     void (*release)(struct kref *kref),
     struct mutex *lock)
{
 if (refcount_dec_and_mutex_lock(&kref->refcount, lock)) {
  release(kref);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kref_put_lock(struct kref *kref,
    void (*release)(struct kref *kref),
    spinlock_t *lock)
{
 if (refcount_dec_and_lock(&kref->refcount, lock)) {
  release(kref);
  return 1;
 }
 return 0;
}
# 109 "../include/linux/kref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kref_get_unless_zero(struct kref *kref)
{
 return refcount_inc_not_zero(&kref->refcount);
}
# 9 "../include/linux/mm_types.h" 2


# 1 "../include/linux/rbtree.h" 1
# 21 "../include/linux/rbtree.h"
# 1 "../include/linux/rbtree_types.h" 1




struct rb_node {
 unsigned long __rb_parent_color;
 struct rb_node *rb_right;
 struct rb_node *rb_left;
} __attribute__((aligned(sizeof(long))));


struct rb_root {
 struct rb_node *rb_node;
};
# 26 "../include/linux/rbtree_types.h"
struct rb_root_cached {
 struct rb_root rb_root;
 struct rb_node *rb_leftmost;
};
# 22 "../include/linux/rbtree.h" 2


# 1 "../include/linux/rcupdate.h" 1
# 32 "../include/linux/rcupdate.h"
# 1 "../include/linux/context_tracking_irq.h" 1
# 13 "../include/linux/context_tracking_irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_irq_enter(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_irq_exit(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_irq_enter_irqson(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_irq_exit_irqson(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_nmi_enter(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ct_nmi_exit(void) { }
# 33 "../include/linux/rcupdate.h" 2





void call_rcu(struct callback_head *head, rcu_callback_t func);
void rcu_barrier_tasks(void);
void rcu_barrier_tasks_rude(void);
void synchronize_rcu(void);

struct rcu_gp_oldstate;
unsigned long get_completed_synchronize_rcu(void);
void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
# 63 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool same_state_synchronize_rcu(unsigned long oldstate1, unsigned long oldstate2)
{
 return oldstate1 == oldstate2;
}
# 89 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __rcu_read_lock(void)
{
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __rcu_read_unlock(void)
{
 __asm__ __volatile__("": : :"memory");
 if (0)
  do { } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_preempt_depth(void)
{
 return 0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void call_rcu_hurry(struct callback_head *head, rcu_callback_t func)
{
 call_rcu(head, func);
}



void rcu_init(void);
extern int rcu_scheduler_active;
void rcu_sched_clock_irq(int user);


void rcu_init_tasks_generic(void);
# 132 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_sysrq_start(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_sysrq_end(void) { }





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_irq_work_resched(void) { }
# 148 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_init_nohz(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_nocb_cpu_offload(int cpu) { return -22; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_nocb_cpu_deoffload(int cpu) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_nocb_flush_deferred_wakeup(void) { }
# 179 "../include/linux/rcupdate.h"
u8 rcu_trc_cmpxchg_need_qs(struct task_struct *t, u8 old, u8 new);
void rcu_tasks_trace_qs_blkd(struct task_struct *t);
# 210 "../include/linux/rcupdate.h"
void exit_tasks_rcu_start(void);
void exit_tasks_rcu_finish(void);
# 232 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_trace_implies_rcu_gp(void) { return true; }
# 286 "../include/linux/rcupdate.h"
# 1 "../include/linux/rcutiny.h" 1
# 17 "../include/linux/rcutiny.h"
struct rcu_gp_oldstate {
 unsigned long rgos_norm;
};
# 29 "../include/linux/rcutiny.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool same_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp1,
         struct rcu_gp_oldstate *rgosp2)
{
 return rgosp1->rgos_norm == rgosp2->rgos_norm;
}

unsigned long get_state_synchronize_rcu(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp)
{
 rgosp->rgos_norm = get_state_synchronize_rcu();
}

unsigned long start_poll_synchronize_rcu(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp)
{
 rgosp->rgos_norm = start_poll_synchronize_rcu();
}

bool poll_state_synchronize_rcu(unsigned long oldstate);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp)
{
 return poll_state_synchronize_rcu(rgosp->rgos_norm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cond_synchronize_rcu(unsigned long oldstate)
{
 do { do { } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp)
{
 cond_synchronize_rcu(rgosp->rgos_norm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long start_poll_synchronize_rcu_expedited(void)
{
 return start_poll_synchronize_rcu();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp)
{
 rgosp->rgos_norm = start_poll_synchronize_rcu_expedited();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cond_synchronize_rcu_expedited(unsigned long oldstate)
{
 cond_synchronize_rcu(oldstate);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cond_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp)
{
 cond_synchronize_rcu_expedited(rgosp->rgos_norm);
}

extern void rcu_barrier(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void synchronize_rcu_expedited(void)
{
 synchronize_rcu();
}







extern void kvfree(const void *addr);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kvfree_call_rcu(struct callback_head *head, void *ptr)
{
 if (head) {
  call_rcu(head, (rcu_callback_t) ((void *) head - ptr));
  return;
 }


 do { do { } while (0); } while (0);
 synchronize_rcu();
 kvfree(ptr);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kvfree_call_rcu(struct callback_head *head, void *ptr)
{
 __kvfree_call_rcu(head, ptr);
}


void rcu_qs(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_softirq_qs(void)
{
 rcu_qs();
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_needs_cpu(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_request_urgent_qs_task(struct task_struct *t) { }





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_virt_note_context_switch(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_cpu_stall_reset(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_jiffies_till_stall_check(void) { return 21 * 300; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_irq_exit_check_preempt(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void exit_rcu(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_preempt_deferred_qs(struct task_struct *t) { }
void rcu_scheduler_starting(void);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_end_inkernel_boot(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_inkernel_boot_has_ended(void) { return true; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_is_watching(void) { return true; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_momentary_dyntick_idle(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kfree_rcu_scheduler_running(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_gp_might_be_stalled(void) { return false; }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_all_qs(void) { __asm__ __volatile__("": : :"memory"); }







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcutree_report_cpu_starting(unsigned int cpu) { }
# 287 "../include/linux/rcupdate.h" 2
# 300 "../include/linux/rcupdate.h"
void init_rcu_head(struct callback_head *head);
void destroy_rcu_head(struct callback_head *head);
void init_rcu_head_on_stack(struct callback_head *head);
void destroy_rcu_head_on_stack(struct callback_head *head);
# 314 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_lockdep_current_cpu_online(void) { return true; }


extern struct lockdep_map rcu_lock_map;
extern struct lockdep_map rcu_bh_lock_map;
extern struct lockdep_map rcu_sched_lock_map;
extern struct lockdep_map rcu_callback_map;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_lock_acquire(struct lockdep_map *map)
{
 lock_acquire(map, 0, 0, 2, 0, ((void *)0), ({ __label__ __here; __here: (unsigned long)&&__here; }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_try_lock_acquire(struct lockdep_map *map)
{
 lock_acquire(map, 0, 1, 2, 0, ((void *)0), ({ __label__ __here; __here: (unsigned long)&&__here; }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_lock_release(struct lockdep_map *map)
{
 lock_release(map, ({ __label__ __here; __here: (unsigned long)&&__here; }));
}

int debug_lockdep_rcu_enabled(void);
int rcu_read_lock_held(void);
int rcu_read_lock_bh_held(void);
int rcu_read_lock_sched_held(void);
int rcu_read_lock_any_held(void);
# 402 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_preempt_sleep_check(void)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (lock_is_held(&rcu_lock_map)) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 405, "Illegal context switch in RCU read-side critical section"); } } while (0);

}
# 423 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool lockdep_assert_rcu_helper(bool c)
{
 return debug_lockdep_rcu_enabled() &&
        (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
        debug_lockdep_rcu_enabled();
}
# 834 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void rcu_read_lock(void)
{
 __rcu_read_lock();
 (void)0;
 rcu_lock_acquire(&rcu_lock_map);
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 840, "rcu_read_lock() used illegally while idle"); } } while (0);

}
# 865 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_unlock(void)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 868, "rcu_read_unlock() used illegally while idle"); } } while (0);

 rcu_lock_release(&rcu_lock_map);
 (void)0;
 __rcu_read_unlock();
}
# 888 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_lock_bh(void)
{
 local_bh_disable();
 (void)0;
 rcu_lock_acquire(&rcu_bh_lock_map);
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 894, "rcu_read_lock_bh() used illegally while idle"); } } while (0);

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_unlock_bh(void)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 905, "rcu_read_unlock_bh() used illegally while idle"); } } while (0);

 rcu_lock_release(&rcu_bh_lock_map);
 (void)0;
 local_bh_enable();
}
# 926 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_lock_sched(void)
{
 __asm__ __volatile__("": : :"memory");
 (void)0;
 rcu_lock_acquire(&rcu_sched_lock_map);
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 932, "rcu_read_lock_sched() used illegally while idle"); } } while (0);

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) void rcu_read_lock_sched_notrace(void)
{
 __asm__ __volatile__("": : :"memory");
 (void)0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_unlock_sched(void)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_is_watching()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcupdate.h", 950, "rcu_read_unlock_sched() used illegally while idle"); } } while (0);

 rcu_lock_release(&rcu_sched_lock_map);
 (void)0;
 __asm__ __volatile__("": : :"memory");
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) void rcu_read_unlock_sched_notrace(void)
{
 (void)0;
 __asm__ __volatile__("": : :"memory");
}
# 1117 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_head_init(struct callback_head *rhp)
{
 rhp->func = (rcu_callback_t)~0L;
}
# 1135 "../include/linux/rcupdate.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
rcu_head_after_call_rcu(struct callback_head *rhp, rcu_callback_t f)
{
 rcu_callback_t func = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_70(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(rhp->func) == sizeof(char) || sizeof(rhp->func) == sizeof(short) || sizeof(rhp->func) == sizeof(int) || sizeof(rhp->func) == sizeof(long)) || sizeof(rhp->func) == sizeof(long long))) __compiletime_assert_70(); } while (0); (*(const volatile typeof( _Generic((rhp->func), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (rhp->func))) *)&(rhp->func)); });

 if (func == f)
  return true;
 ({ bool __ret_do_once = !!(func != (rcu_callback_t)~0L); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rcupdate.h", 1142, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return false;
}


extern int rcu_expedited;
extern int rcu_normal;

typedef struct { void *lock; ; } class_rcu_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rcu_destructor(class_rcu_t *_T) { if (_T->lock) { rcu_read_unlock(); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_rcu_lock_ptr(class_rcu_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_rcu_t class_rcu_constructor(void) { class_rcu_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; do { rcu_read_lock(); (void)0; } while (0); return _t; }
# 25 "../include/linux/rbtree.h" 2
# 39 "../include/linux/rbtree.h"
extern void rb_insert_color(struct rb_node *, struct rb_root *);
extern void rb_erase(struct rb_node *, struct rb_root *);



extern struct rb_node *rb_next(const struct rb_node *);
extern struct rb_node *rb_prev(const struct rb_node *);
extern struct rb_node *rb_first(const struct rb_root *);
extern struct rb_node *rb_last(const struct rb_root *);


extern struct rb_node *rb_first_postorder(const struct rb_root *);
extern struct rb_node *rb_next_postorder(const struct rb_node *);


extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
       struct rb_root *root);
extern void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new,
    struct rb_root *root);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rb_link_node(struct rb_node *node, struct rb_node *parent,
    struct rb_node **rb_link)
{
 node->__rb_parent_color = (unsigned long)parent;
 node->rb_left = node->rb_right = ((void *)0);

 *rb_link = node;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rb_link_node_rcu(struct rb_node *node, struct rb_node *parent,
        struct rb_node **rb_link)
{
 node->__rb_parent_color = (unsigned long)parent;
 node->rb_left = node->rb_right = ((void *)0);

 do { uintptr_t _r_a_p__v = (uintptr_t)(node); ; if (__builtin_constant_p(node) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_71(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((*rb_link)) == sizeof(char) || sizeof((*rb_link)) == sizeof(short) || sizeof((*rb_link)) == sizeof(int) || sizeof((*rb_link)) == sizeof(long)) || sizeof((*rb_link)) == sizeof(long long))) __compiletime_assert_71(); } while (0); do { *(volatile typeof((*rb_link)) *)&((*rb_link)) = ((typeof(*rb_link))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_72(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&*rb_link) == sizeof(char) || sizeof(*&*rb_link) == sizeof(short) || sizeof(*&*rb_link) == sizeof(int) || sizeof(*&*rb_link) == sizeof(long)) || sizeof(*&*rb_link) == sizeof(long long))) __compiletime_assert_72(); } while (0); do { *(volatile typeof(*&*rb_link) *)&(*&*rb_link) = ((typeof(*((typeof(*rb_link))_r_a_p__v)) *)((typeof(*rb_link))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
}
# 108 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rb_insert_color_cached(struct rb_node *node,
       struct rb_root_cached *root,
       bool leftmost)
{
 if (leftmost)
  root->rb_leftmost = node;
 rb_insert_color(node, &root->rb_root);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rb_node *
rb_erase_cached(struct rb_node *node, struct rb_root_cached *root)
{
 struct rb_node *leftmost = ((void *)0);

 if (root->rb_leftmost == node)
  leftmost = root->rb_leftmost = rb_next(node);

 rb_erase(node, &root->rb_root);

 return leftmost;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rb_replace_node_cached(struct rb_node *victim,
       struct rb_node *new,
       struct rb_root_cached *root)
{
 if (root->rb_leftmost == victim)
  root->rb_leftmost = new;
 rb_replace_node(victim, new, &root->rb_root);
}
# 164 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct rb_node *
rb_add_cached(struct rb_node *node, struct rb_root_cached *tree,
       bool (*less)(struct rb_node *, const struct rb_node *))
{
 struct rb_node **link = &tree->rb_root.rb_node;
 struct rb_node *parent = ((void *)0);
 bool leftmost = true;

 while (*link) {
  parent = *link;
  if (less(node, parent)) {
   link = &parent->rb_left;
  } else {
   link = &parent->rb_right;
   leftmost = false;
  }
 }

 rb_link_node(node, parent, link);
 rb_insert_color_cached(node, tree, leftmost);

 return leftmost ? node : ((void *)0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
rb_add(struct rb_node *node, struct rb_root *tree,
       bool (*less)(struct rb_node *, const struct rb_node *))
{
 struct rb_node **link = &tree->rb_node;
 struct rb_node *parent = ((void *)0);

 while (*link) {
  parent = *link;
  if (less(node, parent))
   link = &parent->rb_left;
  else
   link = &parent->rb_right;
 }

 rb_link_node(node, parent, link);
 rb_insert_color(node, tree);
}
# 222 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct rb_node *
rb_find_add(struct rb_node *node, struct rb_root *tree,
     int (*cmp)(struct rb_node *, const struct rb_node *))
{
 struct rb_node **link = &tree->rb_node;
 struct rb_node *parent = ((void *)0);
 int c;

 while (*link) {
  parent = *link;
  c = cmp(node, parent);

  if (c < 0)
   link = &parent->rb_left;
  else if (c > 0)
   link = &parent->rb_right;
  else
   return parent;
 }

 rb_link_node(node, parent, link);
 rb_insert_color(node, tree);
 return ((void *)0);
}
# 255 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct rb_node *
rb_find(const void *key, const struct rb_root *tree,
 int (*cmp)(const void *key, const struct rb_node *))
{
 struct rb_node *node = tree->rb_node;

 while (node) {
  int c = cmp(key, node);

  if (c < 0)
   node = node->rb_left;
  else if (c > 0)
   node = node->rb_right;
  else
   return node;
 }

 return ((void *)0);
}
# 283 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct rb_node *
rb_find_first(const void *key, const struct rb_root *tree,
       int (*cmp)(const void *key, const struct rb_node *))
{
 struct rb_node *node = tree->rb_node;
 struct rb_node *match = ((void *)0);

 while (node) {
  int c = cmp(key, node);

  if (c <= 0) {
   if (!c)
    match = node;
   node = node->rb_left;
  } else if (c > 0) {
   node = node->rb_right;
  }
 }

 return match;
}
# 313 "../include/linux/rbtree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct rb_node *
rb_next_match(const void *key, struct rb_node *node,
       int (*cmp)(const void *key, const struct rb_node *))
{
 node = rb_next(node);
 if (node && cmp(key, node))
  node = ((void *)0);
 return node;
}
# 12 "../include/linux/mm_types.h" 2
# 1 "../include/linux/maple_tree.h" 1
# 77 "../include/linux/maple_tree.h"
struct maple_metadata {
 unsigned char end;
 unsigned char gap;
};
# 103 "../include/linux/maple_tree.h"
struct maple_range_64 {
 struct maple_pnode *parent;
 unsigned long pivot[32 - 1];
 union {
  void *slot[32];
  struct {
   void *pad[32 - 1];
   struct maple_metadata meta;
  };
 };
};
# 124 "../include/linux/maple_tree.h"
struct maple_arange_64 {
 struct maple_pnode *parent;
 unsigned long pivot[21 - 1];
 void *slot[21];
 unsigned long gap[21];
 struct maple_metadata meta;
};

struct maple_alloc {
 unsigned long total;
 unsigned char node_count;
 unsigned int request_count;
 struct maple_alloc *slot[(63 - 2)];
};

struct maple_topiary {
 struct maple_pnode *parent;
 struct maple_enode *next;
};

enum maple_type {
 maple_dense,
 maple_leaf_64,
 maple_range_64,
 maple_arange_64,
};
# 185 "../include/linux/maple_tree.h"
typedef struct lockdep_map *lockdep_map_p;
# 219 "../include/linux/maple_tree.h"
struct maple_tree {
 union {
  spinlock_t ma_lock;
  lockdep_map_p ma_external_lock;
 };
 unsigned int ma_flags;
 void *ma_root;
};
# 280 "../include/linux/maple_tree.h"
struct maple_node {
 union {
  struct {
   struct maple_pnode *parent;
   void *slot[63];
  };
  struct {
   void *pad;
   struct callback_head rcu;
   struct maple_enode *piv_parent;
   unsigned char parent_slot;
   enum maple_type type;
   unsigned char slot_len;
   unsigned int ma_flags;
  };
  struct maple_range_64 mr64;
  struct maple_arange_64 ma64;
  struct maple_alloc alloc;
 };
};
# 308 "../include/linux/maple_tree.h"
struct ma_topiary {
 struct maple_enode *head;
 struct maple_enode *tail;
 struct maple_tree *mtree;
};

void *mtree_load(struct maple_tree *mt, unsigned long index);

int mtree_insert(struct maple_tree *mt, unsigned long index,
  void *entry, gfp_t gfp);
int mtree_insert_range(struct maple_tree *mt, unsigned long first,
  unsigned long last, void *entry, gfp_t gfp);
int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
  void *entry, unsigned long size, unsigned long min,
  unsigned long max, gfp_t gfp);
int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp,
  void *entry, unsigned long range_lo, unsigned long range_hi,
  unsigned long *next, gfp_t gfp);
int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
  void *entry, unsigned long size, unsigned long min,
  unsigned long max, gfp_t gfp);

int mtree_store_range(struct maple_tree *mt, unsigned long first,
        unsigned long last, void *entry, gfp_t gfp);
int mtree_store(struct maple_tree *mt, unsigned long index,
  void *entry, gfp_t gfp);
void *mtree_erase(struct maple_tree *mt, unsigned long index);

int mtree_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);
int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);

void mtree_destroy(struct maple_tree *mt);
void __mt_destroy(struct maple_tree *mt);
# 349 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mtree_empty(const struct maple_tree *mt)
{
 return mt->ma_root == ((void *)0);
}
# 375 "../include/linux/maple_tree.h"
enum maple_status {
 ma_active,
 ma_start,
 ma_root,
 ma_none,
 ma_pause,
 ma_overflow,
 ma_underflow,
 ma_error,
};
# 426 "../include/linux/maple_tree.h"
struct ma_state {
 struct maple_tree *tree;
 unsigned long index;
 unsigned long last;
 struct maple_enode *node;
 unsigned long min;
 unsigned long max;
 struct maple_alloc *alloc;
 enum maple_status status;
 unsigned char depth;
 unsigned char offset;
 unsigned char mas_flags;
 unsigned char end;
};

struct ma_wr_state {
 struct ma_state *mas;
 struct maple_node *node;
 unsigned long r_min;
 unsigned long r_max;
 enum maple_type type;
 unsigned char offset_end;
 unsigned long *pivots;
 unsigned long end_piv;
 void **slots;
 void *entry;
 void *content;
};
# 496 "../include/linux/maple_tree.h"
void *mas_walk(struct ma_state *mas);
void *mas_store(struct ma_state *mas, void *entry);
void *mas_erase(struct ma_state *mas);
int mas_store_gfp(struct ma_state *mas, void *entry, gfp_t gfp);
void mas_store_prealloc(struct ma_state *mas, void *entry);
void *mas_find(struct ma_state *mas, unsigned long max);
void *mas_find_range(struct ma_state *mas, unsigned long max);
void *mas_find_rev(struct ma_state *mas, unsigned long min);
void *mas_find_range_rev(struct ma_state *mas, unsigned long max);
int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp);
int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
  void *entry, unsigned long range_lo, unsigned long range_hi,
  unsigned long *next, gfp_t gfp);

bool mas_nomem(struct ma_state *mas, gfp_t gfp);
void mas_pause(struct ma_state *mas);
void maple_tree_init(void);
void mas_destroy(struct ma_state *mas);
int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);

void *mas_prev(struct ma_state *mas, unsigned long min);
void *mas_prev_range(struct ma_state *mas, unsigned long max);
void *mas_next(struct ma_state *mas, unsigned long max);
void *mas_next_range(struct ma_state *mas, unsigned long max);

int mas_empty_area(struct ma_state *mas, unsigned long min, unsigned long max,
     unsigned long size);




int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
         unsigned long max, unsigned long size);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mas_init(struct ma_state *mas, struct maple_tree *tree,
       unsigned long addr)
{
 memset(mas, 0, sizeof(struct ma_state));
 mas->tree = tree;
 mas->index = mas->last = addr;
 mas->max = (~0UL);
 mas->status = ma_start;
 mas->node = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mas_is_active(struct ma_state *mas)
{
 return mas->status == ma_active;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mas_is_err(struct ma_state *mas)
{
 return mas->status == ma_error;
}
# 561 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void mas_reset(struct ma_state *mas)
{
 mas->status = ma_start;
 mas->node = ((void *)0);
}
# 715 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mas_set_range(struct ma_state *mas, unsigned long start,
  unsigned long last)
{

 ({ int __ret_warn_on = !!(mas_is_active(mas) && (mas->index > start || mas->last < start)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/maple_tree.h", 720, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });

 mas->index = start;
 mas->last = last;
}
# 735 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void mas_set_range(struct ma_state *mas, unsigned long start, unsigned long last)
{
 mas_reset(mas);
 __mas_set_range(mas, start, last);
}
# 751 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mas_set(struct ma_state *mas, unsigned long index)
{

 mas_set_range(mas, index, index);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mt_external_lock(const struct maple_tree *mt)
{
 return (mt->ma_flags & 0x300) == 0x300;
}
# 772 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mt_init_flags(struct maple_tree *mt, unsigned int flags)
{
 mt->ma_flags = flags;
 if (!mt_external_lock(mt))
  do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&mt->ma_lock), "&mt->ma_lock", &__key, LD_WAIT_CONFIG); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(((void *)0)); ; if (__builtin_constant_p(((void *)0)) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_73(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((mt->ma_root)) == sizeof(char) || sizeof((mt->ma_root)) == sizeof(short) || sizeof((mt->ma_root)) == sizeof(int) || sizeof((mt->ma_root)) == sizeof(long)) || sizeof((mt->ma_root)) == sizeof(long long))) __compiletime_assert_73(); } while (0); do { *(volatile typeof((mt->ma_root)) *)&((mt->ma_root)) = ((typeof(mt->ma_root))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_74(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&mt->ma_root) == sizeof(char) || sizeof(*&mt->ma_root) == sizeof(short) || sizeof(*&mt->ma_root) == sizeof(int) || sizeof(*&mt->ma_root) == sizeof(long)) || sizeof(*&mt->ma_root) == sizeof(long long))) __compiletime_assert_74(); } while (0); do { *(volatile typeof(*&mt->ma_root) *)&(*&mt->ma_root) = ((typeof(*((typeof(mt->ma_root))_r_a_p__v)) *)((typeof(mt->ma_root))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
}
# 788 "../include/linux/maple_tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mt_init(struct maple_tree *mt)
{
 mt_init_flags(mt, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mt_in_rcu(struct maple_tree *mt)
{



 return mt->ma_flags & 0x02;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mt_clear_in_rcu(struct maple_tree *mt)
{
 if (!mt_in_rcu(mt))
  return;

 if (mt_external_lock(mt)) {
  ({ int __ret_warn_on = !!(!(!(mt)->ma_external_lock || lock_is_held((mt)->ma_external_lock))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/maple_tree.h", 811, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  mt->ma_flags &= ~0x02;
 } else {
  spin_lock((&(mt)->ma_lock));
  mt->ma_flags &= ~0x02;
  spin_unlock((&(mt)->ma_lock));
 }
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mt_set_in_rcu(struct maple_tree *mt)
{
 if (mt_in_rcu(mt))
  return;

 if (mt_external_lock(mt)) {
  ({ int __ret_warn_on = !!(!(!(mt)->ma_external_lock || lock_is_held((mt)->ma_external_lock))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/maple_tree.h", 830, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  mt->ma_flags |= 0x02;
 } else {
  spin_lock((&(mt)->ma_lock));
  mt->ma_flags |= 0x02;
  spin_unlock((&(mt)->ma_lock));
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int mt_height(const struct maple_tree *mt)
{
 return (mt->ma_flags & 0x7C) >> 0x02;
}

void *mt_find(struct maple_tree *mt, unsigned long *index, unsigned long max);
void *mt_find_after(struct maple_tree *mt, unsigned long *index,
      unsigned long max);
void *mt_prev(struct maple_tree *mt, unsigned long index, unsigned long min);
void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max);
# 13 "../include/linux/mm_types.h" 2
# 1 "../include/linux/rwsem.h" 1
# 48 "../include/linux/rwsem.h"
struct rw_semaphore {
 atomic_long_t count;





 atomic_long_t owner;



 raw_spinlock_t wait_lock;
 struct list_head wait_list;

 void *magic;


 struct lockdep_map dep_map;

};





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rwsem_is_locked(struct rw_semaphore *sem)
{
 return atomic_long_read(&sem->count) != 0UL;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
{
 ({ int __ret_warn_on = !!(atomic_long_read(&sem->count) == 0UL); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rwsem.h", 80, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
{
 ({ int __ret_warn_on = !!(!(atomic_long_read(&sem->count) & (1UL << 0))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rwsem.h", 85, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
}
# 114 "../include/linux/rwsem.h"
extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
    struct lock_class_key *key);
# 130 "../include/linux/rwsem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rwsem_is_contended(struct rw_semaphore *sem)
{
 return !list_empty(&sem->wait_list);
}
# 192 "../include/linux/rwsem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rwsem_assert_held(const struct rw_semaphore *sem)
{
 if (1)
  do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(sem)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rwsem.h", 195, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0);
 else
  rwsem_assert_held_nolockdep(sem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rwsem_assert_held_write(const struct rw_semaphore *sem)
{
 if (1)
  do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held_type(&(sem)->dep_map, (0)))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rwsem.h", 203, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0);
 else
  rwsem_assert_held_write_nolockdep(sem);
}




extern void down_read(struct rw_semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_read_interruptible(struct rw_semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_read_killable(struct rw_semaphore *sem);




extern int down_read_trylock(struct rw_semaphore *sem);




extern void down_write(struct rw_semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_write_killable(struct rw_semaphore *sem);




extern int down_write_trylock(struct rw_semaphore *sem);




extern void up_read(struct rw_semaphore *sem);




extern void up_write(struct rw_semaphore *sem);

typedef struct rw_semaphore * class_rwsem_read_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rwsem_read_destructor(struct rw_semaphore * *p) { struct rw_semaphore * _T = *p; if (_T) { up_read(_T); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rw_semaphore * class_rwsem_read_constructor(struct rw_semaphore * _T) { struct rw_semaphore * t = ({ down_read(_T); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_rwsem_read_lock_ptr(class_rwsem_read_t *_T) { return *_T; }
typedef class_rwsem_read_t class_rwsem_read_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rwsem_read_try_destructor(class_rwsem_read_t *p){ class_rwsem_read_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_rwsem_read_t class_rwsem_read_try_constructor(class_rwsem_read_t _T) { class_rwsem_read_t t = ({ void *_t = _T; if (_T && !(down_read_trylock(_T))) _t = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_rwsem_read_try_lock_ptr(class_rwsem_read_t *_T) { return class_rwsem_read_lock_ptr(_T); }
typedef class_rwsem_read_t class_rwsem_read_intr_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rwsem_read_intr_destructor(class_rwsem_read_t *p){ class_rwsem_read_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_rwsem_read_t class_rwsem_read_intr_constructor(class_rwsem_read_t _T) { class_rwsem_read_t t = ({ void *_t = _T; if (_T && !(down_read_interruptible(_T) == 0)) _t = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_rwsem_read_intr_lock_ptr(class_rwsem_read_t *_T) { return class_rwsem_read_lock_ptr(_T); }

typedef struct rw_semaphore * class_rwsem_write_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rwsem_write_destructor(struct rw_semaphore * *p) { struct rw_semaphore * _T = *p; if (_T) { up_write(_T); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rw_semaphore * class_rwsem_write_constructor(struct rw_semaphore * _T) { struct rw_semaphore * t = ({ down_write(_T); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_rwsem_write_lock_ptr(class_rwsem_write_t *_T) { return *_T; }
typedef class_rwsem_write_t class_rwsem_write_try_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rwsem_write_try_destructor(class_rwsem_write_t *p){ class_rwsem_write_destructor(p); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_rwsem_write_t class_rwsem_write_try_constructor(class_rwsem_write_t _T) { class_rwsem_write_t t = ({ void *_t = _T; if (_T && !(down_write_trylock(_T))) _t = ((void *)0); _t; }); return t; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_rwsem_write_try_lock_ptr(class_rwsem_write_t *_T) { return class_rwsem_write_lock_ptr(_T); }




extern void downgrade_write(struct rw_semaphore *sem);
# 267 "../include/linux/rwsem.h"
extern void down_read_nested(struct rw_semaphore *sem, int subclass);
extern int __attribute__((__warn_unused_result__)) down_read_killable_nested(struct rw_semaphore *sem, int subclass);
extern void down_write_nested(struct rw_semaphore *sem, int subclass);
extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
# 285 "../include/linux/rwsem.h"
extern void down_read_non_owner(struct rw_semaphore *sem);
extern void up_read_non_owner(struct rw_semaphore *sem);
# 14 "../include/linux/mm_types.h" 2
# 1 "../include/linux/completion.h" 1
# 12 "../include/linux/completion.h"
# 1 "../include/linux/swait.h" 1








# 1 "./arch/hexagon/include/generated/asm/current.h" 1
# 10 "../include/linux/swait.h" 2
# 41 "../include/linux/swait.h"
struct task_struct;

struct swait_queue_head {
 raw_spinlock_t lock;
 struct list_head task_list;
};

struct swait_queue {
 struct task_struct *task;
 struct list_head task_list;
};
# 69 "../include/linux/swait.h"
extern void __init_swait_queue_head(struct swait_queue_head *q, const char *name,
        struct lock_class_key *key);
# 121 "../include/linux/swait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int swait_active(struct swait_queue_head *wq)
{
 return !list_empty(&wq->task_list);
}
# 134 "../include/linux/swait.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool swq_has_sleeper(struct swait_queue_head *wq)
{







 __asm__ __volatile__("": : :"memory");
 return swait_active(wq);
}

extern void swake_up_one(struct swait_queue_head *q);
extern void swake_up_all(struct swait_queue_head *q);
extern void swake_up_locked(struct swait_queue_head *q, int wake_flags);

extern void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *wait, int state);
extern long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state);

extern void __finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
# 13 "../include/linux/completion.h" 2
# 26 "../include/linux/completion.h"
struct completion {
 unsigned int done;
 struct swait_queue_head wait;
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void complete_acquire(struct completion *x) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void complete_release(struct completion *x) {}
# 84 "../include/linux/completion.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_completion(struct completion *x)
{
 x->done = 0;
 do { static struct lock_class_key __key; __init_swait_queue_head((&x->wait), "&x->wait", &__key); } while (0);
}
# 97 "../include/linux/completion.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reinit_completion(struct completion *x)
{
 x->done = 0;
}

extern void wait_for_completion(struct completion *);
extern void wait_for_completion_io(struct completion *);
extern int wait_for_completion_interruptible(struct completion *x);
extern int wait_for_completion_killable(struct completion *x);
extern int wait_for_completion_state(struct completion *x, unsigned int state);
extern unsigned long wait_for_completion_timeout(struct completion *x,
         unsigned long timeout);
extern unsigned long wait_for_completion_io_timeout(struct completion *x,
          unsigned long timeout);
extern long wait_for_completion_interruptible_timeout(
 struct completion *x, unsigned long timeout);
extern long wait_for_completion_killable_timeout(
 struct completion *x, unsigned long timeout);
extern bool try_wait_for_completion(struct completion *x);
extern bool completion_done(struct completion *x);

extern void complete(struct completion *);
extern void complete_on_current_cpu(struct completion *x);
extern void complete_all(struct completion *);
# 15 "../include/linux/mm_types.h" 2

# 1 "../include/linux/uprobes.h" 1
# 19 "../include/linux/uprobes.h"
struct vm_area_struct;
struct mm_struct;
struct inode;
struct notifier_block;
struct page;






enum uprobe_filter_ctx {
 UPROBE_FILTER_REGISTER,
 UPROBE_FILTER_UNREGISTER,
 UPROBE_FILTER_MMAP,
};

struct uprobe_consumer {
 int (*handler)(struct uprobe_consumer *self, struct pt_regs *regs);
 int (*ret_handler)(struct uprobe_consumer *self,
    unsigned long func,
    struct pt_regs *regs);
 bool (*filter)(struct uprobe_consumer *self,
    enum uprobe_filter_ctx ctx,
    struct mm_struct *mm);

 struct uprobe_consumer *next;
};
# 145 "../include/linux/uprobes.h"
struct uprobes_state {
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobes_init(void)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *uc)
{
 return -38;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int uprobe_register_refctr(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc)
{
 return -38;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uprobe_apply(struct inode *inode, loff_t offset, struct uprobe_consumer *uc, bool add)
{
 return -38;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
uprobe_unregister(struct inode *inode, loff_t offset, struct uprobe_consumer *uc)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int uprobe_mmap(struct vm_area_struct *vma)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_start_dup_mmap(void)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_end_dup_mmap(void)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_notify_resume(struct pt_regs *regs)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uprobe_deny_signal(void)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_free_utask(struct task_struct *t)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_copy_process(struct task_struct *t, unsigned long flags)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uprobe_clear_state(struct mm_struct *mm)
{
}
# 17 "../include/linux/mm_types.h" 2


# 1 "../include/linux/workqueue.h" 1








# 1 "../include/linux/timer.h" 1





# 1 "../include/linux/ktime.h" 1
# 24 "../include/linux/ktime.h"
# 1 "./arch/hexagon/include/generated/asm/bug.h" 1
# 25 "../include/linux/ktime.h" 2
# 1 "../include/linux/jiffies.h" 1






# 1 "../include/linux/math64.h" 1






# 1 "./arch/hexagon/include/generated/asm/div64.h" 1
# 8 "../include/linux/math64.h" 2
# 1 "../include/vdso/math64.h" 1




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32
__iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
{
 u32 ret = 0;

 while (dividend >= divisor) {


  asm("" : "+rm"(dividend));

  dividend -= divisor;
  ret++;
 }

 *remainder = dividend;

 return ret;
}
# 37 "../include/vdso/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 mul_u32_u32(u32 a, u32 b)
{
 return (u64)a * b;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u64 mul_u64_u32_add_u64_shr(u64 a, u32 mul, u64 b, unsigned int shift)
{
 u32 ah = a >> 32, al = a;
 bool ovf;
 u64 ret;

 ovf = __builtin_add_overflow(mul_u32_u32(al, mul), b, &ret);
 ret >>= shift;
 if (ovf && shift)
  ret += 1ULL << (64 - shift);
 if (ah)
  ret += mul_u32_u32(ah, mul) << (32 - shift);

 return ret;
}
# 9 "../include/linux/math64.h" 2
# 90 "../include/linux/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
{
 *remainder = ({ uint32_t __base = (divisor); uint32_t __rem; (void)(((typeof((dividend)) *)0) == ((uint64_t *)0)); if (__builtin_constant_p(__base) && is_power_of_2(__base)) { __rem = (dividend) & (__base - 1); (dividend) >>= ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); } else if (__builtin_constant_p(__base) && __base != 0) { uint32_t __res_lo, __n_lo = (dividend); (dividend) = ({ uint64_t ___res, ___x, ___t, ___m, ___n = (dividend); uint32_t ___p, ___bias; ___p = 1 << ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); ___m = (~0ULL / __base) * ___p; ___m += (((~0ULL % __base + 1) * ___p) + __base - 1) / __base; ___x = ~0ULL / __base * __base - 1; ___res = ((___m & 0xffffffff) * (___x & 0xffffffff)) >> 32; ___t = ___res += (___m & 0xffffffff) * (___x >> 32); ___res += (___x & 0xffffffff) * (___m >> 32); ___t = (___res < ___t) ? (1ULL << 32) : 0; ___res = (___res >> 32) + ___t; ___res += (___m >> 32) * (___x >> 32); ___res /= ___p; if (~0ULL % (__base / (__base & -__base)) == 0) { ___n /= (__base & -__base); ___m = ~0ULL / (__base / (__base & -__base)); ___p = 1; ___bias = 1; } else if (___res != ___x / __base) { ___bias = 1; ___m = (~0ULL / __base) * ___p; ___m += ((~0ULL % __base + 1) * ___p) / __base; } else { uint32_t ___bits = -(___m & -___m); ___bits |= ___m >> 32; ___bits = (~___bits) << 1; if (!___bits) { ___p /= (___m & -___m); ___m /= (___m & -___m); } else { ___p >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? __ilog2_u32(___bits) : __ilog2_u64(___bits) ); ___m >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? 
__ilog2_u32(___bits) : __ilog2_u64(___bits) ); } ___bias = 0; } ___res = __arch_xprod_64(___m, ___n, ___bias); ___res /= ___p; }); __res_lo = (dividend); __rem = __n_lo - __res_lo * __base; } else if (__builtin_expect(!!(((dividend) >> 32) == 0), 1)) { __rem = (uint32_t)(dividend) % __base; (dividend) = (uint32_t)(dividend) / __base; } else { __rem = __div64_32(&(dividend), __base); } __rem; });
 return dividend;
}



extern s64 div_s64_rem(s64 dividend, s32 divisor, s32 *remainder);



extern u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder);



extern u64 div64_u64(u64 dividend, u64 divisor);



extern s64 div64_s64(s64 dividend, s64 divisor);
# 127 "../include/linux/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 div_u64(u64 dividend, u32 divisor)
{
 u32 remainder;
 return div_u64_rem(dividend, divisor, &remainder);
}
# 142 "../include/linux/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 div_s64(s64 dividend, s32 divisor)
{
 s32 remainder;
 return div_s64_rem(dividend, divisor, &remainder);
}


u32 iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder);
# 180 "../include/linux/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
{
 u32 ah = a >> 32, al = a;
 u64 ret;

 ret = mul_u32_u32(al, mul) >> shift;
 if (ah)
  ret += mul_u32_u32(ah, mul) << (32 - shift);
 return ret;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift)
{
 union {
  u64 ll;
  struct {



   u32 low, high;

  } l;
 } rl, rm, rn, rh, a0, b0;
 u64 c;

 a0.ll = a;
 b0.ll = b;

 rl.ll = mul_u32_u32(a0.l.low, b0.l.low);
 rm.ll = mul_u32_u32(a0.l.low, b0.l.high);
 rn.ll = mul_u32_u32(a0.l.high, b0.l.low);
 rh.ll = mul_u32_u32(a0.l.high, b0.l.high);






 rl.l.high = c = (u64)rl.l.high + rm.l.low + rn.l.low;
 rh.l.low = c = (c >> 32) + rm.l.high + rn.l.high + rh.l.low;
 rh.l.high = (c >> 32) + rh.l.high;





 if (shift == 0)
  return rl.ll;
 if (shift < 64)
  return (rl.ll >> shift) | (rh.ll << (64 - shift));
 return rh.ll >> (shift & 63);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 mul_s64_u64_shr(s64 a, u64 b, unsigned int shift)
{
 u64 ret;





 ret = mul_u64_u64_shr(__builtin_choose_expr( __builtin_types_compatible_p(typeof(a), signed long long) || __builtin_types_compatible_p(typeof(a), unsigned long long), ({ signed long long __x = (a); __x < 0 ? -__x : __x; }), __builtin_choose_expr( __builtin_types_compatible_p(typeof(a), signed long) || __builtin_types_compatible_p(typeof(a), unsigned long), ({ signed long __x = (a); __x < 0 ? -__x : __x; }), __builtin_choose_expr( __builtin_types_compatible_p(typeof(a), signed int) || __builtin_types_compatible_p(typeof(a), unsigned int), ({ signed int __x = (a); __x < 0 ? -__x : __x; }), __builtin_choose_expr( __builtin_types_compatible_p(typeof(a), signed short) || __builtin_types_compatible_p(typeof(a), unsigned short), ({ signed short __x = (a); __x < 0 ? -__x : __x; }), __builtin_choose_expr( __builtin_types_compatible_p(typeof(a), signed char) || __builtin_types_compatible_p(typeof(a), unsigned char), ({ signed char __x = (a); __x < 0 ? -__x : __x; }), __builtin_choose_expr( __builtin_types_compatible_p(typeof(a), char), (char)({ signed char __x = (a); __x<0?-__x:__x; }), ((void)0))))))), b, shift);

 if (a < 0)
  ret = -((s64) ret);

 return ret;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
{
 union {
  u64 ll;
  struct {



   u32 low, high;

  } l;
 } u, rl, rh;

 u.ll = a;
 rl.ll = mul_u32_u32(u.l.low, mul);
 rh.ll = mul_u32_u32(u.l.high, mul) + rl.l.high;


 rl.l.high = ({ uint32_t __base = (divisor); uint32_t __rem; (void)(((typeof((rh.ll)) *)0) == ((uint64_t *)0)); if (__builtin_constant_p(__base) && is_power_of_2(__base)) { __rem = (rh.ll) & (__base - 1); (rh.ll) >>= ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); } else if (__builtin_constant_p(__base) && __base != 0) { uint32_t __res_lo, __n_lo = (rh.ll); (rh.ll) = ({ uint64_t ___res, ___x, ___t, ___m, ___n = (rh.ll); uint32_t ___p, ___bias; ___p = 1 << ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); ___m = (~0ULL / __base) * ___p; ___m += (((~0ULL % __base + 1) * ___p) + __base - 1) / __base; ___x = ~0ULL / __base * __base - 1; ___res = ((___m & 0xffffffff) * (___x & 0xffffffff)) >> 32; ___t = ___res += (___m & 0xffffffff) * (___x >> 32); ___res += (___x & 0xffffffff) * (___m >> 32); ___t = (___res < ___t) ? (1ULL << 32) : 0; ___res = (___res >> 32) + ___t; ___res += (___m >> 32) * (___x >> 32); ___res /= ___p; if (~0ULL % (__base / (__base & -__base)) == 0) { ___n /= (__base & -__base); ___m = ~0ULL / (__base / (__base & -__base)); ___p = 1; ___bias = 1; } else if (___res != ___x / __base) { ___bias = 1; ___m = (~0ULL / __base) * ___p; ___m += ((~0ULL % __base + 1) * ___p) / __base; } else { uint32_t ___bits = -(___m & -___m); ___bits |= ___m >> 32; ___bits = (~___bits) << 1; if (!___bits) { ___p /= (___m & -___m); ___m /= (___m & -___m); } else { ___p >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? __ilog2_u32(___bits) : __ilog2_u64(___bits) ); ___m >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? 
__ilog2_u32(___bits) : __ilog2_u64(___bits) ); } ___bias = 0; } ___res = __arch_xprod_64(___m, ___n, ___bias); ___res /= ___p; }); __res_lo = (rh.ll); __rem = __n_lo - __res_lo * __base; } else if (__builtin_expect(!!(((rh.ll) >> 32) == 0), 1)) { __rem = (uint32_t)(rh.ll) % __base; (rh.ll) = (uint32_t)(rh.ll) / __base; } else { __rem = __div64_32(&(rh.ll), __base); } __rem; });


 ({ uint32_t __base = (divisor); uint32_t __rem; (void)(((typeof((rl.ll)) *)0) == ((uint64_t *)0)); if (__builtin_constant_p(__base) && is_power_of_2(__base)) { __rem = (rl.ll) & (__base - 1); (rl.ll) >>= ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); } else if (__builtin_constant_p(__base) && __base != 0) { uint32_t __res_lo, __n_lo = (rl.ll); (rl.ll) = ({ uint64_t ___res, ___x, ___t, ___m, ___n = (rl.ll); uint32_t ___p, ___bias; ___p = 1 << ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); ___m = (~0ULL / __base) * ___p; ___m += (((~0ULL % __base + 1) * ___p) + __base - 1) / __base; ___x = ~0ULL / __base * __base - 1; ___res = ((___m & 0xffffffff) * (___x & 0xffffffff)) >> 32; ___t = ___res += (___m & 0xffffffff) * (___x >> 32); ___res += (___x & 0xffffffff) * (___m >> 32); ___t = (___res < ___t) ? (1ULL << 32) : 0; ___res = (___res >> 32) + ___t; ___res += (___m >> 32) * (___x >> 32); ___res /= ___p; if (~0ULL % (__base / (__base & -__base)) == 0) { ___n /= (__base & -__base); ___m = ~0ULL / (__base / (__base & -__base)); ___p = 1; ___bias = 1; } else if (___res != ___x / __base) { ___bias = 1; ___m = (~0ULL / __base) * ___p; ___m += ((~0ULL % __base + 1) * ___p) / __base; } else { uint32_t ___bits = -(___m & -___m); ___bits |= ___m >> 32; ___bits = (~___bits) << 1; if (!___bits) { ___p /= (___m & -___m); ___m /= (___m & -___m); } else { ___p >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? __ilog2_u32(___bits) : __ilog2_u64(___bits) ); ___m >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? 
__ilog2_u32(___bits) : __ilog2_u64(___bits) ); } ___bias = 0; } ___res = __arch_xprod_64(___m, ___n, ___bias); ___res /= ___p; }); __res_lo = (rl.ll); __rem = __n_lo - __res_lo * __base; } else if (__builtin_expect(!!(((rl.ll) >> 32) == 0), 1)) { __rem = (uint32_t)(rl.ll) % __base; (rl.ll) = (uint32_t)(rl.ll) / __base; } else { __rem = __div64_32(&(rl.ll), __base); } __rem; });

 rl.l.high = rh.l.low;
 return rl.ll;
}


u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div);
# 369 "../include/linux/math64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 roundup_u64(u64 x, u32 y)
{
 return ({ u32 _tmp = (y); div_u64((x) + _tmp - 1, _tmp); }) * y;
}
# 8 "../include/linux/jiffies.h" 2


# 1 "../include/linux/time.h" 1






# 1 "../include/linux/time64.h" 1





# 1 "../include/vdso/time64.h" 1
# 7 "../include/linux/time64.h" 2

typedef __s64 time64_t;
typedef __u64 timeu64_t;

# 1 "../include/uapi/linux/time.h" 1





# 1 "../include/uapi/linux/time_types.h" 1






struct __kernel_timespec {
 __kernel_time64_t tv_sec;
 long long tv_nsec;
};

struct __kernel_itimerspec {
 struct __kernel_timespec it_interval;
 struct __kernel_timespec it_value;
};
# 25 "../include/uapi/linux/time_types.h"
struct __kernel_old_timeval {
 __kernel_long_t tv_sec;
 __kernel_long_t tv_usec;
};


struct __kernel_old_timespec {
 __kernel_old_time_t tv_sec;
 long tv_nsec;
};

struct __kernel_old_itimerval {
 struct __kernel_old_timeval it_interval;
 struct __kernel_old_timeval it_value;
};

struct __kernel_sock_timeval {
 __s64 tv_sec;
 __s64 tv_usec;
};
# 7 "../include/uapi/linux/time.h" 2
# 33 "../include/uapi/linux/time.h"
struct timezone {
 int tz_minuteswest;
 int tz_dsttime;
};
# 12 "../include/linux/time64.h" 2

struct timespec64 {
 time64_t tv_sec;
 long tv_nsec;
};

struct itimerspec64 {
 struct timespec64 it_interval;
 struct timespec64 it_value;
};
# 46 "../include/linux/time64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int timespec64_equal(const struct timespec64 *a,
       const struct timespec64 *b)
{
 return (a->tv_sec == b->tv_sec) && (a->tv_nsec == b->tv_nsec);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int timespec64_compare(const struct timespec64 *lhs, const struct timespec64 *rhs)
{
 if (lhs->tv_sec < rhs->tv_sec)
  return -1;
 if (lhs->tv_sec > rhs->tv_sec)
  return 1;
 return lhs->tv_nsec - rhs->tv_nsec;
}

extern void set_normalized_timespec64(struct timespec64 *ts, time64_t sec, s64 nsec);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 timespec64_add(struct timespec64 lhs,
      struct timespec64 rhs)
{
 struct timespec64 ts_delta;
 set_normalized_timespec64(&ts_delta, lhs.tv_sec + rhs.tv_sec,
    lhs.tv_nsec + rhs.tv_nsec);
 return ts_delta;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 timespec64_sub(struct timespec64 lhs,
      struct timespec64 rhs)
{
 struct timespec64 ts_delta;
 set_normalized_timespec64(&ts_delta, lhs.tv_sec - rhs.tv_sec,
    lhs.tv_nsec - rhs.tv_nsec);
 return ts_delta;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool timespec64_valid(const struct timespec64 *ts)
{

 if (ts->tv_sec < 0)
  return false;

 if ((unsigned long)ts->tv_nsec >= 1000000000L)
  return false;
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool timespec64_valid_strict(const struct timespec64 *ts)
{
 if (!timespec64_valid(ts))
  return false;

 if ((unsigned long long)ts->tv_sec >= (((s64)~((u64)1 << 63)) / 1000000000L))
  return false;
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool timespec64_valid_settod(const struct timespec64 *ts)
{
 if (!timespec64_valid(ts))
  return false;

 if ((unsigned long long)ts->tv_sec >= ((((s64)~((u64)1 << 63)) / 1000000000L) - (30LL * 365 * 24 *3600)))
  return false;
 return true;
}
# 130 "../include/linux/time64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 timespec64_to_ns(const struct timespec64 *ts)
{

 if (ts->tv_sec >= (((s64)~((u64)1 << 63)) / 1000000000L))
  return ((s64)~((u64)1 << 63));

 if (ts->tv_sec <= ((-((s64)~((u64)1 << 63)) - 1) / 1000000000L))
  return (-((s64)~((u64)1 << 63)) - 1);

 return ((s64) ts->tv_sec * 1000000000L) + ts->tv_nsec;
}







extern struct timespec64 ns_to_timespec64(s64 nsec);
# 158 "../include/linux/time64.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void timespec64_add_ns(struct timespec64 *a, u64 ns)
{
 a->tv_sec += __iter_div_u64_rem(a->tv_nsec + ns, 1000000000L, &ns);
 a->tv_nsec = ns;
}





extern struct timespec64 timespec64_add_safe(const struct timespec64 lhs,
      const struct timespec64 rhs);
# 8 "../include/linux/time.h" 2

extern struct timezone sys_tz;

int get_timespec64(struct timespec64 *ts,
  const struct __kernel_timespec *uts);
int put_timespec64(const struct timespec64 *ts,
  struct __kernel_timespec *uts);
int get_itimerspec64(struct itimerspec64 *it,
   const struct __kernel_itimerspec *uit);
int put_itimerspec64(const struct itimerspec64 *it,
   struct __kernel_itimerspec *uit);

extern time64_t mktime64(const unsigned int year, const unsigned int mon,
   const unsigned int day, const unsigned int hour,
   const unsigned int min, const unsigned int sec);


extern void clear_itimer(void);




extern long do_utimes(int dfd, const char *filename, struct timespec64 *times, int flags);





struct tm {




 int tm_sec;

 int tm_min;

 int tm_hour;

 int tm_mday;

 int tm_mon;

 long tm_year;

 int tm_wday;

 int tm_yday;
};

void time64_to_tm(time64_t totalsecs, int offset, struct tm *result);

# 1 "../include/linux/time32.h" 1
# 13 "../include/linux/time32.h"
# 1 "../include/linux/timex.h" 1
# 56 "../include/linux/timex.h"
# 1 "../include/uapi/linux/timex.h" 1
# 56 "../include/uapi/linux/timex.h"
# 1 "../include/linux/time.h" 1
# 57 "../include/uapi/linux/timex.h" 2
# 97 "../include/uapi/linux/timex.h"
struct __kernel_timex_timeval {
 __kernel_time64_t tv_sec;
 long long tv_usec;
};

struct __kernel_timex {
 unsigned int modes;
 int :32;
 long long offset;
 long long freq;
 long long maxerror;
 long long esterror;
 int status;
 int :32;
 long long constant;
 long long precision;
 long long tolerance;


 struct __kernel_timex_timeval time;
 long long tick;

 long long ppsfreq;
 long long jitter;
 int shift;
 int :32;
 long long stabil;
 long long jitcnt;
 long long calcnt;
 long long errcnt;
 long long stbcnt;

 int tai;

 int :32; int :32; int :32; int :32;
 int :32; int :32; int :32; int :32;
 int :32; int :32; int :32;
};
# 57 "../include/linux/timex.h" 2








unsigned long random_get_entropy_fallback(void);

# 1 "../arch/hexagon/include/asm/timex.h" 1








# 1 "../include/asm-generic/timex.h" 1







typedef unsigned long cycles_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) cycles_t get_cycles(void)
{
 return 0;
}
# 10 "../arch/hexagon/include/asm/timex.h" 2







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int read_current_timer(unsigned long *timer_val)
{
 *timer_val = __vmgettime();
 return 0;
}
# 68 "../include/linux/timex.h" 2
# 147 "../include/linux/timex.h"
extern unsigned long tick_usec;
extern unsigned long tick_nsec;
# 162 "../include/linux/timex.h"
extern int do_adjtimex(struct __kernel_timex *);
extern int do_clock_adjtime(const clockid_t which_clock, struct __kernel_timex * ktx);

extern void hardpps(const struct timespec64 *, const struct timespec64 *);

int read_current_timer(unsigned long *timer_val);
# 14 "../include/linux/time32.h" 2

# 1 "../include/vdso/time32.h" 1




typedef s32 old_time32_t;

struct old_timespec32 {
 old_time32_t tv_sec;
 s32 tv_nsec;
};

struct old_timeval32 {
 old_time32_t tv_sec;
 s32 tv_usec;
};
# 16 "../include/linux/time32.h" 2

struct old_itimerspec32 {
 struct old_timespec32 it_interval;
 struct old_timespec32 it_value;
};

struct old_utimbuf32 {
 old_time32_t actime;
 old_time32_t modtime;
};

struct old_timex32 {
 u32 modes;
 s32 offset;
 s32 freq;
 s32 maxerror;
 s32 esterror;
 s32 status;
 s32 constant;
 s32 precision;
 s32 tolerance;
 struct old_timeval32 time;
 s32 tick;
 s32 ppsfreq;
 s32 jitter;
 s32 shift;
 s32 stabil;
 s32 jitcnt;
 s32 calcnt;
 s32 errcnt;
 s32 stbcnt;
 s32 tai;

 s32:32; s32:32; s32:32; s32:32;
 s32:32; s32:32; s32:32; s32:32;
 s32:32; s32:32; s32:32;
};

extern int get_old_timespec32(struct timespec64 *, const void *);
extern int put_old_timespec32(const struct timespec64 *, void *);
extern int get_old_itimerspec32(struct itimerspec64 *its,
   const struct old_itimerspec32 *uits);
extern int put_old_itimerspec32(const struct itimerspec64 *its,
   struct old_itimerspec32 *uits);
struct __kernel_timex;
int get_old_timex32(struct __kernel_timex *, const struct old_timex32 *);
int put_old_timex32(struct old_timex32 *, const struct __kernel_timex *);







extern struct __kernel_old_timeval ns_to_kernel_old_timeval(s64 nsec);
# 61 "../include/linux/time.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool itimerspec64_valid(const struct itimerspec64 *its)
{
 if (!timespec64_valid(&(its->it_interval)) ||
  !timespec64_valid(&(its->it_value)))
  return false;

 return true;
}
# 100 "../include/linux/time.h"
# 1 "../include/vdso/time.h" 1






struct timens_offset {
 s64 sec;
 u64 nsec;
};
# 101 "../include/linux/time.h" 2
# 11 "../include/linux/jiffies.h" 2

# 1 "../include/vdso/jiffies.h" 1
# 13 "../include/linux/jiffies.h" 2

# 1 "./include/generated/timeconst.h" 1
# 15 "../include/linux/jiffies.h" 2
# 62 "../include/linux/jiffies.h"
extern int register_refined_jiffies(long clock_tick_rate);
# 85 "../include/linux/jiffies.h"
extern u64 jiffies_64;
extern unsigned long volatile jiffies;


u64 get_jiffies_64(void);
# 336 "../include/linux/jiffies.h"
extern unsigned long preset_lpj;
# 437 "../include/linux/jiffies.h"
extern unsigned int jiffies_to_msecs(const unsigned long j);
extern unsigned int jiffies_to_usecs(const unsigned long j);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 jiffies_to_nsecs(const unsigned long j)
{
 return (u64)jiffies_to_usecs(j) * 1000L;
}

extern u64 jiffies64_to_nsecs(u64 j);
extern u64 jiffies64_to_msecs(u64 j);

extern unsigned long __msecs_to_jiffies(const unsigned int m);
# 483 "../include/linux/jiffies.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long _msecs_to_jiffies(const unsigned int m)
{
 if (300 > 1000L && m > jiffies_to_msecs(((((long)(~0UL >> 1)) >> 1)-1)))
  return ((((long)(~0UL >> 1)) >> 1)-1);

 return (0x9999999AULL * m + 0x1CCCCCCCCULL) >> 33;
}
# 518 "../include/linux/jiffies.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long msecs_to_jiffies(const unsigned int m)
{
 if (__builtin_constant_p(m)) {
  if ((int)m < 0)
   return ((((long)(~0UL >> 1)) >> 1)-1);
  return _msecs_to_jiffies(m);
 } else {
  return __msecs_to_jiffies(m);
 }
}

extern unsigned long __usecs_to_jiffies(const unsigned int u);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long _usecs_to_jiffies(const unsigned int u)
{
 return (0x9D495183ULL * u + 0x7FFCB923A29ULL)
  >> 43;
}
# 567 "../include/linux/jiffies.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long usecs_to_jiffies(const unsigned int u)
{
 if (__builtin_constant_p(u)) {
  if (u > jiffies_to_usecs(((((long)(~0UL >> 1)) >> 1)-1)))
   return ((((long)(~0UL >> 1)) >> 1)-1);
  return _usecs_to_jiffies(u);
 } else {
  return __usecs_to_jiffies(u);
 }
}

extern unsigned long timespec64_to_jiffies(const struct timespec64 *value);
extern void jiffies_to_timespec64(const unsigned long jiffies,
      struct timespec64 *value);
extern clock_t jiffies_to_clock_t(unsigned long x);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) clock_t jiffies_delta_to_clock_t(long delta)
{
 return jiffies_to_clock_t(__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((0L) - (delta)) * 0l)) : (int *)8))), ((0L) > (delta) ? (0L) : (delta)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(0L))(-1)) < ( typeof(0L))1)) * 0l)) : (int *)8))), (((typeof(0L))(-1)) < ( typeof(0L))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(delta))(-1)) < ( typeof(delta))1)) * 0l)) : (int *)8))), (((typeof(delta))(-1)) < ( typeof(delta))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((0L) + 0))(-1)) < ( typeof((0L) + 0))1)) * 0l)) : (int *)8))), (((typeof((0L) + 0))(-1)) < ( typeof((0L) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((delta) + 0))(-1)) < ( typeof((delta) + 0))1)) * 0l)) : (int *)8))), (((typeof((delta) + 0))(-1)) < ( typeof((delta) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(0L) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(0L))(-1)) < ( typeof(0L))1)) * 0l)) : (int *)8))), (((typeof(0L))(-1)) < ( typeof(0L))1), 0), 0L, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(delta) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(delta))(-1)) < ( typeof(delta))1)) * 0l)) : (int *)8))), (((typeof(delta))(-1)) < ( typeof(delta))1), 0), delta, -1) >= 0)), "max" "(" "0L" ", " "delta" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_75 = (0L); __auto_type __UNIQUE_ID_y_76 = (delta); ((__UNIQUE_ID_x_75) > (__UNIQUE_ID_y_76) ? (__UNIQUE_ID_x_75) : (__UNIQUE_ID_y_76)); }); })));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int jiffies_delta_to_msecs(long delta)
{
 return jiffies_to_msecs(__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((0L) - (delta)) * 0l)) : (int *)8))), ((0L) > (delta) ? (0L) : (delta)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(0L))(-1)) < ( typeof(0L))1)) * 0l)) : (int *)8))), (((typeof(0L))(-1)) < ( typeof(0L))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(delta))(-1)) < ( typeof(delta))1)) * 0l)) : (int *)8))), (((typeof(delta))(-1)) < ( typeof(delta))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((0L) + 0))(-1)) < ( typeof((0L) + 0))1)) * 0l)) : (int *)8))), (((typeof((0L) + 0))(-1)) < ( typeof((0L) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((delta) + 0))(-1)) < ( typeof((delta) + 0))1)) * 0l)) : (int *)8))), (((typeof((delta) + 0))(-1)) < ( typeof((delta) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(0L) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(0L))(-1)) < ( typeof(0L))1)) * 0l)) : (int *)8))), (((typeof(0L))(-1)) < ( typeof(0L))1), 0), 0L, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(delta) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(delta))(-1)) < ( typeof(delta))1)) * 0l)) : (int *)8))), (((typeof(delta))(-1)) < ( typeof(delta))1), 0), delta, -1) >= 0)), "max" "(" "0L" ", " "delta" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_77 = (0L); __auto_type __UNIQUE_ID_y_78 = (delta); ((__UNIQUE_ID_x_77) > (__UNIQUE_ID_y_78) ? (__UNIQUE_ID_x_77) : (__UNIQUE_ID_y_78)); }); })));
}

extern unsigned long clock_t_to_jiffies(unsigned long x);
extern u64 jiffies_64_to_clock_t(u64 x);
extern u64 nsec_to_clock_t(u64 x);
extern u64 nsecs_to_jiffies64(u64 n);
extern unsigned long nsecs_to_jiffies(u64 n);
# 26 "../include/linux/ktime.h" 2
# 36 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_set(const s64 secs, const unsigned long nsecs)
{
 if (__builtin_expect(!!(secs >= (((s64)~((u64)1 << 63)) / 1000000000L)), 0))
  return ((s64)~((u64)1 << 63));

 return secs * 1000000000L + (s64)nsecs;
}
# 69 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t timespec64_to_ktime(struct timespec64 ts)
{
 return ktime_set(ts.tv_sec, ts.tv_nsec);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_to_ns(const ktime_t kt)
{
 return kt;
}
# 93 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ktime_compare(const ktime_t cmp1, const ktime_t cmp2)
{
 if (cmp1 < cmp2)
  return -1;
 if (cmp1 > cmp2)
  return 1;
 return 0;
}
# 109 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ktime_after(const ktime_t cmp1, const ktime_t cmp2)
{
 return ktime_compare(cmp1, cmp2) > 0;
}
# 121 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)
{
 return ktime_compare(cmp1, cmp2) < 0;
}


extern s64 __ktime_divns(const ktime_t kt, s64 div);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_divns(const ktime_t kt, s64 div)
{




 do { if (__builtin_expect(!!(div < 0), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/ktime.h", 134, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 if (__builtin_constant_p(div) && !(div >> 32)) {
  s64 ns = kt;
  u64 tmp = ns < 0 ? -ns : ns;

  ({ uint32_t __base = (div); uint32_t __rem; (void)(((typeof((tmp)) *)0) == ((uint64_t *)0)); if (__builtin_constant_p(__base) && is_power_of_2(__base)) { __rem = (tmp) & (__base - 1); (tmp) >>= ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); } else if (__builtin_constant_p(__base) && __base != 0) { uint32_t __res_lo, __n_lo = (tmp); (tmp) = ({ uint64_t ___res, ___x, ___t, ___m, ___n = (tmp); uint32_t ___p, ___bias; ___p = 1 << ( __builtin_constant_p(__base) ? ((__base) < 2 ? 0 : 63 - __builtin_clzll(__base)) : (sizeof(__base) <= 4) ? __ilog2_u32(__base) : __ilog2_u64(__base) ); ___m = (~0ULL / __base) * ___p; ___m += (((~0ULL % __base + 1) * ___p) + __base - 1) / __base; ___x = ~0ULL / __base * __base - 1; ___res = ((___m & 0xffffffff) * (___x & 0xffffffff)) >> 32; ___t = ___res += (___m & 0xffffffff) * (___x >> 32); ___res += (___x & 0xffffffff) * (___m >> 32); ___t = (___res < ___t) ? (1ULL << 32) : 0; ___res = (___res >> 32) + ___t; ___res += (___m >> 32) * (___x >> 32); ___res /= ___p; if (~0ULL % (__base / (__base & -__base)) == 0) { ___n /= (__base & -__base); ___m = ~0ULL / (__base / (__base & -__base)); ___p = 1; ___bias = 1; } else if (___res != ___x / __base) { ___bias = 1; ___m = (~0ULL / __base) * ___p; ___m += ((~0ULL % __base + 1) * ___p) / __base; } else { uint32_t ___bits = -(___m & -___m); ___bits |= ___m >> 32; ___bits = (~___bits) << 1; if (!___bits) { ___p /= (___m & -___m); ___m /= (___m & -___m); } else { ___p >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? __ilog2_u32(___bits) : __ilog2_u64(___bits) ); ___m >>= ( __builtin_constant_p(___bits) ? ((___bits) < 2 ? 0 : 63 - __builtin_clzll(___bits)) : (sizeof(___bits) <= 4) ? 
__ilog2_u32(___bits) : __ilog2_u64(___bits) ); } ___bias = 0; } ___res = __arch_xprod_64(___m, ___n, ___bias); ___res /= ___p; }); __res_lo = (tmp); __rem = __n_lo - __res_lo * __base; } else if (__builtin_expect(!!(((tmp) >> 32) == 0), 1)) { __rem = (uint32_t)(tmp) % __base; (tmp) = (uint32_t)(tmp) / __base; } else { __rem = __div64_32(&(tmp), __base); } __rem; });
  return ns < 0 ? -tmp : tmp;
 } else {
  return __ktime_divns(kt, div);
 }
}
# 157 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_to_us(const ktime_t kt)
{
 return ktime_divns(kt, 1000L);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_to_ms(const ktime_t kt)
{
 return ktime_divns(kt, 1000000L);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_us_delta(const ktime_t later, const ktime_t earlier)
{
       return ktime_to_us(((later) - (earlier)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 ktime_ms_delta(const ktime_t later, const ktime_t earlier)
{
 return ktime_to_ms(((later) - (earlier)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_add_us(const ktime_t kt, const u64 usec)
{
 return ((kt) + (usec * 1000L));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_add_ms(const ktime_t kt, const u64 msec)
{
 return ((kt) + (msec * 1000000L));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_sub_us(const ktime_t kt, const u64 usec)
{
 return ((kt) - (usec * 1000L));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_sub_ms(const ktime_t kt, const u64 msec)
{
 return ((kt) - (msec * 1000000L));
}

extern ktime_t ktime_add_safe(const ktime_t lhs, const ktime_t rhs);
# 207 "../include/linux/ktime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool ktime_to_timespec64_cond(const ktime_t kt,
             struct timespec64 *ts)
{
 if (kt) {
  *ts = ns_to_timespec64((kt));
  return true;
 } else {
  return false;
 }
}

# 1 "../include/vdso/ktime.h" 1
# 219 "../include/linux/ktime.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ns_to_ktime(u64 ns)
{
 return ns;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ms_to_ktime(u64 ms)
{
 return ms * 1000000L;
}

# 1 "../include/linux/timekeeping.h" 1





# 1 "../include/linux/clocksource_ids.h" 1





enum clocksource_ids {
 CSID_GENERIC = 0,
 CSID_ARM_ARCH_COUNTER,
 CSID_X86_TSC_EARLY,
 CSID_X86_TSC,
 CSID_X86_KVM_CLK,
 CSID_X86_ART,
 CSID_MAX,
};
# 7 "../include/linux/timekeeping.h" 2
# 1 "../include/linux/ktime.h" 1
# 8 "../include/linux/timekeeping.h" 2



void timekeeping_init(void);
extern int timekeeping_suspended;


extern void legacy_timer_tick(unsigned long ticks);




extern int do_settimeofday64(const struct timespec64 *ts);
extern int do_sys_settimeofday64(const struct timespec64 *tv,
     const struct timezone *tz);
# 42 "../include/linux/timekeeping.h"
extern void ktime_get_raw_ts64(struct timespec64 *ts);
extern void ktime_get_ts64(struct timespec64 *ts);
extern void ktime_get_real_ts64(struct timespec64 *tv);
extern void ktime_get_coarse_ts64(struct timespec64 *ts);
extern void ktime_get_coarse_real_ts64(struct timespec64 *ts);

void getboottime64(struct timespec64 *ts);




extern time64_t ktime_get_seconds(void);
extern time64_t __ktime_get_real_seconds(void);
extern time64_t ktime_get_real_seconds(void);





enum tk_offsets {
 TK_OFFS_REAL,
 TK_OFFS_BOOT,
 TK_OFFS_TAI,
 TK_OFFS_MAX,
};

extern ktime_t ktime_get(void);
extern ktime_t ktime_get_with_offset(enum tk_offsets offs);
extern ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs);
extern ktime_t ktime_mono_to_any(ktime_t tmono, enum tk_offsets offs);
extern ktime_t ktime_get_raw(void);
extern u32 ktime_get_resolution_ns(void);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_real(void)
{
 return ktime_get_with_offset(TK_OFFS_REAL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_coarse_real(void)
{
 return ktime_get_coarse_with_offset(TK_OFFS_REAL);
}
# 98 "../include/linux/timekeeping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_boottime(void)
{
 return ktime_get_with_offset(TK_OFFS_BOOT);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_coarse_boottime(void)
{
 return ktime_get_coarse_with_offset(TK_OFFS_BOOT);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_clocktai(void)
{
 return ktime_get_with_offset(TK_OFFS_TAI);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_coarse_clocktai(void)
{
 return ktime_get_coarse_with_offset(TK_OFFS_TAI);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_get_coarse(void)
{
 struct timespec64 ts;

 ktime_get_coarse_ts64(&ts);
 return timespec64_to_ktime(ts);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_coarse_ns(void)
{
 return ktime_to_ns(ktime_get_coarse());
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_coarse_real_ns(void)
{
 return ktime_to_ns(ktime_get_coarse_real());
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_coarse_boottime_ns(void)
{
 return ktime_to_ns(ktime_get_coarse_boottime());
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_coarse_clocktai_ns(void)
{
 return ktime_to_ns(ktime_get_coarse_clocktai());
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t ktime_mono_to_real(ktime_t mono)
{
 return ktime_mono_to_any(mono, TK_OFFS_REAL);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_ns(void)
{
 return ktime_to_ns(ktime_get());
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_real_ns(void)
{
 return ktime_to_ns(ktime_get_real());
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_boottime_ns(void)
{
 return ktime_to_ns(ktime_get_boottime());
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_clocktai_ns(void)
{
 return ktime_to_ns(ktime_get_clocktai());
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ktime_get_raw_ns(void)
{
 return ktime_to_ns(ktime_get_raw());
}

extern u64 ktime_get_mono_fast_ns(void);
extern u64 ktime_get_raw_fast_ns(void);
extern u64 ktime_get_boot_fast_ns(void);
extern u64 ktime_get_tai_fast_ns(void);
extern u64 ktime_get_real_fast_ns(void);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ktime_get_boottime_ts64(struct timespec64 *ts)
{
 *ts = ns_to_timespec64((ktime_get_boottime()));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ktime_get_coarse_boottime_ts64(struct timespec64 *ts)
{
 *ts = ns_to_timespec64((ktime_get_coarse_boottime()));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) time64_t ktime_get_boottime_seconds(void)
{
 return ktime_divns(ktime_get_coarse_boottime(), 1000000000L);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ktime_get_clocktai_ts64(struct timespec64 *ts)
{
 *ts = ns_to_timespec64((ktime_get_clocktai()));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ktime_get_coarse_clocktai_ts64(struct timespec64 *ts)
{
 *ts = ns_to_timespec64((ktime_get_coarse_clocktai()));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) time64_t ktime_get_clocktai_seconds(void)
{
 return ktime_divns(ktime_get_coarse_clocktai(), 1000000000L);
}




extern bool timekeeping_rtc_skipsuspend(void);
extern bool timekeeping_rtc_skipresume(void);

extern void timekeeping_inject_sleeptime64(const struct timespec64 *delta);







struct ktime_timestamps {
 u64 mono;
 u64 boot;
 u64 real;
};
# 283 "../include/linux/timekeeping.h"
struct system_time_snapshot {
 u64 cycles;
 ktime_t real;
 ktime_t raw;
 enum clocksource_ids cs_id;
 unsigned int clock_was_set_seq;
 u8 cs_was_changed_seq;
};
# 299 "../include/linux/timekeeping.h"
struct system_device_crosststamp {
 ktime_t device;
 ktime_t sys_realtime;
 ktime_t sys_monoraw;
};
# 315 "../include/linux/timekeeping.h"
struct system_counterval_t {
 u64 cycles;
 enum clocksource_ids cs_id;
 bool use_nsecs;
};

extern bool ktime_real_to_base_clock(ktime_t treal,
         enum clocksource_ids base_id, u64 *cycles);
extern bool timekeeping_clocksource_has_base(enum clocksource_ids id);




extern int get_device_system_crosststamp(
   int (*get_time_fn)(ktime_t *device_time,
    struct system_counterval_t *system_counterval,
    void *ctx),
   void *ctx,
   struct system_time_snapshot *history,
   struct system_device_crosststamp *xtstamp);




extern void ktime_get_snapshot(struct system_time_snapshot *systime_snapshot);


extern void ktime_get_fast_timestamps(struct ktime_timestamps *snap);




extern int persistent_clock_is_local;

extern void read_persistent_clock64(struct timespec64 *ts);
void read_persistent_wall_and_boot_offset(struct timespec64 *wall_clock,
       struct timespec64 *boot_offset);
# 231 "../include/linux/ktime.h" 2
# 7 "../include/linux/timer.h" 2

# 1 "../include/linux/debugobjects.h" 1







enum debug_obj_state {
 ODEBUG_STATE_NONE,
 ODEBUG_STATE_INIT,
 ODEBUG_STATE_INACTIVE,
 ODEBUG_STATE_ACTIVE,
 ODEBUG_STATE_DESTROYED,
 ODEBUG_STATE_NOTAVAILABLE,
 ODEBUG_STATE_MAX,
};

struct debug_obj_descr;
# 28 "../include/linux/debugobjects.h"
struct debug_obj {
 struct hlist_node node;
 enum debug_obj_state state;
 unsigned int astate;
 void *object;
 const struct debug_obj_descr *descr;
};
# 55 "../include/linux/debugobjects.h"
struct debug_obj_descr {
 const char *name;
 void *(*debug_hint)(void *addr);
 bool (*is_static_object)(void *addr);
 bool (*fixup_init)(void *addr, enum debug_obj_state state);
 bool (*fixup_activate)(void *addr, enum debug_obj_state state);
 bool (*fixup_destroy)(void *addr, enum debug_obj_state state);
 bool (*fixup_free)(void *addr, enum debug_obj_state state);
 bool (*fixup_assert_init)(void *addr, enum debug_obj_state state);
};


extern void debug_object_init (void *addr, const struct debug_obj_descr *descr);
extern void
debug_object_init_on_stack(void *addr, const struct debug_obj_descr *descr);
extern int debug_object_activate (void *addr, const struct debug_obj_descr *descr);
extern void debug_object_deactivate(void *addr, const struct debug_obj_descr *descr);
extern void debug_object_destroy (void *addr, const struct debug_obj_descr *descr);
extern void debug_object_free (void *addr, const struct debug_obj_descr *descr);
extern void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr);






extern void
debug_object_active_state(void *addr, const struct debug_obj_descr *descr,
     unsigned int expect, unsigned int next);

extern void debug_objects_early_init(void);
extern void debug_objects_mem_init(void);
# 110 "../include/linux/debugobjects.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
debug_check_no_obj_freed(const void *address, unsigned long size) { }
# 9 "../include/linux/timer.h" 2

# 1 "../include/linux/timer_types.h" 1







struct timer_list {




 struct hlist_node entry;
 unsigned long expires;
 void (*function)(struct timer_list *);
 u32 flags;


 struct lockdep_map lockdep_map;

};
# 11 "../include/linux/timer.h" 2
# 70 "../include/linux/timer.h"
void init_timer_key(struct timer_list *timer,
      void (*func)(struct timer_list *), unsigned int flags,
      const char *name, struct lock_class_key *key);


extern void init_timer_on_stack_key(struct timer_list *timer,
        void (*func)(struct timer_list *),
        unsigned int flags, const char *name,
        struct lock_class_key *key);
# 127 "../include/linux/timer.h"
extern void destroy_timer_on_stack(struct timer_list *timer);
# 145 "../include/linux/timer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int timer_pending(const struct timer_list * timer)
{
 return !hlist_unhashed_lockless(&timer->entry);
}

extern void add_timer_on(struct timer_list *timer, int cpu);
extern int mod_timer(struct timer_list *timer, unsigned long expires);
extern int mod_timer_pending(struct timer_list *timer, unsigned long expires);
extern int timer_reduce(struct timer_list *timer, unsigned long expires);







extern void add_timer(struct timer_list *timer);
extern void add_timer_local(struct timer_list *timer);
extern void add_timer_global(struct timer_list *timer);

extern int try_to_del_timer_sync(struct timer_list *timer);
extern int timer_delete_sync(struct timer_list *timer);
extern int timer_delete(struct timer_list *timer);
extern int timer_shutdown_sync(struct timer_list *timer);
extern int timer_shutdown(struct timer_list *timer);
# 183 "../include/linux/timer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int del_timer_sync(struct timer_list *timer)
{
 return timer_delete_sync(timer);
}
# 200 "../include/linux/timer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int del_timer(struct timer_list *timer)
{
 return timer_delete(timer);
}

extern void init_timers(void);
struct hrtimer;
extern enum hrtimer_restart it_real_fn(struct hrtimer *);

unsigned long __round_jiffies(unsigned long j, int cpu);
unsigned long __round_jiffies_relative(unsigned long j, int cpu);
unsigned long round_jiffies(unsigned long j);
unsigned long round_jiffies_relative(unsigned long j);

unsigned long __round_jiffies_up(unsigned long j, int cpu);
unsigned long __round_jiffies_up_relative(unsigned long j, int cpu);
unsigned long round_jiffies_up(unsigned long j);
unsigned long round_jiffies_up_relative(unsigned long j);
# 10 "../include/linux/workqueue.h" 2







# 1 "../include/linux/workqueue_types.h" 1
# 10 "../include/linux/workqueue_types.h"
struct workqueue_struct;

struct work_struct;
typedef void (*work_func_t)(struct work_struct *work);
void delayed_work_timer_fn(struct timer_list *t);

struct work_struct {
 atomic_long_t data;
 struct list_head entry;
 work_func_t func;

 struct lockdep_map lockdep_map;

};
# 18 "../include/linux/workqueue.h" 2







enum work_bits {
 WORK_STRUCT_PENDING_BIT = 0,
 WORK_STRUCT_INACTIVE_BIT,
 WORK_STRUCT_PWQ_BIT,
 WORK_STRUCT_LINKED_BIT,



 WORK_STRUCT_FLAG_BITS,


 WORK_STRUCT_COLOR_SHIFT = WORK_STRUCT_FLAG_BITS,
 WORK_STRUCT_COLOR_BITS = 4,
# 48 "../include/linux/workqueue.h"
 WORK_STRUCT_PWQ_SHIFT = WORK_STRUCT_COLOR_SHIFT + WORK_STRUCT_COLOR_BITS,
# 57 "../include/linux/workqueue.h"
 WORK_OFFQ_FLAG_SHIFT = WORK_STRUCT_FLAG_BITS,
 WORK_OFFQ_BH_BIT = WORK_OFFQ_FLAG_SHIFT,
 WORK_OFFQ_FLAG_END,
 WORK_OFFQ_FLAG_BITS = WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT,

 WORK_OFFQ_DISABLE_SHIFT = WORK_OFFQ_FLAG_SHIFT + WORK_OFFQ_FLAG_BITS,
 WORK_OFFQ_DISABLE_BITS = 16,






 WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_DISABLE_SHIFT + WORK_OFFQ_DISABLE_BITS,
 WORK_OFFQ_LEFT = 32 - WORK_OFFQ_POOL_SHIFT,
 WORK_OFFQ_POOL_BITS = WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
};

enum work_flags {
 WORK_STRUCT_PENDING = 1 << WORK_STRUCT_PENDING_BIT,
 WORK_STRUCT_INACTIVE = 1 << WORK_STRUCT_INACTIVE_BIT,
 WORK_STRUCT_PWQ = 1 << WORK_STRUCT_PWQ_BIT,
 WORK_STRUCT_LINKED = 1 << WORK_STRUCT_LINKED_BIT,



 WORK_STRUCT_STATIC = 0,

};

enum wq_misc_consts {
 WORK_NR_COLORS = (1 << WORK_STRUCT_COLOR_BITS),


 WORK_CPU_UNBOUND = 1,


 WORK_BUSY_PENDING = 1 << 0,
 WORK_BUSY_RUNNING = 1 << 1,


 WORKER_DESC_LEN = 32,
};
# 113 "../include/linux/workqueue.h"
struct delayed_work {
 struct work_struct work;
 struct timer_list timer;


 struct workqueue_struct *wq;
 int cpu;
};

struct rcu_work {
 struct work_struct work;
 struct callback_head rcu;


 struct workqueue_struct *wq;
};

enum wq_affn_scope {
 WQ_AFFN_DFL,
 WQ_AFFN_CPU,
 WQ_AFFN_SMT,
 WQ_AFFN_CACHE,
 WQ_AFFN_NUMA,
 WQ_AFFN_SYSTEM,

 WQ_AFFN_NR_TYPES,
};






struct workqueue_attrs {



 int nice;
# 159 "../include/linux/workqueue.h"
 cpumask_var_t cpumask;
# 171 "../include/linux/workqueue.h"
 cpumask_var_t __pod_cpumask;
# 182 "../include/linux/workqueue.h"
 bool affn_strict;
# 203 "../include/linux/workqueue.h"
 enum wq_affn_scope affn_scope;




 bool ordered;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct delayed_work *to_delayed_work(struct work_struct *work)
{
 return ({ void *__mptr = (void *)(work); _Static_assert(__builtin_types_compatible_p(typeof(*(work)), typeof(((struct delayed_work *)0)->work)) || __builtin_types_compatible_p(typeof(*(work)), typeof(void)), "pointer type mismatch in container_of()"); ((struct delayed_work *)(__mptr - __builtin_offsetof(struct delayed_work, work))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rcu_work *to_rcu_work(struct work_struct *work)
{
 return ({ void *__mptr = (void *)(work); _Static_assert(__builtin_types_compatible_p(typeof(*(work)), typeof(((struct rcu_work *)0)->work)) || __builtin_types_compatible_p(typeof(*(work)), typeof(void)), "pointer type mismatch in container_of()"); ((struct rcu_work *)(__mptr - __builtin_offsetof(struct rcu_work, work))); });
}

struct execute_work {
 struct work_struct work;
};
# 268 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __init_work(struct work_struct *work, int onstack) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void destroy_work_on_stack(struct work_struct *work) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void destroy_delayed_work_on_stack(struct delayed_work *work) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int work_static(struct work_struct *work) { return 0; }
# 369 "../include/linux/workqueue.h"
enum wq_flags {
 WQ_BH = 1 << 0,
 WQ_UNBOUND = 1 << 1,
 WQ_FREEZABLE = 1 << 2,
 WQ_MEM_RECLAIM = 1 << 3,
 WQ_HIGHPRI = 1 << 4,
 WQ_CPU_INTENSIVE = 1 << 5,
 WQ_SYSFS = 1 << 6,
# 403 "../include/linux/workqueue.h"
 WQ_POWER_EFFICIENT = 1 << 7,

 __WQ_DESTROYING = 1 << 15,
 __WQ_DRAINING = 1 << 16,
 __WQ_ORDERED = 1 << 17,
 __WQ_LEGACY = 1 << 18,


 __WQ_BH_ALLOWS = WQ_BH | WQ_HIGHPRI,
};

enum wq_consts {
 WQ_MAX_ACTIVE = 512,
 WQ_UNBOUND_MAX_ACTIVE = WQ_MAX_ACTIVE,
 WQ_DFL_ACTIVE = WQ_MAX_ACTIVE / 2,






 WQ_DFL_MIN_ACTIVE = 8,
};
# 458 "../include/linux/workqueue.h"
extern struct workqueue_struct *system_wq;
extern struct workqueue_struct *system_highpri_wq;
extern struct workqueue_struct *system_long_wq;
extern struct workqueue_struct *system_unbound_wq;
extern struct workqueue_struct *system_freezable_wq;
extern struct workqueue_struct *system_power_efficient_wq;
extern struct workqueue_struct *system_freezable_power_efficient_wq;
extern struct workqueue_struct *system_bh_wq;
extern struct workqueue_struct *system_bh_highpri_wq;

void workqueue_softirq_action(bool highpri);
void workqueue_softirq_dead(unsigned int cpu);
# 507 "../include/linux/workqueue.h"
__attribute__((__format__(printf, 1, 4))) struct workqueue_struct *
alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
# 537 "../include/linux/workqueue.h"
extern void destroy_workqueue(struct workqueue_struct *wq);

struct workqueue_attrs *alloc_workqueue_attrs(void);
void free_workqueue_attrs(struct workqueue_attrs *attrs);
int apply_workqueue_attrs(struct workqueue_struct *wq,
     const struct workqueue_attrs *attrs);
extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);

extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
   struct work_struct *work);
extern bool queue_work_node(int node, struct workqueue_struct *wq,
       struct work_struct *work);
extern bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
   struct delayed_work *work, unsigned long delay);
extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
   struct delayed_work *dwork, unsigned long delay);
extern bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork);

extern void __flush_workqueue(struct workqueue_struct *wq);
extern void drain_workqueue(struct workqueue_struct *wq);

extern int schedule_on_each_cpu(work_func_t func);

int execute_in_process_context(work_func_t fn, struct execute_work *);

extern bool flush_work(struct work_struct *work);
extern bool cancel_work(struct work_struct *work);
extern bool cancel_work_sync(struct work_struct *work);

extern bool flush_delayed_work(struct delayed_work *dwork);
extern bool cancel_delayed_work(struct delayed_work *dwork);
extern bool cancel_delayed_work_sync(struct delayed_work *dwork);

extern bool disable_work(struct work_struct *work);
extern bool disable_work_sync(struct work_struct *work);
extern bool enable_work(struct work_struct *work);

extern bool disable_delayed_work(struct delayed_work *dwork);
extern bool disable_delayed_work_sync(struct delayed_work *dwork);
extern bool enable_delayed_work(struct delayed_work *dwork);

extern bool flush_rcu_work(struct rcu_work *rwork);

extern void workqueue_set_max_active(struct workqueue_struct *wq,
         int max_active);
extern void workqueue_set_min_active(struct workqueue_struct *wq,
         int min_active);
extern struct work_struct *current_work(void);
extern bool current_is_workqueue_rescuer(void);
extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
extern unsigned int work_busy(struct work_struct *work);
extern __attribute__((__format__(printf, 1, 2))) void set_worker_desc(const char *fmt, ...);
extern void print_worker_info(const char *log_lvl, struct task_struct *task);
extern void show_all_workqueues(void);
extern void show_freezable_workqueues(void);
extern void show_one_workqueue(struct workqueue_struct *wq);
extern void wq_worker_comm(char *buf, size_t size, struct task_struct *task);
# 618 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool queue_work(struct workqueue_struct *wq,
         struct work_struct *work)
{
 return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
# 632 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool queue_delayed_work(struct workqueue_struct *wq,
          struct delayed_work *dwork,
          unsigned long delay)
{
 return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
# 647 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mod_delayed_work(struct workqueue_struct *wq,
        struct delayed_work *dwork,
        unsigned long delay)
{
 return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
# 661 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool schedule_work_on(int cpu, struct work_struct *work)
{
 return queue_work_on(cpu, system_wq, work);
}
# 680 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool schedule_work(struct work_struct *work)
{
 return queue_work(system_wq, work);
}
# 701 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool enable_and_queue_work(struct workqueue_struct *wq,
      struct work_struct *work)
{
 if (enable_work(work)) {
  queue_work(wq, work);
  return true;
 }
 return false;
}
# 718 "../include/linux/workqueue.h"
extern void __warn_flushing_systemwide_wq(void)
 __attribute__((__warning__("Please avoid flushing system-wide workqueues.")));
# 759 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
         unsigned long delay)
{
 return queue_delayed_work_on(cpu, system_wq, dwork, delay);
}
# 773 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool schedule_delayed_work(struct delayed_work *dwork,
      unsigned long delay)
{
 return queue_delayed_work(system_wq, dwork, delay);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
{
 return fn(arg);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long work_on_cpu_safe(int cpu, long (*fn)(void *), void *arg)
{
 return fn(arg);
}
# 824 "../include/linux/workqueue.h"
int workqueue_sysfs_register(struct workqueue_struct *wq);
# 833 "../include/linux/workqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wq_watchdog_touch(int cpu) { }
# 842 "../include/linux/workqueue.h"
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) workqueue_init_early(void);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) workqueue_init(void);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) workqueue_init_topology(void);
# 20 "../include/linux/mm_types.h" 2

# 1 "../include/linux/percpu_counter.h" 1
# 14 "../include/linux/percpu_counter.h"
# 1 "../include/linux/percpu.h" 1




# 1 "../include/linux/alloc_tag.h" 1








# 1 "../include/linux/codetag.h" 1
# 10 "../include/linux/codetag.h"
struct codetag_iterator;
struct codetag_type;
struct codetag_module;
struct seq_buf;
struct module;






struct codetag {
 unsigned int flags;
 unsigned int lineno;
 const char *modname;
 const char *function;
 const char *filename;
} __attribute__((__aligned__(8)));

union codetag_ref {
 struct codetag *ct;
};

struct codetag_type_desc {
 const char *section;
 size_t tag_size;
 void (*module_load)(struct codetag_type *cttype,
       struct codetag_module *cmod);
 bool (*module_unload)(struct codetag_type *cttype,
         struct codetag_module *cmod);
};

struct codetag_iterator {
 struct codetag_type *cttype;
 struct codetag_module *cmod;
 unsigned long mod_id;
 struct codetag *ct;
};
# 63 "../include/linux/codetag.h"
void codetag_lock_module_list(struct codetag_type *cttype, bool lock);
bool codetag_trylock_module_list(struct codetag_type *cttype);
struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype);
struct codetag *codetag_next_ct(struct codetag_iterator *iter);

void codetag_to_text(struct seq_buf *out, struct codetag *ct);

struct codetag_type *
codetag_register_type(const struct codetag_type_desc *desc);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void codetag_load_module(struct module *mod) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool codetag_unload_module(struct module *mod) { return true; }
# 10 "../include/linux/alloc_tag.h" 2


# 1 "./arch/hexagon/include/generated/asm/percpu.h" 1
# 13 "../include/linux/alloc_tag.h" 2


# 1 "../include/linux/static_key.h" 1
# 16 "../include/linux/alloc_tag.h" 2


struct alloc_tag_counters {
 u64 bytes;
 u64 calls;
};






struct alloc_tag {
 struct codetag ct;
 struct alloc_tag_counters *counters;
} __attribute__((__aligned__(8)));
# 50 "../include/linux/alloc_tag.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_codetag_empty(union codetag_ref *ref) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_codetag_empty(union codetag_ref *ref) {}
# 195 "../include/linux/alloc_tag.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_alloc_profiling_enabled(void) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
     size_t bytes) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
# 6 "../include/linux/percpu.h" 2
# 1 "../include/linux/mmdebug.h" 1







struct page;
struct vm_area_struct;
struct mm_struct;
struct vma_iterator;

void dump_page(const struct page *page, const char *reason);
void dump_vma(const struct vm_area_struct *vma);
void dump_mm(const struct mm_struct *mm);
void vma_iter_dump_tree(const struct vma_iterator *vmi);
# 7 "../include/linux/percpu.h" 2





# 1 "../include/linux/sched.h" 1
# 10 "../include/linux/sched.h"
# 1 "../include/uapi/linux/sched.h" 1
# 92 "../include/uapi/linux/sched.h"
struct clone_args {
 __u64 __attribute__((aligned(8))) flags;
 __u64 __attribute__((aligned(8))) pidfd;
 __u64 __attribute__((aligned(8))) child_tid;
 __u64 __attribute__((aligned(8))) parent_tid;
 __u64 __attribute__((aligned(8))) exit_signal;
 __u64 __attribute__((aligned(8))) stack;
 __u64 __attribute__((aligned(8))) stack_size;
 __u64 __attribute__((aligned(8))) tls;
 __u64 __attribute__((aligned(8))) set_tid;
 __u64 __attribute__((aligned(8))) set_tid_size;
 __u64 __attribute__((aligned(8))) cgroup;
};
# 11 "../include/linux/sched.h" 2

# 1 "./arch/hexagon/include/generated/asm/current.h" 1
# 13 "../include/linux/sched.h" 2








# 1 "../include/linux/pid_types.h" 1




enum pid_type {
 PIDTYPE_PID,
 PIDTYPE_TGID,
 PIDTYPE_PGID,
 PIDTYPE_SID,
 PIDTYPE_MAX,
};

struct pid_namespace;
extern struct pid_namespace init_pid_ns;
# 22 "../include/linux/sched.h" 2
# 1 "../include/linux/sem_types.h" 1




struct sem_undo_list;

struct sysv_sem {

 struct sem_undo_list *undo_list;

};
# 23 "../include/linux/sched.h" 2
# 1 "../include/linux/shm.h" 1






# 1 "./arch/hexagon/include/generated/asm/shmparam.h" 1
# 1 "../include/asm-generic/shmparam.h" 1
# 2 "./arch/hexagon/include/generated/asm/shmparam.h" 2
# 8 "../include/linux/shm.h" 2

struct file;
struct task_struct;


struct sysv_shm {
 struct list_head shm_clist;
};

long do_shmat(int shmid, char *shmaddr, int shmflg, unsigned long *addr,
       unsigned long shmlba);
void exit_shm(struct task_struct *task);
# 24 "../include/linux/sched.h" 2
# 1 "../include/linux/kmsan_types.h" 1
# 18 "../include/linux/kmsan_types.h"
struct kmsan_context_state {
 char param_tls[800];
 char retval_tls[800];
 char va_arg_tls[800];
 char va_arg_origin_tls[800];
 u64 va_arg_overflow_size_tls;
 char param_origin_tls[800];
 u32 retval_origin_tls;
};




struct kmsan_ctx {
 struct kmsan_context_state cstate;
 int kmsan_in_runtime;
 unsigned int depth;
};
# 25 "../include/linux/sched.h" 2

# 1 "../include/linux/plist_types.h" 1






struct plist_head {
 struct list_head node_list;
};

struct plist_node {
 int prio;
 struct list_head prio_list;
 struct list_head node_list;
};
# 27 "../include/linux/sched.h" 2
# 1 "../include/linux/hrtimer_types.h" 1





# 1 "../include/linux/timerqueue_types.h" 1







struct timerqueue_node {
 struct rb_node node;
 ktime_t expires;
};

struct timerqueue_head {
 struct rb_root_cached rb_root;
};
# 7 "../include/linux/hrtimer_types.h" 2

struct hrtimer_clock_base;




enum hrtimer_restart {
 HRTIMER_NORESTART,
 HRTIMER_RESTART,
};
# 39 "../include/linux/hrtimer_types.h"
struct hrtimer {
 struct timerqueue_node node;
 ktime_t _softexpires;
 enum hrtimer_restart (*function)(struct hrtimer *);
 struct hrtimer_clock_base *base;
 u8 state;
 u8 is_rel;
 u8 is_soft;
 u8 is_hard;
};
# 28 "../include/linux/sched.h" 2

# 1 "../include/linux/seccomp_types.h" 1
# 30 "../include/linux/seccomp_types.h"
struct seccomp { };
struct seccomp_filter { };
# 30 "../include/linux/sched.h" 2


# 1 "../include/linux/resource.h" 1




# 1 "../include/uapi/linux/resource.h" 1
# 24 "../include/uapi/linux/resource.h"
struct rusage {
 struct __kernel_old_timeval ru_utime;
 struct __kernel_old_timeval ru_stime;
 __kernel_long_t ru_maxrss;
 __kernel_long_t ru_ixrss;
 __kernel_long_t ru_idrss;
 __kernel_long_t ru_isrss;
 __kernel_long_t ru_minflt;
 __kernel_long_t ru_majflt;
 __kernel_long_t ru_nswap;
 __kernel_long_t ru_inblock;
 __kernel_long_t ru_oublock;
 __kernel_long_t ru_msgsnd;
 __kernel_long_t ru_msgrcv;
 __kernel_long_t ru_nsignals;
 __kernel_long_t ru_nvcsw;
 __kernel_long_t ru_nivcsw;
};

struct rlimit {
 __kernel_ulong_t rlim_cur;
 __kernel_ulong_t rlim_max;
};



struct rlimit64 {
 __u64 rlim_cur;
 __u64 rlim_max;
};
# 85 "../include/uapi/linux/resource.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/resource.h" 1
# 1 "../include/asm-generic/resource.h" 1




# 1 "../include/uapi/asm-generic/resource.h" 1
# 6 "../include/asm-generic/resource.h" 2
# 2 "./arch/hexagon/include/generated/uapi/asm/resource.h" 2
# 86 "../include/uapi/linux/resource.h" 2
# 6 "../include/linux/resource.h" 2


struct task_struct;

void getrusage(struct task_struct *p, int who, struct rusage *ru);
# 33 "../include/linux/sched.h" 2
# 1 "../include/linux/latencytop.h" 1
# 14 "../include/linux/latencytop.h"
struct task_struct;
# 43 "../include/linux/latencytop.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
account_scheduler_latency(struct task_struct *task, int usecs, int inter)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_tsk_latency_tracing(struct task_struct *p)
{
}
# 34 "../include/linux/sched.h" 2
# 1 "../include/linux/sched/prio.h" 1
# 32 "../include/linux/sched/prio.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long nice_to_rlimit(long nice)
{
 return (19 - nice + 1);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long rlimit_to_nice(long prio)
{
 return (19 - prio + 1);
}
# 35 "../include/linux/sched.h" 2
# 1 "../include/linux/sched/types.h" 1
# 17 "../include/linux/sched/types.h"
struct task_cputime {
 u64 stime;
 u64 utime;
 unsigned long long sum_exec_runtime;
};
# 36 "../include/linux/sched.h" 2
# 1 "../include/linux/signal_types.h" 1
# 10 "../include/linux/signal_types.h"
# 1 "../include/uapi/linux/signal.h" 1




# 1 "../arch/hexagon/include/uapi/asm/signal.h" 1
# 23 "../arch/hexagon/include/uapi/asm/signal.h"
extern unsigned long __rt_sigtramp_template[2];

void do_signal(struct pt_regs *regs);

# 1 "../include/asm-generic/signal.h" 1




# 1 "../include/uapi/asm-generic/signal.h" 1
# 61 "../include/uapi/asm-generic/signal.h"
typedef struct {
 unsigned long sig[(64 / (8 * 4))];
} sigset_t;


typedef unsigned long old_sigset_t;

# 1 "../include/uapi/asm-generic/signal-defs.h" 1
# 82 "../include/uapi/asm-generic/signal-defs.h"
typedef void __signalfn_t(int);
typedef __signalfn_t *__sighandler_t;

typedef void __restorefn_t(void);
typedef __restorefn_t *__sigrestore_t;
# 69 "../include/uapi/asm-generic/signal.h" 2
# 85 "../include/uapi/asm-generic/signal.h"
typedef struct sigaltstack {
 void *ss_sp;
 int ss_flags;
 __kernel_size_t ss_size;
} stack_t;
# 6 "../include/asm-generic/signal.h" 2



# 1 "../arch/hexagon/include/uapi/asm/sigcontext.h" 1
# 23 "../arch/hexagon/include/uapi/asm/sigcontext.h"
# 1 "../arch/hexagon/include/uapi/asm/user.h" 1
# 13 "../arch/hexagon/include/uapi/asm/user.h"
struct user_regs_struct {
 unsigned long r0;
 unsigned long r1;
 unsigned long r2;
 unsigned long r3;
 unsigned long r4;
 unsigned long r5;
 unsigned long r6;
 unsigned long r7;
 unsigned long r8;
 unsigned long r9;
 unsigned long r10;
 unsigned long r11;
 unsigned long r12;
 unsigned long r13;
 unsigned long r14;
 unsigned long r15;
 unsigned long r16;
 unsigned long r17;
 unsigned long r18;
 unsigned long r19;
 unsigned long r20;
 unsigned long r21;
 unsigned long r22;
 unsigned long r23;
 unsigned long r24;
 unsigned long r25;
 unsigned long r26;
 unsigned long r27;
 unsigned long r28;
 unsigned long r29;
 unsigned long r30;
 unsigned long r31;
 unsigned long sa0;
 unsigned long lc0;
 unsigned long sa1;
 unsigned long lc1;
 unsigned long m0;
 unsigned long m1;
 unsigned long usr;
 unsigned long p3_0;
 unsigned long gp;
 unsigned long ugp;
 unsigned long pc;
 unsigned long cause;
 unsigned long badva;

 unsigned long cs0;
 unsigned long cs1;
 unsigned long pad1;
};
# 24 "../arch/hexagon/include/uapi/asm/sigcontext.h" 2






struct sigcontext {
 struct user_regs_struct sc_regs;
} __attribute__((__aligned__(8)));
# 10 "../include/asm-generic/signal.h" 2
# 28 "../arch/hexagon/include/uapi/asm/signal.h" 2
# 6 "../include/uapi/linux/signal.h" 2
# 1 "./arch/hexagon/include/generated/uapi/asm/siginfo.h" 1
# 1 "../include/uapi/asm-generic/siginfo.h" 1







typedef union sigval {
 int sival_int;
 void *sival_ptr;
} sigval_t;
# 37 "../include/uapi/asm-generic/siginfo.h"
union __sifields {

 struct {
  __kernel_pid_t _pid;
  __kernel_uid32_t _uid;
 } _kill;


 struct {
  __kernel_timer_t _tid;
  int _overrun;
  sigval_t _sigval;
  int _sys_private;
 } _timer;


 struct {
  __kernel_pid_t _pid;
  __kernel_uid32_t _uid;
  sigval_t _sigval;
 } _rt;


 struct {
  __kernel_pid_t _pid;
  __kernel_uid32_t _uid;
  int _status;
  __kernel_clock_t _utime;
  __kernel_clock_t _stime;
 } _sigchld;


 struct {
  void *_addr;



  union {

   int _trapno;




   short _addr_lsb;

   struct {
    char _dummy_bnd[(__alignof__(void *) < sizeof(short) ? sizeof(short) : __alignof__(void *))];
    void *_lower;
    void *_upper;
   } _addr_bnd;

   struct {
    char _dummy_pkey[(__alignof__(void *) < sizeof(short) ? sizeof(short) : __alignof__(void *))];
    __u32 _pkey;
   } _addr_pkey;

   struct {
    unsigned long _data;
    __u32 _type;
    __u32 _flags;
   } _perf;
  };
 } _sigfault;


 struct {
  long _band;
  int _fd;
 } _sigpoll;


 struct {
  void *_call_addr;
  int _syscall;
  unsigned int _arch;
 } _sigsys;
};
# 134 "../include/uapi/asm-generic/siginfo.h"
typedef struct siginfo {
 union {
  struct { int si_signo; int si_errno; int si_code; union __sifields _sifields; };
  int _si_pad[128/sizeof(int)];
 };
} siginfo_t;
# 336 "../include/uapi/asm-generic/siginfo.h"
typedef struct sigevent {
 sigval_t sigev_value;
 int sigev_signo;
 int sigev_notify;
 union {
  int _pad[((64 - (sizeof(int) * 2 + sizeof(sigval_t))) / sizeof(int))];
   int _tid;

  struct {
   void (*_function)(sigval_t);
   void *_attribute;
  } _sigev_thread;
 } _sigev_un;
} sigevent_t;
# 2 "./arch/hexagon/include/generated/uapi/asm/siginfo.h" 2
# 7 "../include/uapi/linux/signal.h" 2
# 11 "../include/linux/signal_types.h" 2

typedef struct kernel_siginfo {
 struct { int si_signo; int si_errno; int si_code; union __sifields _sifields; };
} kernel_siginfo_t;

struct ucounts;





struct sigqueue {
 struct list_head list;
 int flags;
 kernel_siginfo_t info;
 struct ucounts *ucounts;
};




struct sigpending {
 struct list_head list;
 sigset_t signal;
};

struct sigaction {

 __sighandler_t sa_handler;
 unsigned long sa_flags;







 sigset_t sa_mask;
};

struct k_sigaction {
 struct sigaction sa;



};
# 67 "../include/linux/signal_types.h"
struct ksignal {
 struct k_sigaction ka;
 kernel_siginfo_t info;
 int sig;
};
# 37 "../include/linux/sched.h" 2
# 1 "../include/linux/syscall_user_dispatch_types.h" 1
# 18 "../include/linux/syscall_user_dispatch_types.h"
struct syscall_user_dispatch {};
# 38 "../include/linux/sched.h" 2

# 1 "../include/linux/netdevice_xmit.h" 1




struct netdev_xmit {
 u16 recursion;
 u8 more;

 u8 skip_txqueue;

};
# 40 "../include/linux/sched.h" 2
# 1 "../include/linux/task_io_accounting.h" 1
# 12 "../include/linux/task_io_accounting.h"
struct task_io_accounting {
# 46 "../include/linux/task_io_accounting.h"
};
# 41 "../include/linux/sched.h" 2
# 1 "../include/linux/posix-timers_types.h" 1
# 41 "../include/linux/posix-timers_types.h"
struct posix_cputimer_base {
 u64 nextevt;
 struct timerqueue_head tqhead;
};
# 56 "../include/linux/posix-timers_types.h"
struct posix_cputimers {
 struct posix_cputimer_base bases[3];
 unsigned int timers_active;
 unsigned int expiry_active;
};







struct posix_cputimers_work {
 struct callback_head work;
 struct mutex mutex;
 unsigned int scheduled;
};
# 42 "../include/linux/sched.h" 2

# 1 "../include/uapi/linux/rseq.h" 1
# 16 "../include/uapi/linux/rseq.h"
enum rseq_cpu_id_state {
 RSEQ_CPU_ID_UNINITIALIZED = -1,
 RSEQ_CPU_ID_REGISTRATION_FAILED = -2,
};

enum rseq_flags {
 RSEQ_FLAG_UNREGISTER = (1 << 0),
};

enum rseq_cs_flags_bit {
 RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT = 0,
 RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT = 1,
 RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT = 2,
};

enum rseq_cs_flags {
 RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT =
  (1U << RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT),
 RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL =
  (1U << RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT),
 RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE =
  (1U << RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT),
};






struct rseq_cs {

 __u32 version;

 __u32 flags;
 __u64 start_ip;

 __u64 post_commit_offset;
 __u64 abort_ip;
} __attribute__((aligned(4 * sizeof(__u64))));







struct rseq {
# 75 "../include/uapi/linux/rseq.h"
 __u32 cpu_id_start;
# 90 "../include/uapi/linux/rseq.h"
 __u32 cpu_id;
# 112 "../include/uapi/linux/rseq.h"
 __u64 rseq_cs;
# 132 "../include/uapi/linux/rseq.h"
 __u32 flags;







 __u32 node_id;
# 149 "../include/uapi/linux/rseq.h"
 __u32 mm_cid;




 char end[];
} __attribute__((aligned(4 * sizeof(__u64))));
# 44 "../include/linux/sched.h" 2

# 1 "../include/linux/kcsan.h" 1
# 71 "../include/linux/kcsan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcsan_init(void) { }
# 46 "../include/linux/sched.h" 2
# 1 "../include/linux/rv.h" 1
# 47 "../include/linux/sched.h" 2
# 1 "../include/linux/livepatch_sched.h" 1
# 25 "../include/linux/livepatch_sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void klp_sched_try_switch(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __klp_sched_try_switch(void) {}
# 48 "../include/linux/sched.h" 2
# 1 "../include/linux/uidgid_types.h" 1






typedef struct {
 uid_t val;
} kuid_t;

typedef struct {
 gid_t val;
} kgid_t;
# 49 "../include/linux/sched.h" 2
# 1 "./arch/hexagon/include/generated/asm/kmap_size.h" 1
# 1 "../include/asm-generic/kmap_size.h" 1
# 2 "./arch/hexagon/include/generated/asm/kmap_size.h" 2
# 50 "../include/linux/sched.h" 2


struct audit_context;
struct bio_list;
struct blk_plug;
struct bpf_local_storage;
struct bpf_run_ctx;
struct bpf_net_context;
struct capture_control;
struct cfs_rq;
struct fs_struct;
struct futex_pi_state;
struct io_context;
struct io_uring_task;
struct mempolicy;
struct nameidata;
struct nsproxy;
struct perf_event_context;
struct pid_namespace;
struct pipe_inode_info;
struct rcu_node;
struct reclaim_state;
struct robust_list_head;
struct root_domain;
struct rq;
struct sched_attr;
struct sched_dl_entity;
struct seq_file;
struct sighand_struct;
struct signal_struct;
struct task_delay_info;
struct task_group;
struct task_struct;
struct user_event_mm;

# 1 "../include/linux/sched/ext.h" 1
# 202 "../include/linux/sched/ext.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_ext_free(struct task_struct *p) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_scx_info(const char *log_lvl, struct task_struct *p) {}
# 86 "../include/linux/sched.h" 2
# 304 "../include/linux/sched.h"
enum {
 TASK_COMM_LEN = 16,
};

extern void sched_tick(void);



extern long schedule_timeout(long timeout);
extern long schedule_timeout_interruptible(long timeout);
extern long schedule_timeout_killable(long timeout);
extern long schedule_timeout_uninterruptible(long timeout);
extern long schedule_timeout_idle(long timeout);
           void schedule(void);
extern void schedule_preempt_disabled(void);
           void preempt_schedule_irq(void);




extern int __attribute__((__warn_unused_result__)) io_schedule_prepare(void);
extern void io_schedule_finish(int token);
extern long io_schedule_timeout(long timeout);
extern void io_schedule(void);
# 338 "../include/linux/sched.h"
struct prev_cputime {

 u64 utime;
 u64 stime;
 raw_spinlock_t lock;

};

enum vtime_state {

 VTIME_INACTIVE = 0,

 VTIME_IDLE,

 VTIME_SYS,

 VTIME_USER,

 VTIME_GUEST,
};

struct vtime {
 seqcount_t seqcount;
 unsigned long long starttime;
 enum vtime_state state;
 unsigned int cpu;
 u64 utime;
 u64 stime;
 u64 gtime;
};







enum uclamp_id {
 UCLAMP_MIN = 0,
 UCLAMP_MAX,
 UCLAMP_CNT
};






struct sched_param {
 int sched_priority;
};

struct sched_info {
# 409 "../include/linux/sched.h"
};
# 425 "../include/linux/sched.h"
struct load_weight {
 unsigned long weight;
 u32 inv_weight;
};
# 475 "../include/linux/sched.h"
struct sched_avg {
 u64 last_update_time;
 u64 load_sum;
 u64 runnable_sum;
 u32 util_sum;
 u32 period_contrib;
 unsigned long load_avg;
 unsigned long runnable_avg;
 unsigned long util_avg;
 unsigned int util_est;
} __attribute__((__aligned__((1 << (5)))));
# 498 "../include/linux/sched.h"
struct sched_statistics {
# 538 "../include/linux/sched.h"
} __attribute__((__aligned__((1 << (5)))));

struct sched_entity {

 struct load_weight load;
 struct rb_node run_node;
 u64 deadline;
 u64 min_vruntime;

 struct list_head group_node;
 unsigned int on_rq;

 u64 exec_start;
 u64 sum_exec_runtime;
 u64 prev_sum_exec_runtime;
 u64 vruntime;
 s64 vlag;
 u64 slice;

 u64 nr_migrations;
# 579 "../include/linux/sched.h"
};

struct sched_rt_entity {
 struct list_head run_list;
 unsigned long timeout;
 unsigned long watchdog_stamp;
 unsigned int time_slice;
 unsigned short on_rq;
 unsigned short on_list;

 struct sched_rt_entity *back;







} ;

typedef bool (*dl_server_has_tasks_f)(struct sched_dl_entity *);
typedef struct task_struct *(*dl_server_pick_f)(struct sched_dl_entity *);

struct sched_dl_entity {
 struct rb_node rb_node;






 u64 dl_runtime;
 u64 dl_deadline;
 u64 dl_period;
 u64 dl_bw;
 u64 dl_density;






 s64 runtime;
 u64 deadline;
 unsigned int flags;
# 645 "../include/linux/sched.h"
 unsigned int dl_throttled : 1;
 unsigned int dl_yielded : 1;
 unsigned int dl_non_contending : 1;
 unsigned int dl_overrun : 1;
 unsigned int dl_server : 1;





 struct hrtimer dl_timer;
# 664 "../include/linux/sched.h"
 struct hrtimer inactive_timer;
# 675 "../include/linux/sched.h"
 struct rq *rq;
 dl_server_has_tasks_f server_has_tasks;
 dl_server_pick_f server_pick;







 struct sched_dl_entity *pi_se;

};
# 724 "../include/linux/sched.h"
union rcu_special {
 struct {
  u8 blocked;
  u8 need_qs;
  u8 exp_hint;
  u8 need_mb;
 } b;
 u32 s;
};

enum perf_event_task_context {
 perf_invalid_context = -1,
 perf_hw_context = 0,
 perf_sw_context,
 perf_nr_task_contexts,
};







struct wake_q_node {
 struct wake_q_node *next;
};

struct kmap_ctrl {




};

struct task_struct {







 unsigned int __state;


 unsigned int saved_state;







 void *stack;
 refcount_t usage;

 unsigned int flags;
 unsigned int ptrace;
# 804 "../include/linux/sched.h"
 int on_rq;

 int prio;
 int static_prio;
 int normal_prio;
 unsigned int rt_priority;

 struct sched_entity se;
 struct sched_rt_entity rt;
 struct sched_dl_entity dl;
 struct sched_dl_entity *dl_server;



 const struct sched_class *sched_class;
# 844 "../include/linux/sched.h"
 struct sched_statistics stats;
# 855 "../include/linux/sched.h"
 unsigned int policy;
 unsigned long max_allowed_capacity;
 int nr_cpus_allowed;
 const cpumask_t *cpus_ptr;
 cpumask_t *user_cpus_ptr;
 cpumask_t cpus_mask;
 void *migration_pending;



 unsigned short migration_flags;
# 885 "../include/linux/sched.h"
 int trc_reader_nesting;
 int trc_ipi_to_cpu;
 union rcu_special trc_reader_special;
 struct list_head trc_holdout_list;
 struct list_head trc_blkd_node;
 int trc_blkd_cpu;


 struct sched_info sched_info;

 struct list_head tasks;





 struct mm_struct *mm;
 struct mm_struct *active_mm;
 struct address_space *faults_disabled_mapping;

 int exit_state;
 int exit_code;
 int exit_signal;

 int pdeath_signal;

 unsigned long jobctl;


 unsigned int personality;


 unsigned sched_reset_on_fork:1;
 unsigned sched_contributes_to_load:1;
 unsigned sched_migrated:1;


 unsigned :0;
# 939 "../include/linux/sched.h"
 unsigned sched_remote_wakeup:1;

 unsigned sched_rt_mutex:1;



 unsigned in_execve:1;
 unsigned in_iowait:1;
# 955 "../include/linux/sched.h"
 unsigned in_lru_fault:1;
# 971 "../include/linux/sched.h"
 unsigned in_memstall:1;



 unsigned in_page_owner:1;
# 994 "../include/linux/sched.h"
 unsigned long atomic_flags;

 struct restart_block restart_block;

 pid_t pid;
 pid_t tgid;
# 1012 "../include/linux/sched.h"
 struct task_struct *real_parent;


 struct task_struct *parent;




 struct list_head children;
 struct list_head sibling;
 struct task_struct *group_leader;







 struct list_head ptraced;
 struct list_head ptrace_entry;


 struct pid *thread_pid;
 struct hlist_node pid_links[PIDTYPE_MAX];
 struct list_head thread_node;

 struct completion *vfork_done;


 int *set_child_tid;


 int *clear_child_tid;


 void *worker_private;

 u64 utime;
 u64 stime;




 u64 gtime;
 struct prev_cputime prev_cputime;
# 1065 "../include/linux/sched.h"
 unsigned long nvcsw;
 unsigned long nivcsw;


 u64 start_time;


 u64 start_boottime;


 unsigned long min_flt;
 unsigned long maj_flt;


 struct posix_cputimers posix_cputimers;
# 1088 "../include/linux/sched.h"
 const struct cred *ptracer_cred;


 const struct cred *real_cred;


 const struct cred *cred;



 struct key *cached_requested_key;
# 1108 "../include/linux/sched.h"
 char comm[TASK_COMM_LEN];

 struct nameidata *nameidata;


 struct sysv_sem sysvsem;
 struct sysv_shm sysvshm;






 struct fs_struct *fs;


 struct files_struct *files;






 struct nsproxy *nsproxy;


 struct signal_struct *signal;
 struct sighand_struct *sighand;
 sigset_t blocked;
 sigset_t real_blocked;

 sigset_t saved_sigmask;
 struct sigpending pending;
 unsigned long sas_ss_sp;
 size_t sas_ss_size;
 unsigned int sas_ss_flags;

 struct callback_head *task_works;
# 1154 "../include/linux/sched.h"
 struct seccomp seccomp;
 struct syscall_user_dispatch syscall_dispatch;


 u64 parent_exec_id;
 u64 self_exec_id;


 spinlock_t alloc_lock;


 raw_spinlock_t pi_lock;

 struct wake_q_node wake_q;



 struct rb_root_cached pi_waiters;

 struct task_struct *pi_top_task;

 struct rt_mutex_waiter *pi_blocked_on;




 struct mutex_waiter *blocked_on;







 struct irqtrace_events irqtrace;
 unsigned int hardirq_threaded;
 u64 hardirq_chain_key;
 int softirqs_enabled;
 int softirq_context;
 int irq_config;







 u64 curr_chain_key;
 int lockdep_depth;
 unsigned int lockdep_recursion;
 struct held_lock held_locks[48UL];







 void *journal_info;


 struct bio_list *bio_list;


 struct blk_plug *plug;


 struct reclaim_state *reclaim_state;

 struct io_context *io_context;


 struct capture_control *capture_control;


 unsigned long ptrace_message;
 kernel_siginfo_t *last_siginfo;

 struct task_io_accounting ioac;


 unsigned int psi_flags;
# 1264 "../include/linux/sched.h"
 struct robust_list_head *robust_list;



 struct list_head pi_state_list;
 struct futex_pi_state *pi_state_cache;
 struct mutex futex_exit_mutex;
 unsigned int futex_state;
# 1358 "../include/linux/sched.h"
 struct tlbflush_unmap_batch tlb_ubc;


 struct pipe_inode_info *splice_pipe;

 struct page_frag task_frag;
# 1377 "../include/linux/sched.h"
 int nr_dirtied;
 int nr_dirtied_pause;

 unsigned long dirty_paused_when;
# 1390 "../include/linux/sched.h"
 u64 timer_slack_ns;
 u64 default_timer_slack_ns;
# 1412 "../include/linux/sched.h"
 struct kunit *kunit_test;
# 1438 "../include/linux/sched.h"
 unsigned long trace_recursion;
# 1492 "../include/linux/sched.h"
 struct kmap_ctrl kmap_ctrl;






 struct callback_head rcu;
 refcount_t rcu_users;
 int pagefault_disabled;

 struct task_struct *oom_reaper_list;
 struct timer_list oom_reaper_timer;
# 1522 "../include/linux/sched.h"
 struct bpf_local_storage *bpf_storage;

 struct bpf_run_ctx *bpf_ctx;


 struct bpf_net_context *bpf_net_context;
# 1577 "../include/linux/sched.h"
 struct thread_struct thread;






} __attribute__ ((aligned (64)));




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __task_state_index(unsigned int tsk_state,
           unsigned int tsk_exit_state)
{
 unsigned int state = (tsk_state | tsk_exit_state) & (0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040);

 do { __attribute__((__noreturn__)) extern void __compiletime_assert_79(void) __attribute__((__error__("BUILD_BUG_ON failed: " "((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) == 0 || ((((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) & (((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) - 1)) != 0)"))); if (!(!(((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) == 0 || ((((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) & (((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1)) - 1)) != 0)))) __compiletime_assert_79(); } while (0);

 if ((tsk_state & (0x00000002 | 0x00000400)) == (0x00000002 | 0x00000400))
  state = ((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1);






 if (tsk_state & 0x00001000)
  state = 0x00000002;

 return fls(state);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int task_state_index(struct task_struct *tsk)
{
 return __task_state_index(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_80(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(tsk->__state) == sizeof(char) || sizeof(tsk->__state) == sizeof(short) || sizeof(tsk->__state) == sizeof(int) || sizeof(tsk->__state) == sizeof(long)) || sizeof(tsk->__state) == sizeof(long long))) __compiletime_assert_80(); } while (0); (*(const volatile typeof( _Generic((tsk->__state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (tsk->__state))) *)&(tsk->__state)); }), tsk->exit_state);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char task_index_to_char(unsigned int state)
{
 static const char state_char[] = "RSDTtXZPI";

 do { __attribute__((__noreturn__)) extern void __compiletime_assert_81(void) __attribute__((__error__("BUILD_BUG_ON failed: " "TASK_REPORT_MAX * 2 != 1 << (sizeof(state_char) - 1)"))); if (!(!((((0x00000000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000008 | 0x00000010 | 0x00000020 | 0x00000040) + 1) << 1) * 2 != 1 << (sizeof(state_char) - 1)))) __compiletime_assert_81(); } while (0);

 return state_char[state];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char task_state_to_char(struct task_struct *tsk)
{
 return task_index_to_char(task_state_index(tsk));
}

extern struct pid *cad_pid;
# 1697 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool is_percpu_thread(void)
{




 return true;

}
# 1729 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_no_new_privs(struct task_struct *p) { return ((__builtin_constant_p(0) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(0, &p->atomic_flags) : arch_test_bit(0, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_no_new_privs(struct task_struct *p) { set_bit(0, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spread_page(struct task_struct *p) { return ((__builtin_constant_p(1) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(1, &p->atomic_flags) : arch_test_bit(1, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spread_page(struct task_struct *p) { set_bit(1, &p->atomic_flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_clear_spread_page(struct task_struct *p) { clear_bit(1, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spread_slab(struct task_struct *p) { return ((__builtin_constant_p(2) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(2, &p->atomic_flags) : arch_test_bit(2, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spread_slab(struct task_struct *p) { set_bit(2, &p->atomic_flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_clear_spread_slab(struct task_struct *p) { clear_bit(2, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spec_ssb_disable(struct task_struct *p) { return ((__builtin_constant_p(3) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(3, &p->atomic_flags) : arch_test_bit(3, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spec_ssb_disable(struct task_struct *p) { set_bit(3, &p->atomic_flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_clear_spec_ssb_disable(struct task_struct *p) { clear_bit(3, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spec_ssb_noexec(struct task_struct *p) { return ((__builtin_constant_p(7) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(7, &p->atomic_flags) : arch_test_bit(7, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spec_ssb_noexec(struct task_struct *p) { set_bit(7, &p->atomic_flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_clear_spec_ssb_noexec(struct task_struct *p) { clear_bit(7, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spec_ssb_force_disable(struct task_struct *p) { return ((__builtin_constant_p(4) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(4, &p->atomic_flags) : arch_test_bit(4, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spec_ssb_force_disable(struct task_struct *p) { set_bit(4, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spec_ib_disable(struct task_struct *p) { return ((__builtin_constant_p(5) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(5, &p->atomic_flags) : arch_test_bit(5, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spec_ib_disable(struct task_struct *p) { set_bit(5, &p->atomic_flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_clear_spec_ib_disable(struct task_struct *p) { clear_bit(5, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_spec_ib_force_disable(struct task_struct *p) { return ((__builtin_constant_p(6) && __builtin_constant_p((uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&p->atomic_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&p->atomic_flags))) ? const_test_bit(6, &p->atomic_flags) : arch_test_bit(6, &p->atomic_flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_set_spec_ib_force_disable(struct task_struct *p) { set_bit(6, &p->atomic_flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
current_restore_flags(unsigned long orig_flags, unsigned long flags)
{
 (__current_thread_info->task)->flags &= ~flags;
 (__current_thread_info->task)->flags |= orig_flags & flags;
}

extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
extern int task_can_attach(struct task_struct *p);
extern int dl_bw_alloc(int cpu, u64 dl_bw);
extern void dl_bw_free(int cpu, u64 dl_bw);
# 1788 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
{

 if ((*((new_mask)->bits) & 1) == 0)
  return -22;
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node)
{
 if (src->user_cpus_ptr)
  return -22;
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void release_user_cpus_ptr(struct task_struct *p)
{
 ({ int __ret_warn_on = !!(p->user_cpus_ptr); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/sched.h", 1806, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
{
 return 0;
}


extern int yield_to(struct task_struct *p, bool preempt);
extern void set_user_nice(struct task_struct *p, long nice);
extern int task_prio(const struct task_struct *p);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_nice(const struct task_struct *p)
{
 return (((p)->static_prio) - (100 + (19 - -20 + 1) / 2));
}

extern int can_nice(const struct task_struct *p, const int nice);
extern int task_curr(const struct task_struct *p);
extern int idle_cpu(int cpu);
extern int available_idle_cpu(int cpu);
extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *);
extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *);
extern void sched_set_fifo(struct task_struct *p);
extern void sched_set_fifo_low(struct task_struct *p);
extern void sched_set_normal(struct task_struct *p, int nice);
extern int sched_setattr(struct task_struct *, const struct sched_attr *);
extern int sched_setattr_nocheck(struct task_struct *, const struct sched_attr *);
extern struct task_struct *idle_task(int cpu);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool is_idle_task(const struct task_struct *p)
{
 return !!(p->flags & 0x00000002);
}

extern struct task_struct *curr_task(int cpu);
extern void ia64_set_curr_task(int cpu, struct task_struct *p);

void yield(void);

union thread_union {
 struct task_struct task;

 struct thread_info thread_info;

 unsigned long stack[(1<<12)/sizeof(long)];
};


extern struct thread_info init_thread_info;


extern unsigned long init_stack[(1<<12) / sizeof(unsigned long)];
# 1890 "../include/linux/sched.h"
extern struct task_struct *find_task_by_vpid(pid_t nr);
extern struct task_struct *find_task_by_pid_ns(pid_t nr, struct pid_namespace *ns);




extern struct task_struct *find_get_task_by_vpid(pid_t nr);

extern int wake_up_state(struct task_struct *tsk, unsigned int state);
extern int wake_up_process(struct task_struct *tsk);
extern void wake_up_new_task(struct task_struct *tsk);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kick_process(struct task_struct *tsk) { }


extern void __set_task_comm(struct task_struct *tsk, const char *from, bool exec);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_task_comm(struct task_struct *tsk, const char *from)
{
 __set_task_comm(tsk, from, false);
}

extern char *__get_task_comm(char *to, size_t len, struct task_struct *tsk);
# 1932 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scheduler_ipi(void) { }


extern unsigned long wait_task_inactive(struct task_struct *, unsigned int match_state);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_tsk_thread_flag(struct task_struct *tsk, int flag)
{
 set_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_tsk_thread_flag(struct task_struct *tsk, int flag)
{
 clear_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_tsk_thread_flag(struct task_struct *tsk, int flag,
       bool value)
{
 update_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag, value);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_set_tsk_thread_flag(struct task_struct *tsk, int flag)
{
 return test_and_set_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_and_clear_tsk_thread_flag(struct task_struct *tsk, int flag)
{
 return test_and_clear_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_tsk_thread_flag(struct task_struct *tsk, int flag)
{
 return test_ti_thread_flag(((struct thread_info *)(tsk)->stack), flag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_tsk_need_resched(struct task_struct *tsk)
{
 set_tsk_thread_flag(tsk,3);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_tsk_need_resched(struct task_struct *tsk)
{
 clear_tsk_thread_flag(tsk,3);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int test_tsk_need_resched(struct task_struct *tsk)
{
 return __builtin_expect(!!(test_tsk_thread_flag(tsk,3)), 0);
}
# 1994 "../include/linux/sched.h"
extern int __cond_resched(void);
# 2019 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int _cond_resched(void)
{
 klp_sched_try_switch();
 return __cond_resched();
}
# 2042 "../include/linux/sched.h"
extern int __cond_resched_lock(spinlock_t *lock);
extern int __cond_resched_rwlock_read(rwlock_t *lock);
extern int __cond_resched_rwlock_write(rwlock_t *lock);
# 2080 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool need_resched(void)
{
 return __builtin_expect(!!(tif_need_resched()), 0);
}
# 2099 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int task_cpu(const struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_task_cpu(struct task_struct *p, unsigned int cpu)
{
}



extern bool sched_task_on_rq(struct task_struct *p);
extern unsigned long get_wchan(struct task_struct *p);
extern struct task_struct *cpu_curr_snapshot(int cpu);
# 2125 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vcpu_is_preempted(int cpu)
{
 return false;
}


extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
# 2159 "../include/linux/sched.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_core_free(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_core_fork(struct task_struct *p) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }


extern void sched_set_stop_task(int cpu, struct task_struct *stop);
# 13 "../include/linux/percpu.h" 2

# 1 "./arch/hexagon/include/generated/asm/percpu.h" 1
# 15 "../include/linux/percpu.h" 2
# 75 "../include/linux/percpu.h"
extern void *pcpu_base_addr;
extern const unsigned long *pcpu_unit_offsets;

struct pcpu_group_info {
 int nr_units;
 unsigned long base_offset;
 unsigned int *cpu_map;

};

struct pcpu_alloc_info {
 size_t static_size;
 size_t reserved_size;
 size_t dyn_size;
 size_t unit_size;
 size_t atom_size;
 size_t alloc_size;
 size_t __ai_size;
 int nr_groups;
 struct pcpu_group_info groups[];
};

enum pcpu_fc {
 PCPU_FC_AUTO,
 PCPU_FC_EMBED,
 PCPU_FC_PAGE,

 PCPU_FC_NR,
};
extern const char * const pcpu_fc_names[PCPU_FC_NR];

extern enum pcpu_fc pcpu_chosen_fc;

typedef int (pcpu_fc_cpu_to_node_fn_t)(int cpu);
typedef int (pcpu_fc_cpu_distance_fn_t)(unsigned int from, unsigned int to);

extern struct pcpu_alloc_info * __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pcpu_alloc_alloc_info(int nr_groups,
            int nr_units);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pcpu_free_alloc_info(struct pcpu_alloc_info *ai);

extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
      void *base_addr);

extern int __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
    size_t atom_size,
    pcpu_fc_cpu_distance_fn_t cpu_distance_fn,
    pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn);







extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
extern bool is_kernel_percpu_address(unsigned long addr);


extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) setup_per_cpu_areas(void);


extern void *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
       gfp_t gfp) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));
extern size_t pcpu_alloc_size(void *__pdata);
# 157 "../include/linux/percpu.h"
extern void free_percpu(void *__pdata);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_free_percpu(void *p) { void * _T = *(void * *)p; free_percpu(_T); }

extern phys_addr_t per_cpu_ptr_to_phys(void *addr);

extern unsigned long pcpu_nr_pages(void);
# 15 "../include/linux/percpu_counter.h" 2
# 135 "../include/linux/percpu_counter.h"
struct percpu_counter {
 s64 count;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int percpu_counter_init_many(struct percpu_counter *fbc,
        s64 amount, gfp_t gfp,
        u32 nr_counters)
{
 u32 i;

 for (i = 0; i < nr_counters; i++)
  fbc[i].count = amount;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int percpu_counter_init(struct percpu_counter *fbc, s64 amount,
          gfp_t gfp)
{
 return percpu_counter_init_many(fbc, amount, gfp, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_destroy_many(struct percpu_counter *fbc,
            u32 nr_counters)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_destroy(struct percpu_counter *fbc)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
{
 fbc->count = amount;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
 if (fbc->count > rhs)
  return 1;
 else if (fbc->count < rhs)
  return -1;
 else
  return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
__percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
{
 return percpu_counter_compare(fbc, rhs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
percpu_counter_add(struct percpu_counter *fbc, s64 amount)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 fbc->count += amount;
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
{
 unsigned long flags;
 bool good = false;
 s64 count;

 if (amount == 0)
  return true;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 count = fbc->count + amount;
 if ((amount > 0 && count <= limit) ||
     (amount < 0 && count >= limit)) {
  fbc->count = count;
  good = true;
 }
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
 return good;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
{
 percpu_counter_add(fbc, amount);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
 percpu_counter_add(fbc, amount);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 percpu_counter_read(struct percpu_counter *fbc)
{
 return fbc->count;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 percpu_counter_read_positive(struct percpu_counter *fbc)
{
 return fbc->count;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 percpu_counter_sum_positive(struct percpu_counter *fbc)
{
 return percpu_counter_read_positive(fbc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 percpu_counter_sum(struct percpu_counter *fbc)
{
 return percpu_counter_read(fbc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_counter_initialized(struct percpu_counter *fbc)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_sync(struct percpu_counter *fbc)
{
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_inc(struct percpu_counter *fbc)
{
 percpu_counter_add(fbc, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_dec(struct percpu_counter *fbc)
{
 percpu_counter_add(fbc, -1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_counter_sub(struct percpu_counter *fbc, s64 amount)
{
 percpu_counter_add(fbc, -amount);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
percpu_counter_sub_local(struct percpu_counter *fbc, s64 amount)
{
 percpu_counter_add_local(fbc, -amount);
}
# 22 "../include/linux/mm_types.h" 2

# 1 "../arch/hexagon/include/asm/mmu.h" 1








# 1 "../arch/hexagon/include/asm/vdso.h" 1
# 13 "../arch/hexagon/include/asm/vdso.h"
struct hexagon_vdso {
 u32 rt_signal_trampoline[2];
};
# 10 "../arch/hexagon/include/asm/mmu.h" 2






struct mm_context {
 unsigned long long generation;
 unsigned long ptbase;
 struct hexagon_vdso *vdso;
};

typedef struct mm_context mm_context_t;
# 24 "../include/linux/mm_types.h" 2








struct address_space;
struct mem_cgroup;
# 72 "../include/linux/mm_types.h"
struct page {
 unsigned long flags;







 union {
  struct {





   union {
    struct list_head lru;


    struct {

     void *__filler;

     unsigned int mlock_count;
    };


    struct list_head buddy_list;
    struct list_head pcp_list;
   };

   struct address_space *mapping;
   union {
    unsigned long index;
    unsigned long share;
   };






   unsigned long private;
  };
  struct {




   unsigned long pp_magic;
   struct page_pool *pp;
   unsigned long _pp_mapping_pad;
   unsigned long dma_addr;
   atomic_long_t pp_ref_count;
  };
  struct {
   unsigned long compound_head;
  };
  struct {

   struct dev_pagemap *pgmap;
   void *zone_device_data;
# 145 "../include/linux/mm_types.h"
  };


  struct callback_head callback_head;
 };

 union {
# 166 "../include/linux/mm_types.h"
  unsigned int page_type;
# 177 "../include/linux/mm_types.h"
  atomic_t _mapcount;
 };


 atomic_t _refcount;
# 219 "../include/linux/mm_types.h"
} __attribute__((__aligned__(sizeof(unsigned long))));
# 235 "../include/linux/mm_types.h"
struct encoded_page;
# 251 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct encoded_page *encode_page(struct page *page, unsigned long flags)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_82(void) __attribute__((__error__("BUILD_BUG_ON failed: " "flags > ENCODED_PAGE_BITS"))); if (!(!(flags > 3ul))) __compiletime_assert_82(); } while (0);
 return (struct encoded_page *)(flags | (unsigned long)page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long encoded_page_flags(struct encoded_page *page)
{
 return 3ul & (unsigned long)page;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *encoded_page_ptr(struct encoded_page *page)
{
 return (struct page *)(~3ul & (unsigned long)page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct encoded_page *encode_nr_pages(unsigned long nr)
{
 (void)({ bool __ret_do_once = !!((nr << 2) >> 2 != nr); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/mm_types.h", 269, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return (struct encoded_page *)(nr << 2);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long encoded_nr_pages(struct encoded_page *page)
{
 return ((unsigned long)page) >> 2;
}





typedef struct {
 unsigned long val;
} swp_entry_t;
# 324 "../include/linux/mm_types.h"
struct folio {

 union {
  struct {

   unsigned long flags;
   union {
    struct list_head lru;

    struct {
     void *__filler;

     unsigned int mlock_count;

    };

   };
   struct address_space *mapping;
   unsigned long index;
   union {
    void *private;
    swp_entry_t swap;
   };
   atomic_t _mapcount;
   atomic_t _refcount;
# 361 "../include/linux/mm_types.h"
  };
  struct page page;
 };
 union {
  struct {
   unsigned long _flags_1;
   unsigned long _head_1;

   atomic_t _large_mapcount;
   atomic_t _entire_mapcount;
   atomic_t _nr_pages_mapped;
   atomic_t _pincount;




  };
  struct page __page_1;
 };
 union {
  struct {
   unsigned long _flags_2;
   unsigned long _head_2;

   void *_hugetlb_subpool;
   void *_hugetlb_cgroup;
   void *_hugetlb_cgroup_rsvd;
   void *_hugetlb_hwpoison;

  };
  struct {
   unsigned long _flags_2a;
   unsigned long _head_2a;

   struct list_head _deferred_list;

  };
  struct page __page_2;
 };
};



_Static_assert(__builtin_offsetof(struct page, flags) == __builtin_offsetof(struct folio, flags), "offsetof(struct page, flags) == offsetof(struct folio, flags)");
_Static_assert(__builtin_offsetof(struct page, lru) == __builtin_offsetof(struct folio, lru), "offsetof(struct page, lru) == offsetof(struct folio, lru)");
_Static_assert(__builtin_offsetof(struct page, mapping) == __builtin_offsetof(struct folio, mapping), "offsetof(struct page, mapping) == offsetof(struct folio, mapping)");
_Static_assert(__builtin_offsetof(struct page, compound_head) == __builtin_offsetof(struct folio, lru), "offsetof(struct page, compound_head) == offsetof(struct folio, lru)");
_Static_assert(__builtin_offsetof(struct page, index) == __builtin_offsetof(struct folio, index), "offsetof(struct page, index) == offsetof(struct folio, index)");
_Static_assert(__builtin_offsetof(struct page, private) == __builtin_offsetof(struct folio, private), "offsetof(struct page, private) == offsetof(struct folio, private)");
_Static_assert(__builtin_offsetof(struct page, _mapcount) == __builtin_offsetof(struct folio, _mapcount), "offsetof(struct page, _mapcount) == offsetof(struct folio, _mapcount)");
_Static_assert(__builtin_offsetof(struct page, _refcount) == __builtin_offsetof(struct folio, _refcount), "offsetof(struct page, _refcount) == offsetof(struct folio, _refcount)");
# 425 "../include/linux/mm_types.h"
_Static_assert(__builtin_offsetof(struct folio, _flags_1) == __builtin_offsetof(struct page, flags) + sizeof(struct page), "offsetof(struct folio, _flags_1) == offsetof(struct page, flags) + sizeof(struct page)");
_Static_assert(__builtin_offsetof(struct folio, _head_1) == __builtin_offsetof(struct page, compound_head) + sizeof(struct page), "offsetof(struct folio, _head_1) == offsetof(struct page, compound_head) + sizeof(struct page)");




_Static_assert(__builtin_offsetof(struct folio, _flags_2) == __builtin_offsetof(struct page, flags) + 2 * sizeof(struct page), "offsetof(struct folio, _flags_2) == offsetof(struct page, flags) + 2 * sizeof(struct page)");
_Static_assert(__builtin_offsetof(struct folio, _head_2) == __builtin_offsetof(struct page, compound_head) + 2 * sizeof(struct page), "offsetof(struct folio, _head_2) == offsetof(struct page, compound_head) + 2 * sizeof(struct page)");
_Static_assert(__builtin_offsetof(struct folio, _flags_2a) == __builtin_offsetof(struct page, flags) + 2 * sizeof(struct page), "offsetof(struct folio, _flags_2a) == offsetof(struct page, flags) + 2 * sizeof(struct page)");
_Static_assert(__builtin_offsetof(struct folio, _head_2a) == __builtin_offsetof(struct page, compound_head) + 2 * sizeof(struct page), "offsetof(struct folio, _head_2a) == offsetof(struct page, compound_head) + 2 * sizeof(struct page)");
# 457 "../include/linux/mm_types.h"
struct ptdesc {
 unsigned long __page_flags;

 union {
  struct callback_head pt_rcu_head;
  struct list_head pt_list;
  struct {
   unsigned long _pt_pad_1;
   pgtable_t pmd_huge_pte;
  };
 };
 unsigned long __page_mapping;

 union {
  unsigned long pt_index;
  struct mm_struct *pt_mm;
  atomic_t pt_frag_refcount;
 };

 union {
  unsigned long _pt_pad_2;

  spinlock_t *ptl;



 };
 unsigned int __page_type;
 atomic_t __page_refcount;



};



_Static_assert(__builtin_offsetof(struct page, flags) == __builtin_offsetof(struct ptdesc, __page_flags), "offsetof(struct page, flags) == offsetof(struct ptdesc, __page_flags)");
_Static_assert(__builtin_offsetof(struct page, compound_head) == __builtin_offsetof(struct ptdesc, pt_list), "offsetof(struct page, compound_head) == offsetof(struct ptdesc, pt_list)");
_Static_assert(__builtin_offsetof(struct page, compound_head) == __builtin_offsetof(struct ptdesc, _pt_pad_1), "offsetof(struct page, compound_head) == offsetof(struct ptdesc, _pt_pad_1)");
_Static_assert(__builtin_offsetof(struct page, mapping) == __builtin_offsetof(struct ptdesc, __page_mapping), "offsetof(struct page, mapping) == offsetof(struct ptdesc, __page_mapping)");
_Static_assert(__builtin_offsetof(struct page, index) == __builtin_offsetof(struct ptdesc, pt_index), "offsetof(struct page, index) == offsetof(struct ptdesc, pt_index)");
_Static_assert(__builtin_offsetof(struct page, callback_head) == __builtin_offsetof(struct ptdesc, pt_rcu_head), "offsetof(struct page, callback_head) == offsetof(struct ptdesc, pt_rcu_head)");
_Static_assert(__builtin_offsetof(struct page, page_type) == __builtin_offsetof(struct ptdesc, __page_type), "offsetof(struct page, page_type) == offsetof(struct ptdesc, __page_type)");
_Static_assert(__builtin_offsetof(struct page, _refcount) == __builtin_offsetof(struct ptdesc, __page_refcount), "offsetof(struct page, _refcount) == offsetof(struct ptdesc, __page_refcount)");




_Static_assert(sizeof(struct ptdesc) <= sizeof(struct page), "sizeof(struct ptdesc) <= sizeof(struct page)");
# 535 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_private(struct page *page, unsigned long private)
{
 page->private = private;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *folio_get_private(struct folio *folio)
{
 return folio->private;
}

struct page_frag_cache {
 void * va;

 __u16 offset;
 __u16 size;






 unsigned int pagecnt_bias;
 bool pfmemalloc;
};

typedef unsigned long vm_flags_t;






struct vm_region {
 struct rb_node vm_rb;
 vm_flags_t vm_flags;
 unsigned long vm_start;
 unsigned long vm_end;
 unsigned long vm_top;
 unsigned long vm_pgoff;
 struct file *vm_file;

 int vm_usage;
 bool vm_icache_flushed : 1;

};



struct vm_userfaultfd_ctx {
 struct userfaultfd_ctx *ctx;
};





struct anon_vma_name {
 struct kref kref;

 char name[];
};
# 607 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct anon_vma_name *anon_vma_name_alloc(const char *name)
{
 return ((void *)0);
}


struct vma_lock {
 struct rw_semaphore lock;
};

struct vma_numab_state {





 unsigned long next_scan;





 unsigned long pids_active_reset;
# 646 "../include/linux/mm_types.h"
 unsigned long pids_active[2];


 int start_scan_seq;





 int prev_scan_seq;
};







struct vm_area_struct {


 union {
  struct {

   unsigned long vm_start;
   unsigned long vm_end;
  };



 };

 struct mm_struct *vm_mm;
 pgprot_t vm_page_prot;





 union {
  const vm_flags_t vm_flags;
  vm_flags_t __vm_flags;
 };
# 717 "../include/linux/mm_types.h"
 struct {
  struct rb_node rb;
  unsigned long rb_subtree_last;
 } shared;







 struct list_head anon_vma_chain;

 struct anon_vma *anon_vma;


 const struct vm_operations_struct *vm_ops;


 unsigned long vm_pgoff;

 struct file * vm_file;
 void * vm_private_data;
# 761 "../include/linux/mm_types.h"
 struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} ;
# 777 "../include/linux/mm_types.h"
struct kioctx_table;
struct iommu_mm_data;
struct mm_struct {
 struct {




  struct {







   atomic_t mm_count;
  } ;

  struct maple_tree mm_mt;

  unsigned long mmap_base;
  unsigned long mmap_legacy_base;





  unsigned long task_size;
  pgd_t * pgd;
# 815 "../include/linux/mm_types.h"
  atomic_t membarrier_state;
# 827 "../include/linux/mm_types.h"
  atomic_t mm_users;
# 846 "../include/linux/mm_types.h"
  atomic_long_t pgtables_bytes;

  int map_count;

  spinlock_t page_table_lock;
# 865 "../include/linux/mm_types.h"
  struct rw_semaphore mmap_lock;

  struct list_head mmlist;
# 891 "../include/linux/mm_types.h"
  unsigned long hiwater_rss;
  unsigned long hiwater_vm;

  unsigned long total_vm;
  unsigned long locked_vm;
  atomic64_t pinned_vm;
  unsigned long data_vm;
  unsigned long exec_vm;
  unsigned long stack_vm;
  unsigned long def_flags;






  seqcount_t write_protect_seq;

  spinlock_t arg_lock;

  unsigned long start_code, end_code, start_data, end_data;
  unsigned long start_brk, brk, start_stack;
  unsigned long arg_start, arg_end, env_start, env_end;

  unsigned long saved_auxv[(2*(0 + 22 + 1))];

  struct percpu_counter rss_stat[NR_MM_COUNTERS];

  struct linux_binfmt *binfmt;


  mm_context_t context;

  unsigned long flags;


  spinlock_t ioctx_lock;
  struct kioctx_table *ioctx_table;
# 943 "../include/linux/mm_types.h"
  struct user_namespace *user_ns;


  struct file *exe_file;
# 972 "../include/linux/mm_types.h"
  atomic_t tlb_flush_pending;




  struct uprobes_state uprobes_state;






  struct work_struct async_put_work;
# 1022 "../include/linux/mm_types.h"
 } ;





 unsigned long cpu_bitmap[];
};



extern struct mm_struct init_mm;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_init_cpumask(struct mm_struct *mm)
{
 unsigned long cpu_bitmap = (unsigned long)mm;

 cpu_bitmap += __builtin_offsetof(struct mm_struct, cpu_bitmap);
 cpumask_clear((struct cpumask *)cpu_bitmap);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) cpumask_t *mm_cpumask(struct mm_struct *mm)
{
 return (struct cpumask *)&mm->cpu_bitmap;
}



struct lru_gen_mm_list {

 struct list_head fifo;

 spinlock_t lock;
};
# 1088 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lru_gen_add_mm(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lru_gen_del_mm(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lru_gen_migrate_mm(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lru_gen_init_mm(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lru_gen_use_mm(struct mm_struct *mm)
{
}



struct vma_iterator {
 struct ma_state mas;
};
# 1124 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_iter_init(struct vma_iterator *vmi,
  struct mm_struct *mm, unsigned long addr)
{
 mas_init(&vmi->mas, &mm->mm_mt, addr);
}
# 1207 "../include/linux/mm_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_init_cid(struct mm_struct *mm) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mm_alloc_cid(struct mm_struct *mm) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_destroy_cid(struct mm_struct *mm) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int mm_cid_size(void)
{
 return 0;
}


struct mmu_gather;
extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);
extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
extern void tlb_finish_mmu(struct mmu_gather *tlb);

struct vm_fault;






typedef unsigned int vm_fault_t;
# 1255 "../include/linux/mm_types.h"
enum vm_fault_reason {
 VM_FAULT_OOM = ( vm_fault_t)0x000001,
 VM_FAULT_SIGBUS = ( vm_fault_t)0x000002,
 VM_FAULT_MAJOR = ( vm_fault_t)0x000004,
 VM_FAULT_HWPOISON = ( vm_fault_t)0x000010,
 VM_FAULT_HWPOISON_LARGE = ( vm_fault_t)0x000020,
 VM_FAULT_SIGSEGV = ( vm_fault_t)0x000040,
 VM_FAULT_NOPAGE = ( vm_fault_t)0x000100,
 VM_FAULT_LOCKED = ( vm_fault_t)0x000200,
 VM_FAULT_RETRY = ( vm_fault_t)0x000400,
 VM_FAULT_FALLBACK = ( vm_fault_t)0x000800,
 VM_FAULT_DONE_COW = ( vm_fault_t)0x001000,
 VM_FAULT_NEEDDSYNC = ( vm_fault_t)0x002000,
 VM_FAULT_COMPLETED = ( vm_fault_t)0x004000,
 VM_FAULT_HINDEX_MASK = ( vm_fault_t)0x0f0000,
};
# 1295 "../include/linux/mm_types.h"
struct vm_special_mapping {
 const char *name;







 struct page **pages;





 vm_fault_t (*fault)(const struct vm_special_mapping *sm,
    struct vm_area_struct *vma,
    struct vm_fault *vmf);

 int (*mremap)(const struct vm_special_mapping *sm,
       struct vm_area_struct *new_vma);
};

enum tlb_flush_reason {
 TLB_FLUSH_ON_TASK_SWITCH,
 TLB_REMOTE_SHOOTDOWN,
 TLB_LOCAL_SHOOTDOWN,
 TLB_LOCAL_MM_SHOOTDOWN,
 TLB_REMOTE_SEND_IPI,
 NR_TLB_FLUSH_REASONS,
};
# 1369 "../include/linux/mm_types.h"
enum fault_flag {
 FAULT_FLAG_WRITE = 1 << 0,
 FAULT_FLAG_MKWRITE = 1 << 1,
 FAULT_FLAG_ALLOW_RETRY = 1 << 2,
 FAULT_FLAG_RETRY_NOWAIT = 1 << 3,
 FAULT_FLAG_KILLABLE = 1 << 4,
 FAULT_FLAG_TRIED = 1 << 5,
 FAULT_FLAG_USER = 1 << 6,
 FAULT_FLAG_REMOTE = 1 << 7,
 FAULT_FLAG_INSTRUCTION = 1 << 8,
 FAULT_FLAG_INTERRUPTIBLE = 1 << 9,
 FAULT_FLAG_UNSHARE = 1 << 10,
 FAULT_FLAG_ORIG_PTE_VALID = 1 << 11,
 FAULT_FLAG_VMA_LOCK = 1 << 12,
};

typedef unsigned int zap_flags_t;


typedef int cydp_t;
# 1443 "../include/linux/mm_types.h"
enum {

 FOLL_WRITE = 1 << 0,

 FOLL_GET = 1 << 1,

 FOLL_DUMP = 1 << 2,

 FOLL_FORCE = 1 << 3,




 FOLL_NOWAIT = 1 << 4,

 FOLL_NOFAULT = 1 << 5,

 FOLL_HWPOISON = 1 << 6,

 FOLL_ANON = 1 << 7,





 FOLL_LONGTERM = 1 << 8,

 FOLL_SPLIT_PMD = 1 << 9,

 FOLL_PCI_P2PDMA = 1 << 10,

 FOLL_INTERRUPTIBLE = 1 << 11,
# 1483 "../include/linux/mm_types.h"
 FOLL_HONOR_NUMA_FAULT = 1 << 12,


};
# 23 "../include/linux/mmzone.h" 2
# 1 "../include/linux/page-flags.h" 1
# 95 "../include/linux/page-flags.h"
enum pageflags {
 PG_locked,
 PG_writeback,
 PG_referenced,
 PG_uptodate,
 PG_dirty,
 PG_lru,
 PG_head,
 PG_waiters,
 PG_active,
 PG_workingset,
 PG_error,
 PG_owner_priv_1,
 PG_arch_1,
 PG_reserved,
 PG_private,
 PG_private_2,
 PG_mappedtodisk,
 PG_reclaim,
 PG_swapbacked,
 PG_unevictable,

 PG_mlocked,
# 133 "../include/linux/page-flags.h"
 __NR_PAGEFLAGS,

 PG_readahead = PG_reclaim,
# 144 "../include/linux/page-flags.h"
 PG_anon_exclusive = PG_mappedtodisk,


 PG_checked = PG_owner_priv_1,


 PG_swapcache = PG_owner_priv_1,





 PG_fscache = PG_private_2,



 PG_pinned = PG_owner_priv_1,

 PG_savepinned = PG_dirty,

 PG_foreign = PG_owner_priv_1,

 PG_xen_remapped = PG_owner_priv_1,


 PG_isolated = PG_reclaim,


 PG_reported = PG_uptodate,
# 186 "../include/linux/page-flags.h"
 PG_has_hwpoisoned = PG_error,
 PG_large_rmappable = PG_workingset,
};
# 227 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct page *page_fixed_fake_head(const struct page *page)
{
 return page;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int page_is_fake_head(const struct page *page)
{
 return page_fixed_fake_head(page) != page;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long _compound_head(const struct page *page)
{
 unsigned long head = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_83(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->compound_head) == sizeof(char) || sizeof(page->compound_head) == sizeof(short) || sizeof(page->compound_head) == sizeof(int) || sizeof(page->compound_head) == sizeof(long)) || sizeof(page->compound_head) == sizeof(long long))) __compiletime_assert_83(); } while (0); (*(const volatile typeof( _Generic((page->compound_head), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->compound_head))) *)&(page->compound_head)); });

 if (__builtin_expect(!!(head & 1), 0))
  return head - 1;
 return (unsigned long)page_fixed_fake_head(page);
}
# 277 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageTail(const struct page *page)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_84(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->compound_head) == sizeof(char) || sizeof(page->compound_head) == sizeof(short) || sizeof(page->compound_head) == sizeof(int) || sizeof(page->compound_head) == sizeof(long)) || sizeof(page->compound_head) == sizeof(long long))) __compiletime_assert_84(); } while (0); (*(const volatile typeof( _Generic((page->compound_head), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->compound_head))) *)&(page->compound_head)); }) & 1 || page_is_fake_head(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageCompound(const struct page *page)
{
 return ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&page->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&page->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&page->flags))) ? const_test_bit(PG_head, &page->flags) : arch_test_bit(PG_head, &page->flags)) ||
        ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_85(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->compound_head) == sizeof(char) || sizeof(page->compound_head) == sizeof(short) || sizeof(page->compound_head) == sizeof(int) || sizeof(page->compound_head) == sizeof(long)) || sizeof(page->compound_head) == sizeof(long long))) __compiletime_assert_85(); } while (0); (*(const volatile typeof( _Generic((page->compound_head), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->compound_head))) *)&(page->compound_head)); }) & 1;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PagePoisoned(const struct page *page)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_86(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->flags) == sizeof(char) || sizeof(page->flags) == sizeof(short) || sizeof(page->flags) == sizeof(int) || sizeof(page->flags) == sizeof(long)) || sizeof(page->flags) == sizeof(long long))) __compiletime_assert_86(); } while (0); (*(const volatile typeof( _Generic((page->flags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->flags))) *)&(page->flags)); }) == -1l;
}


void page_init_poison(struct page *page, size_t size);






static const unsigned long *const_folio_flags(const struct folio *folio,
  unsigned n)
{
 const struct page *page = &folio->page;

 ((void)(sizeof(( long)(PageTail(page)))));
 ((void)(sizeof(( long)(n > 0 && !((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&page->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&page->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&page->flags))) ? const_test_bit(PG_head, &page->flags) : arch_test_bit(PG_head, &page->flags))))));
 return &page[n].flags;
}

static unsigned long *folio_flags(struct folio *folio, unsigned n)
{
 struct page *page = &folio->page;

 ((void)(sizeof(( long)(PageTail(page)))));
 ((void)(sizeof(( long)(n > 0 && !((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&page->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&page->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&page->flags))) ? const_test_bit(PG_head, &page->flags) : arch_test_bit(PG_head, &page->flags))))));
 return &page[n].flags;
}
# 507 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_locked(const struct folio *folio) { return ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_locked, const_folio_flags(folio, 0)) : arch_test_bit(PG_locked, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageLocked(const struct page *page) { return ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_locked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_locked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_locked(struct folio *folio) { ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_locked, folio_flags(folio, 0)) : arch___set_bit(PG_locked, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageLocked(struct page *page) { ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
generic___set_bit(PG_locked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch___set_bit(PG_locked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_locked(struct folio *folio) { ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_locked, folio_flags(folio, 0)) : arch___clear_bit(PG_locked, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageLocked(struct page *page) { ((__builtin_constant_p(PG_locked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
generic___clear_bit(PG_locked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch___clear_bit(PG_locked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_waiters(const struct folio *folio) { return ((__builtin_constant_p(PG_waiters) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_waiters, const_folio_flags(folio, 0)) : arch_test_bit(PG_waiters, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_waiters(struct folio *folio) { set_bit(PG_waiters, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_waiters(struct folio *folio) { clear_bit(PG_waiters, folio_flags(folio, 0)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_error(const struct folio *folio) { return ((__builtin_constant_p(PG_error) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_error, const_folio_flags(folio, 0)) : arch_test_bit(PG_error, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageError(const struct page *page) { return ((__builtin_constant_p(PG_error) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_error, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_error, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_error(struct folio *folio) { set_bit(PG_error, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageError(struct page *page) { set_bit(PG_error, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_error(struct folio *folio) { clear_bit(PG_error, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageError(struct page *page) { clear_bit(PG_error, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_error(struct folio *folio) { return test_and_clear_bit(PG_error, folio_flags(folio, 0)); } static inline 
__attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageError(struct page *page) { return test_and_clear_bit(PG_error, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_referenced(const struct folio *folio) { return ((__builtin_constant_p(PG_referenced) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_referenced, const_folio_flags(folio, 0)) : arch_test_bit(PG_referenced, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_referenced(struct folio *folio) { set_bit(PG_referenced, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_referenced(struct folio *folio) { clear_bit(PG_referenced, folio_flags(folio, 0)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_referenced(struct folio *folio) { return test_and_clear_bit(PG_referenced, folio_flags(folio, 0)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_referenced(struct folio *folio) { ((__builtin_constant_p(PG_referenced) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_referenced, folio_flags(folio, 0)) : arch___set_bit(PG_referenced, folio_flags(folio, 0))); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_dirty(const struct folio *folio) { return ((__builtin_constant_p(PG_dirty) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_dirty, const_folio_flags(folio, 0)) : arch_test_bit(PG_dirty, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageDirty(const struct page *page) { return ((__builtin_constant_p(PG_dirty) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? 
const_test_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch_test_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_dirty(struct folio *folio) { set_bit(PG_dirty, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageDirty(struct page *page) { set_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_dirty(struct folio *folio) { clear_bit(PG_dirty, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageDirty(struct page *page) { clear_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_set_dirty(struct folio *folio) { return test_and_set_bit(PG_dirty, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestSetPageDirty(struct page *page) { return test_and_set_bit(PG_dirty, &({ 
((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_dirty(struct folio *folio) { return test_and_clear_bit(PG_dirty, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageDirty(struct page *page) { return test_and_clear_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_dirty(struct folio *folio) { ((__builtin_constant_p(PG_dirty) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_dirty, folio_flags(folio, 0)) : arch___clear_bit(PG_dirty, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageDirty(struct page *page) { ((__builtin_constant_p(PG_dirty) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? generic___clear_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch___clear_bit(PG_dirty, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_lru(const struct folio *folio) { return ((__builtin_constant_p(PG_lru) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_lru, const_folio_flags(folio, 0)) : arch_test_bit(PG_lru, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageLRU(const struct page *page) { return ((__builtin_constant_p(PG_lru) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? 
const_test_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch_test_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_lru(struct folio *folio) { set_bit(PG_lru, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageLRU(struct page *page) { set_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_lru(struct folio *folio) { clear_bit(PG_lru, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageLRU(struct page *page) { clear_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_lru(struct folio *folio) { ((__builtin_constant_p(PG_lru) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? 
generic___clear_bit(PG_lru, folio_flags(folio, 0)) : arch___clear_bit(PG_lru, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageLRU(struct page *page) { ((__builtin_constant_p(PG_lru) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? generic___clear_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch___clear_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_lru(struct folio *folio) { return test_and_clear_bit(PG_lru, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageLRU(struct page *page) { return test_and_clear_bit(PG_lru, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_active(const struct folio *folio) { return ((__builtin_constant_p(PG_active) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_active, const_folio_flags(folio, 0)) : arch_test_bit(PG_active, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageActive(const struct page *page) { return ((__builtin_constant_p(PG_active) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? 
const_test_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch_test_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_active(struct folio *folio) { set_bit(PG_active, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageActive(struct page *page) { set_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_active(struct folio *folio) { clear_bit(PG_active, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageActive(struct page *page) { clear_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_active(struct folio *folio) { ((__builtin_constant_p(PG_active) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? 
generic___clear_bit(PG_active, folio_flags(folio, 0)) : arch___clear_bit(PG_active, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageActive(struct page *page) { ((__builtin_constant_p(PG_active) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? generic___clear_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch___clear_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_active(struct folio *folio) { return test_and_clear_bit(PG_active, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageActive(struct page *page) { return test_and_clear_bit(PG_active, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_workingset(const struct folio *folio) { return ((__builtin_constant_p(PG_workingset) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_workingset, const_folio_flags(folio, 0)) : arch_test_bit(PG_workingset, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageWorkingset(const struct page *page) { return ((__builtin_constant_p(PG_workingset) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? 
const_test_bit(PG_workingset, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch_test_bit(PG_workingset, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_workingset(struct folio *folio) { set_bit(PG_workingset, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageWorkingset(struct page *page) { set_bit(PG_workingset, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_workingset(struct folio *folio) { clear_bit(PG_workingset, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageWorkingset(struct page *page) { clear_bit(PG_workingset, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_workingset(struct folio *folio) { return test_and_clear_bit(PG_workingset, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageWorkingset(struct page *page) { return test_and_clear_bit(PG_workingset, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_checked(const struct folio *folio) { return ((__builtin_constant_p(PG_checked) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_checked, const_folio_flags(folio, 0)) : arch_test_bit(PG_checked, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageChecked(const struct page *page) { return ((__builtin_constant_p(PG_checked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_checked, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_checked, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_checked(struct folio *folio) { set_bit(PG_checked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageChecked(struct page *page) { set_bit(PG_checked, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_checked(struct folio *folio) { clear_bit(PG_checked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageChecked(struct page *page) { clear_bit(PG_checked, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_pinned(const struct folio *folio) { return ((__builtin_constant_p(PG_pinned) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_pinned, const_folio_flags(folio, 0)) : arch_test_bit(PG_pinned, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PagePinned(const struct page *page) { return ((__builtin_constant_p(PG_pinned) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_pinned, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_pinned, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_pinned(struct folio *folio) { set_bit(PG_pinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPagePinned(struct page *page) { set_bit(PG_pinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_pinned(struct folio *folio) { clear_bit(PG_pinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPagePinned(struct page *page) { clear_bit(PG_pinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_set_pinned(struct folio *folio) { return test_and_set_bit(PG_pinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestSetPagePinned(struct page *page) { return test_and_set_bit(PG_pinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_pinned(struct folio *folio) { return test_and_clear_bit(PG_pinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPagePinned(struct page *page) { return test_and_clear_bit(PG_pinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_savepinned(const struct folio *folio) { return ((__builtin_constant_p(PG_savepinned) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_savepinned, const_folio_flags(folio, 0)) : arch_test_bit(PG_savepinned, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageSavePinned(const struct page *page) { return ((__builtin_constant_p(PG_savepinned) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_savepinned, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_savepinned, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_savepinned(struct folio *folio) { set_bit(PG_savepinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageSavePinned(struct page *page) { set_bit(PG_savepinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_savepinned(struct folio *folio) { clear_bit(PG_savepinned, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageSavePinned(struct page *page) { clear_bit(PG_savepinned, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); };
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_foreign(const struct folio *folio) { return ((__builtin_constant_p(PG_foreign) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_foreign, const_folio_flags(folio, 0)) : arch_test_bit(PG_foreign, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageForeign(const struct page *page) { return ((__builtin_constant_p(PG_foreign) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_foreign, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_foreign, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_foreign(struct folio *folio) { set_bit(PG_foreign, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageForeign(struct page *page) { set_bit(PG_foreign, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_foreign(struct folio *folio) { clear_bit(PG_foreign, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageForeign(struct page *page) { clear_bit(PG_foreign, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); };
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_xen_remapped(const struct folio *folio) { return ((__builtin_constant_p(PG_xen_remapped) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_xen_remapped, const_folio_flags(folio, 0)) : arch_test_bit(PG_xen_remapped, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageXenRemapped(const struct page *page) { return ((__builtin_constant_p(PG_xen_remapped) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_xen_remapped, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_xen_remapped, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_xen_remapped(struct folio *folio) { set_bit(PG_xen_remapped, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageXenRemapped(struct page *page) { set_bit(PG_xen_remapped, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_xen_remapped(struct folio *folio) { clear_bit(PG_xen_remapped, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageXenRemapped(struct page *page) { clear_bit(PG_xen_remapped, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_xen_remapped(struct folio *folio) { return test_and_clear_bit(PG_xen_remapped, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageXenRemapped(struct page *page) { return test_and_clear_bit(PG_xen_remapped, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_reserved(const struct folio *folio) { return ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_reserved, const_folio_flags(folio, 0)) : arch_test_bit(PG_reserved, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageReserved(const struct page *page) { return ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_reserved, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_reserved, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_reserved(struct folio *folio) { set_bit(PG_reserved, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageReserved(struct page *page) { set_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_reserved(struct folio *folio) { clear_bit(PG_reserved, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageReserved(struct page *page) { clear_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_reserved(struct folio *folio) { ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_reserved, folio_flags(folio, 0)) : arch___clear_bit(PG_reserved, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageReserved(struct page *page) { ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? generic___clear_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch___clear_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_reserved(struct folio *folio) { ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_reserved, folio_flags(folio, 0)) : arch___set_bit(PG_reserved, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageReserved(struct page *page) { ((__builtin_constant_p(PG_reserved) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? generic___set_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch___set_bit(PG_reserved, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_swapbacked(const struct folio *folio) { return ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_swapbacked, const_folio_flags(folio, 0)) : arch_test_bit(PG_swapbacked, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageSwapBacked(const struct page *page) { return ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_swapbacked(struct folio *folio) { set_bit(PG_swapbacked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageSwapBacked(struct page *page) { set_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_swapbacked(struct folio *folio) { clear_bit(PG_swapbacked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageSwapBacked(struct page *page) { clear_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_swapbacked(struct folio *folio) { ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_swapbacked, folio_flags(folio, 0)) : arch___clear_bit(PG_swapbacked, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageSwapBacked(struct page *page) { ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? generic___clear_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch___clear_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_swapbacked(struct folio *folio) { ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_swapbacked, folio_flags(folio, 0)) : arch___set_bit(PG_swapbacked, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageSwapBacked(struct page *page) { ((__builtin_constant_p(PG_swapbacked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? generic___set_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch___set_bit(PG_swapbacked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); }






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_private(const struct folio *folio) { return ((__builtin_constant_p(PG_private) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_private, const_folio_flags(folio, 0)) : arch_test_bit(PG_private, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PagePrivate(const struct page *page) { return ((__builtin_constant_p(PG_private) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? 
const_test_bit(PG_private, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch_test_bit(PG_private, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_private(struct folio *folio) { set_bit(PG_private, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPagePrivate(struct page *page) { set_bit(PG_private, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_private(struct folio *folio) { clear_bit(PG_private, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPagePrivate(struct page *page) { clear_bit(PG_private, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_private_2(const struct folio *folio) { return ((__builtin_constant_p(PG_private_2) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_private_2, const_folio_flags(folio, 0)) : arch_test_bit(PG_private_2, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PagePrivate2(const struct page *page) { return ((__builtin_constant_p(PG_private_2) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? 
const_test_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch_test_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_private_2(struct folio *folio) { set_bit(PG_private_2, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPagePrivate2(struct page *page) { set_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_private_2(struct folio *folio) { clear_bit(PG_private_2, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPagePrivate2(struct page *page) { clear_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_set_private_2(struct folio *folio) { return test_and_set_bit(PG_private_2, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestSetPagePrivate2(struct page *page) { return test_and_set_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) 
__attribute__((__always_inline__)) bool folio_test_clear_private_2(struct folio *folio) { return test_and_clear_bit(PG_private_2, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPagePrivate2(struct page *page) { return test_and_clear_bit(PG_private_2, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_owner_priv_1(const struct folio *folio) { return ((__builtin_constant_p(PG_owner_priv_1) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_owner_priv_1, const_folio_flags(folio, 0)) : arch_test_bit(PG_owner_priv_1, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageOwnerPriv1(const struct page *page) { return ((__builtin_constant_p(PG_owner_priv_1) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? 
const_test_bit(PG_owner_priv_1, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch_test_bit(PG_owner_priv_1, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_owner_priv_1(struct folio *folio) { set_bit(PG_owner_priv_1, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageOwnerPriv1(struct page *page) { set_bit(PG_owner_priv_1, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_owner_priv_1(struct folio *folio) { clear_bit(PG_owner_priv_1, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageOwnerPriv1(struct page *page) { clear_bit(PG_owner_priv_1, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_owner_priv_1(struct folio *folio) { return test_and_clear_bit(PG_owner_priv_1, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageOwnerPriv1(struct page *page) { return test_and_clear_bit(PG_owner_priv_1, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); }





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_writeback(const struct folio *folio) { return ((__builtin_constant_p(PG_writeback) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_writeback, const_folio_flags(folio, 0)) : arch_test_bit(PG_writeback, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageWriteback(const struct page *page) { return ((__builtin_constant_p(PG_writeback) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_writeback, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_writeback, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_set_writeback(struct folio *folio) { return test_and_set_bit(PG_writeback, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestSetPageWriteback(struct page *page) { return test_and_set_bit(PG_writeback, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_writeback(struct folio *folio) { return test_and_clear_bit(PG_writeback, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageWriteback(struct page *page) { return test_and_clear_bit(PG_writeback, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_mappedtodisk(const struct folio *folio) { return ((__builtin_constant_p(PG_mappedtodisk) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_mappedtodisk, const_folio_flags(folio, 0)) : arch_test_bit(PG_mappedtodisk, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageMappedToDisk(const struct page *page) { return ((__builtin_constant_p(PG_mappedtodisk) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_mappedtodisk, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_mappedtodisk, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_mappedtodisk(struct folio *folio) { set_bit(PG_mappedtodisk, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageMappedToDisk(struct page *page) { set_bit(PG_mappedtodisk, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_mappedtodisk(struct folio *folio) { clear_bit(PG_mappedtodisk, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageMappedToDisk(struct page *page) { clear_bit(PG_mappedtodisk, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_reclaim(const struct folio *folio) { return ((__builtin_constant_p(PG_reclaim) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_reclaim, const_folio_flags(folio, 0)) : arch_test_bit(PG_reclaim, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageReclaim(const struct page *page) { return ((__builtin_constant_p(PG_reclaim) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_reclaim, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_reclaim, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_reclaim(struct folio *folio) { set_bit(PG_reclaim, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageReclaim(struct page *page) { set_bit(PG_reclaim, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_reclaim(struct folio *folio) { clear_bit(PG_reclaim, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageReclaim(struct page *page) { clear_bit(PG_reclaim, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_reclaim(struct folio *folio) { return test_and_clear_bit(PG_reclaim, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageReclaim(struct page *page) { return test_and_clear_bit(PG_reclaim, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_readahead(const struct folio *folio) { return ((__builtin_constant_p(PG_readahead) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_readahead, const_folio_flags(folio, 0)) : arch_test_bit(PG_readahead, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageReadahead(const struct page *page) { return ((__builtin_constant_p(PG_readahead) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_readahead, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_readahead, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_readahead(struct folio *folio) { set_bit(PG_readahead, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageReadahead(struct page *page) { set_bit(PG_readahead, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_readahead(struct folio *folio) { clear_bit(PG_readahead, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageReadahead(struct page *page) { clear_bit(PG_readahead, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_readahead(struct folio *folio) { return test_and_clear_bit(PG_readahead, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageReadahead(struct page *page) { return test_and_clear_bit(PG_readahead, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags); }
# 570 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_highmem(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageHighMem(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_highmem(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageHighMem(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_highmem(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageHighMem(struct page *page) { }
# 588 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_swapcache(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageSwapCache(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_swapcache(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageSwapCache(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_swapcache(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageSwapCache(struct page *page) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_unevictable(const struct folio *folio) { return ((__builtin_constant_p(PG_unevictable) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_unevictable, const_folio_flags(folio, 0)) : arch_test_bit(PG_unevictable, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageUnevictable(const struct page *page) { return ((__builtin_constant_p(PG_unevictable) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? 
const_test_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch_test_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_unevictable(struct folio *folio) { set_bit(PG_unevictable, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageUnevictable(struct page *page) { set_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_unevictable(struct folio *folio) { clear_bit(PG_unevictable, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageUnevictable(struct page *page) { clear_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_unevictable(struct folio *folio) { ((__builtin_constant_p(PG_unevictable) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_unevictable, folio_flags(folio, 0)) : arch___clear_bit(PG_unevictable, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageUnevictable(struct page *page) { ((__builtin_constant_p(PG_unevictable) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags))) ? generic___clear_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags) : arch___clear_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_unevictable(struct folio *folio) { return test_and_clear_bit(PG_unevictable, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageUnevictable(struct page *page) { return test_and_clear_bit(PG_unevictable, &({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); })->flags); }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_mlocked(const struct folio *folio) { return ((__builtin_constant_p(PG_mlocked) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_mlocked, const_folio_flags(folio, 0)) : arch_test_bit(PG_mlocked, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageMlocked(const struct page *page) { return ((__builtin_constant_p(PG_mlocked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? 
const_test_bit(PG_mlocked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch_test_bit(PG_mlocked, &({ ((void)(sizeof(( long)(0 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_mlocked(struct folio *folio) { set_bit(PG_mlocked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageMlocked(struct page *page) { set_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_mlocked(struct folio *folio) { clear_bit(PG_mlocked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageMlocked(struct page *page) { clear_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_mlocked(struct folio *folio) { ((__builtin_constant_p(PG_mlocked) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_mlocked, folio_flags(folio, 0)) : arch___clear_bit(PG_mlocked, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageMlocked(struct page *page) { ((__builtin_constant_p(PG_mlocked) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags))) ? generic___clear_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags) : arch___clear_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags)); }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_set_mlocked(struct folio *folio) { return test_and_set_bit(PG_mlocked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestSetPageMlocked(struct page *page) { return test_and_set_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_clear_mlocked(struct folio *folio) { return test_and_clear_bit(PG_mlocked, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int TestClearPageMlocked(struct page *page) { return test_and_clear_bit(PG_mlocked, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }
# 607 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_uncached(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageUncached(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_uncached(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageUncached(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_uncached(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageUncached(struct page *page) { }







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_hwpoison(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageHWPoison(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_hwpoison(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageHWPoison(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_hwpoison(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageHWPoison(struct page *page) { }
# 639 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_reported(const struct folio *folio) { return ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_reported, const_folio_flags(folio, 0)) : arch_test_bit(PG_reported, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageReported(const struct page *page) { return ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
const_test_bit(PG_reported, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch_test_bit(PG_reported, &({ ((void)(sizeof(( long)(0 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_reported(struct folio *folio) { ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_reported, folio_flags(folio, 0)) : arch___set_bit(PG_reported, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageReported(struct page *page) { ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? 
generic___set_bit(PG_reported, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch___set_bit(PG_reported, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_reported(struct folio *folio) { ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_reported, folio_flags(folio, 0)) : arch___clear_bit(PG_reported, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageReported(struct page *page) { ((__builtin_constant_p(PG_reported) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags))) ? generic___clear_bit(PG_reported, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags) : arch___clear_bit(PG_reported, &({ ((void)(sizeof(( long)(1 && PageCompound(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; }); })->flags)); }




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_vmemmap_self_hosted(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageVmemmapSelfHosted(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_vmemmap_self_hosted(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageVmemmapSelfHosted(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_vmemmap_self_hosted(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageVmemmapSelfHosted(struct page *page) { }
# 682 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_mapping_flags(const struct folio *folio)
{
 return ((unsigned long)folio->mapping & (0x1 | 0x2)) != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool PageMappingFlags(const struct page *page)
{
 return ((unsigned long)page->mapping & (0x1 | 0x2)) != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_anon(const struct folio *folio)
{
 return ((unsigned long)folio->mapping & 0x1) != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool PageAnon(const struct page *page)
{
 return folio_test_anon((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __folio_test_movable(const struct folio *folio)
{
 return ((unsigned long)folio->mapping & (0x1 | 0x2)) ==
   0x2;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool __PageMovable(const struct page *page)
{
 return ((unsigned long)page->mapping & (0x1 | 0x2)) ==
    0x2;
}
# 732 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_ksm(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageKsm(const struct page *page) { return 0; }


u64 stable_page_flags(const struct page *page);
# 750 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_xor_flags_has_waiters(struct folio *folio,
  unsigned long mask)
{
 return xor_unlock_is_negative_byte(mask, folio_flags(folio, 0));
}
# 766 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_uptodate(const struct folio *folio)
{
 bool ret = ((__builtin_constant_p(PG_uptodate) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_uptodate, const_folio_flags(folio, 0)) : arch_test_bit(PG_uptodate, const_folio_flags(folio, 0)));
# 777 "../include/linux/page-flags.h"
 if (ret)
  __asm__ __volatile__("": : :"memory");

 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool PageUptodate(const struct page *page)
{
 return folio_test_uptodate((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_mark_uptodate(struct folio *folio)
{
 __asm__ __volatile__("": : :"memory");
 ((__builtin_constant_p(PG_uptodate) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_uptodate, folio_flags(folio, 0)) : arch___set_bit(PG_uptodate, folio_flags(folio, 0)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_mark_uptodate(struct folio *folio)
{





 __asm__ __volatile__("": : :"memory");
 set_bit(PG_uptodate, folio_flags(folio, 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageUptodate(struct page *page)
{
 __folio_mark_uptodate((struct folio *)page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageUptodate(struct page *page)
{
 folio_mark_uptodate((struct folio *)page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_uptodate(struct folio *folio) { clear_bit(PG_uptodate, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageUptodate(struct page *page) { clear_bit(PG_uptodate, &({ ((void)(sizeof(( long)(1 && PageTail(page))))); ({ ((void)(sizeof(( long)(PagePoisoned(((typeof(page))_compound_head(page))))))); ((typeof(page))_compound_head(page)); }); })->flags); }

void __folio_start_writeback(struct folio *folio, bool keep_write);
void set_page_writeback(struct page *page);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_head(const struct folio *folio)
{
 return ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_head, const_folio_flags(folio, 0)) : arch_test_bit(PG_head, const_folio_flags(folio, 0)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageHead(const struct page *page)
{
 ({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; });
 return ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&page->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&page->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&page->flags))) ? const_test_bit(PG_head, &page->flags) : arch_test_bit(PG_head, &page->flags)) && !page_is_fake_head(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_head(struct folio *folio) { ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___set_bit(PG_head, folio_flags(folio, 0)) : arch___set_bit(PG_head, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageHead(struct page *page) { ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? generic___set_bit(PG_head, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch___set_bit(PG_head, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_head(struct folio *folio) { ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(folio_flags(folio, 0)))) ? generic___clear_bit(PG_head, folio_flags(folio, 0)) : arch___clear_bit(PG_head, folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageHead(struct page *page) { ((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? generic___clear_bit(PG_head, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch___clear_bit(PG_head, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_head(struct folio *folio) { clear_bit(PG_head, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageHead(struct page *page) { clear_bit(PG_head, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); }







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_large(const struct folio *folio)
{
 return folio_test_head(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void set_compound_head(struct page *page, struct page *head)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_87(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->compound_head) == sizeof(char) || sizeof(page->compound_head) == sizeof(short) || sizeof(page->compound_head) == sizeof(int) || sizeof(page->compound_head) == sizeof(long)) || sizeof(page->compound_head) == sizeof(long long))) __compiletime_assert_87(); } while (0); do { *(volatile typeof(page->compound_head) *)&(page->compound_head) = ((unsigned long)head + 1); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void clear_compound_head(struct page *page)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_88(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->compound_head) == sizeof(char) || sizeof(page->compound_head) == sizeof(short) || sizeof(page->compound_head) == sizeof(int) || sizeof(page->compound_head) == sizeof(long)) || sizeof(page->compound_head) == sizeof(long long))) __compiletime_assert_88(); } while (0); do { *(volatile typeof(page->compound_head) *)&(page->compound_head) = (0); } while (0); } while (0);
}
# 869 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_large_rmappable(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_large_rmappable(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_large_rmappable(struct folio *folio) { }
# 909 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_transhuge(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageTransHuge(const struct page *page) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_transcompound(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageTransCompound(const struct page *page) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_transcompoundmap(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageTransCompoundMap(const struct page *page) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_transtail(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageTransTail(const struct page *page) { return 0; }
# 925 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_has_hwpoisoned(const struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int PageHasHWPoisoned(const struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_has_hwpoisoned(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void SetPageHasHWPoisoned(struct page *page) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_clear_has_hwpoisoned(struct folio *folio) { } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ClearPageHasHWPoisoned(struct page *page) { }
 static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_set_has_hwpoisoned(struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int TestSetPageHasHWPoisoned(struct page *page) { return 0; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_clear_has_hwpoisoned(struct folio *folio) { return false; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int TestClearPageHasHWPoisoned(struct page *page) { return 0; }
# 938 "../include/linux/page-flags.h"
enum pagetype {
 PG_buddy = 0x40000000,
 PG_offline = 0x20000000,
 PG_table = 0x10000000,
 PG_guard = 0x08000000,
 PG_hugetlb = 0x04000000,
 PG_slab = 0x02000000,
 PG_zsmalloc = 0x01000000,

 PAGE_TYPE_BASE = 0x80000000,






 PAGE_MAPCOUNT_RESERVE = ~0x0000ffff,
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_type_has_type(unsigned int page_type)
{
 return (int)page_type < PAGE_MAPCOUNT_RESERVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_has_type(const struct page *page)
{
 return page_type_has_type(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_89(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_89(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }));
}
# 1009 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_buddy(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_90(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_90(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_buddy)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_buddy(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_91(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_91(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed 
char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_91(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_91(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1009, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_buddy; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_buddy(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_buddy(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!folio_test_buddy(folio)"")"); do { ({ do 
{} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1009, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_buddy; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageBuddy(const struct page *page) { return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_92(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_92(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | PG_buddy)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageBuddy(struct page *page) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_93(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_93(); } while (0); 
(*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_93(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_93(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1009, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type &= ~PG_buddy; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageBuddy(struct page *page) { do { if (__builtin_expect(!!(!PageBuddy(page)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!PageBuddy(page)"")"); do { 
({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1009, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type |= PG_buddy; }
# 1040 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_offline(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_94(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_94(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_offline)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_offline(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_95(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_95(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, 
signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_95(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_95(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1040, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_offline; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_offline(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_offline(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" 
"!folio_test_offline(folio)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1040, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_offline; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageOffline(const struct page *page) { return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_96(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_96(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | PG_offline)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageOffline(struct page *page) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_97(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long 
long))) __compiletime_assert_97(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_97(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_97(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1040, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type &= ~PG_offline; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageOffline(struct page *page) { do { if (__builtin_expect(!!(!PageOffline(page)), 0)) { 
dump_page(page, "VM_BUG_ON_PAGE(" "!PageOffline(page)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1040, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type |= PG_offline; }

extern void page_offline_freeze(void);
extern void page_offline_thaw(void);
extern void page_offline_begin(void);
extern void page_offline_end(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_pgtable(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_98(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_98(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_table)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_pgtable(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_99(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_99(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, 
signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_99(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_99(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1050, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_table; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_pgtable(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_pgtable(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" 
"!folio_test_pgtable(folio)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1050, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_table; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageTable(const struct page *page) { return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_100(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_100(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | PG_table)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageTable(struct page *page) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_101(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long 
long))) __compiletime_assert_101(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_101(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_101(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1050, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type &= ~PG_table; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageTable(struct page *page) { do { if (__builtin_expect(!!(!PageTable(page)), 0)) { 
dump_page(page, "VM_BUG_ON_PAGE(" "!PageTable(page)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1050, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type |= PG_table; }




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_guard(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_102(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_102(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_guard)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_guard(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_103(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_103(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, 
signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_103(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_103(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1055, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_guard; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_guard(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_guard(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!folio_test_guard(folio)"")"); 
do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1055, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_guard; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageGuard(const struct page *page) { return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_104(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_104(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | PG_guard)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageGuard(struct page *page) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_105(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_105(); } 
while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_105(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_105(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1055, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type &= ~PG_guard; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageGuard(struct page *page) { do { if (__builtin_expect(!!(!PageGuard(page)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" 
"!PageGuard(page)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1055, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type |= PG_guard; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_slab(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_106(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_106(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_slab)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_slab(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_107(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_107(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed 
char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_107(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_107(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1057, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_slab; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_slab(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_slab(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!folio_test_slab(folio)"")"); do { ({ do 
{} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1057, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_slab; }
# 1066 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool PageSlab(const struct page *page)
{
 return folio_test_slab((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_hugetlb(const struct folio *folio) { return false; }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_zsmalloc(const struct folio *folio){ return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_108(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_108(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | PG_zsmalloc)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_set_zsmalloc(struct folio *folio) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_109(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_109(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned 
char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_109(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(folio->page.page_type) == sizeof(char) || sizeof(folio->page.page_type) == sizeof(short) || sizeof(folio->page.page_type) == sizeof(int) || sizeof(folio->page.page_type) == sizeof(long)) || sizeof(folio->page.page_type) == sizeof(long long))) __compiletime_assert_109(); } while (0); (*(const volatile typeof( _Generic((folio->page.page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (folio->page.page_type))) *)&(folio->page.page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1077, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type &= ~PG_zsmalloc; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __folio_clear_zsmalloc(struct folio *folio) { do { if (__builtin_expect(!!(!folio_test_zsmalloc(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" 
"!folio_test_zsmalloc(folio)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1077, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); folio->page.page_type |= PG_zsmalloc; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageZsmalloc(const struct page *page) { return ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_110(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_110(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | PG_zsmalloc)) == PAGE_TYPE_BASE); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __SetPageZsmalloc(struct page *page) { do { if (__builtin_expect(!!(!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_111(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == 
sizeof(long long))) __compiletime_assert_111(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_111(void) __attribute__((__error__(\"Unsupported access size for {READ,WRITE}_ONCE().\"))); if (!((sizeof(page->page_type) == sizeof(char) || sizeof(page->page_type) == sizeof(short) || sizeof(page->page_type) == sizeof(int) || sizeof(page->page_type) == sizeof(long)) || sizeof(page->page_type) == sizeof(long long))) __compiletime_assert_111(); } while (0); (*(const volatile typeof( _Generic((page->page_type), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (page->page_type))) *)&(page->page_type)); }) & (PAGE_TYPE_BASE | 0)) == PAGE_TYPE_BASE)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1077, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type &= ~PG_zsmalloc; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageZsmalloc(struct page *page) { do { if 
(__builtin_expect(!!(!PageZsmalloc(page)), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "!PageZsmalloc(page)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page-flags.h", 1077, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0); page->page_type |= PG_zsmalloc; }
# 1087 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool PageHuge(const struct page *page)
{
 return folio_test_hugetlb((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_page_hwpoison(const struct page *page)
{
 const struct folio *folio;

 if (PageHWPoison(page))
  return true;
 folio = (_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page)));
 return folio_test_hugetlb(folio) && PageHWPoison(&folio->page);
}

bool is_free_buddy_page(const struct page *page);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool folio_test_isolated(const struct folio *folio) { return ((__builtin_constant_p(PG_isolated) && __builtin_constant_p((uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0)) && (uintptr_t)(const_folio_flags(folio, 0)) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(const_folio_flags(folio, 0)))) ? const_test_bit(PG_isolated, const_folio_flags(folio, 0)) : arch_test_bit(PG_isolated, const_folio_flags(folio, 0))); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageIsolated(const struct page *page) { return ((__builtin_constant_p(PG_isolated) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? 
const_test_bit(PG_isolated, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch_test_bit(PG_isolated, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_set_isolated(struct folio *folio) { set_bit(PG_isolated, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageIsolated(struct page *page) { set_bit(PG_isolated, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void folio_clear_isolated(struct folio *folio) { clear_bit(PG_isolated, folio_flags(folio, 0)); } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageIsolated(struct page *page) { clear_bit(PG_isolated, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags); };

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int PageAnonExclusive(const struct page *page)
{
 ((void)(sizeof(( long)(!PageAnon(page)))));




 if (PageHuge(page))
  page = ((typeof(page))_compound_head(page));
 return ((__builtin_constant_p(PG_anon_exclusive) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? const_test_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch_test_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void SetPageAnonExclusive(struct page *page)
{
 ((void)(sizeof(( long)(!PageAnon(page) || PageKsm(page)))));
 ((void)(sizeof(( long)(PageHuge(page) && !PageHead(page)))));
 set_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void ClearPageAnonExclusive(struct page *page)
{
 ((void)(sizeof(( long)(!PageAnon(page) || PageKsm(page)))));
 ((void)(sizeof(( long)(PageHuge(page) && !PageHead(page)))));
 clear_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __ClearPageAnonExclusive(struct page *page)
{
 ((void)(sizeof(( long)(!PageAnon(page)))));
 ((void)(sizeof(( long)(PageHuge(page) && !PageHead(page)))));
 ((__builtin_constant_p(PG_anon_exclusive) && __builtin_constant_p((uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags))) ? generic___clear_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags) : arch___clear_bit(PG_anon_exclusive, &({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags));
}
# 1189 "../include/linux/page-flags.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_has_private(const struct page *page)
{
 return !!(page->flags & (1UL << PG_private | 1UL << PG_private_2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_has_private(const struct folio *folio)
{
 return page_has_private(&folio->page);
}
# 24 "../include/linux/mmzone.h" 2
# 1 "../include/linux/local_lock.h" 1




# 1 "../include/linux/local_lock_internal.h" 1
# 11 "../include/linux/local_lock_internal.h"
typedef struct {

 struct lockdep_map dep_map;
 struct task_struct *owner;

} local_lock_t;
# 27 "../include/linux/local_lock_internal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_lock_acquire(local_lock_t *l)
{
 lock_acquire(&l->dep_map, 0, 0, 0, 1, ((void *)0), ({ __label__ __here; __here: (unsigned long)&&__here; }));
 ({ int __ret = 0; if (!oops_in_progress && __builtin_expect(!!(l->owner), 0)) { do { } while(0); if (debug_locks_off() && !debug_locks_silent) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/local_lock_internal.h", 30, 9, "DEBUG_LOCKS_WARN_ON(%s)", "l->owner"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); do { } while(0); __ret = 1; } __ret; });
 l->owner = (__current_thread_info->task);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_lock_release(local_lock_t *l)
{
 ({ int __ret = 0; if (!oops_in_progress && __builtin_expect(!!(l->owner != (__current_thread_info->task)), 0)) { do { } while(0); if (debug_locks_off() && !debug_locks_silent) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/local_lock_internal.h", 36, 9, "DEBUG_LOCKS_WARN_ON(%s)", "l->owner != current"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); do { } while(0); __ret = 1; } __ret; });
 l->owner = ((void *)0);
 lock_release(&l->dep_map, ({ __label__ __here; __here: (unsigned long)&&__here; }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void local_lock_debug_init(local_lock_t *l)
{
 l->owner = ((void *)0);
}
# 6 "../include/linux/local_lock.h" 2
# 54 "../include/linux/local_lock.h"
typedef local_lock_t * class_local_lock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_local_lock_destructor(local_lock_t * *p) { local_lock_t * _T = *p; if (_T) { do { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); __asm__ __volatile__("": : :"memory"); } while (0); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) local_lock_t * class_local_lock_constructor(local_lock_t * _T) { local_lock_t * t = ({ do { __asm__ __volatile__("": : :"memory"); local_lock_acquire(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); } while (0); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_local_lock_lock_ptr(class_local_lock_t *_T) { return *_T; }


typedef local_lock_t * class_local_lock_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_local_lock_irq_destructor(local_lock_t * *p) { local_lock_t * _T = *p; if (_T) { do { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0); } while (0); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) local_lock_t * class_local_lock_irq_constructor(local_lock_t * _T) { local_lock_t * t = ({ do { do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0); local_lock_acquire(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); } while (0); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_local_lock_irq_lock_ptr(class_local_lock_irq_t *_T) { return *_T; }


typedef struct { local_lock_t *lock; unsigned long flags; } class_local_lock_irqsave_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_local_lock_irqsave_destructor(class_local_lock_irqsave_t *_T) { if (_T->lock) { do { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T->lock) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T->lock)) *)(_T->lock); }); })); do { if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(_T->flags); } while (0); } while (0); } while (0); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_local_lock_irqsave_lock_ptr(class_local_lock_irqsave_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_local_lock_irqsave_t class_local_lock_irqsave_constructor(local_lock_t *l) { class_local_lock_irqsave_t _t = { .lock = l }, *_T = &_t; do { do { do { ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _T->flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(_T->flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(_T->flags); })) trace_hardirqs_off(); } while (0); local_lock_acquire(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T->lock) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T->lock)) *)(_T->lock); }); })); } while (0); return _t; }
# 71 "../include/linux/local_lock.h"
typedef local_lock_t * class_local_lock_nested_bh_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_local_lock_nested_bh_destructor(local_lock_t * *p) { local_lock_t * _T = *p; if (_T) { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) local_lock_t * class_local_lock_nested_bh_constructor(local_lock_t * _T) { local_lock_t * t = ({ do { do { ({ bool __ret_do_once = !!((debug_locks && !({ typeof(lockdep_recursion) pscr_ret__; do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(lockdep_recursion)) { case 1: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_112(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_112(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_113(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_113(); 
} while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == 
sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_114(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_114(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed 
long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_115(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = 
(typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_115(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned 
long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; })) && (!((preempt_count() & (((1UL << (8))-1) << (0 + 8)))) || ((preempt_count() & (((1UL << (4))-1) << ((0 + 8) + 8)))) || ((preempt_count() & (((1UL << (4))-1) << (((0 + 8) + 8) + 4)))))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/local_lock.h", 72, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }); } while (0); local_lock_acquire(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((_T) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(_T)) *)(_T); }); })); } while (0); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_local_lock_nested_bh_lock_ptr(class_local_lock_nested_bh_t *_T) { return *_T; }
# 25 "../include/linux/mmzone.h" 2
# 1 "../include/linux/zswap.h" 1







struct lruvec;

extern atomic_t zswap_stored_pages;
# 42 "../include/linux/zswap.h"
struct zswap_lruvec_state {};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zswap_store(struct folio *folio)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zswap_load(struct folio *folio)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zswap_invalidate(swp_entry_t swp) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int zswap_swapon(int type, unsigned long nr_pages)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zswap_swapoff(int type) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zswap_lruvec_state_init(struct lruvec *lruvec) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zswap_folio_swapin(struct folio *folio) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zswap_is_enabled(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zswap_never_enabled(void)
{
 return true;
}
# 26 "../include/linux/mmzone.h" 2
# 48 "../include/linux/mmzone.h"
enum migratetype {
 MIGRATE_UNMOVABLE,
 MIGRATE_MOVABLE,
 MIGRATE_RECLAIMABLE,
 MIGRATE_PCPTYPES,
 MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
# 65 "../include/linux/mmzone.h"
 MIGRATE_CMA,


 MIGRATE_ISOLATE,

 MIGRATE_TYPES
};


extern const char * const migratetype_names[MIGRATE_TYPES];
# 87 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_migrate_movable(int mt)
{
 return __builtin_expect(!!((mt) == MIGRATE_CMA), 0) || mt == MIGRATE_MOVABLE;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool migratetype_is_mergeable(int mt)
{
 return mt < MIGRATE_PCPTYPES;
}





extern int page_group_by_mobility_disabled;
# 117 "../include/linux/mmzone.h"
struct free_area {
 struct list_head free_list[MIGRATE_TYPES];
 unsigned long nr_free;
};

struct pglist_data;
# 138 "../include/linux/mmzone.h"
enum zone_stat_item {

 NR_FREE_PAGES,
 NR_ZONE_LRU_BASE,
 NR_ZONE_INACTIVE_ANON = NR_ZONE_LRU_BASE,
 NR_ZONE_ACTIVE_ANON,
 NR_ZONE_INACTIVE_FILE,
 NR_ZONE_ACTIVE_FILE,
 NR_ZONE_UNEVICTABLE,
 NR_ZONE_WRITE_PENDING,
 NR_MLOCK,

 NR_BOUNCE,



 NR_FREE_CMA_PAGES,



 NR_VM_ZONE_STAT_ITEMS };

enum node_stat_item {
 NR_LRU_BASE,
 NR_INACTIVE_ANON = NR_LRU_BASE,
 NR_ACTIVE_ANON,
 NR_INACTIVE_FILE,
 NR_ACTIVE_FILE,
 NR_UNEVICTABLE,
 NR_SLAB_RECLAIMABLE_B,
 NR_SLAB_UNRECLAIMABLE_B,
 NR_ISOLATED_ANON,
 NR_ISOLATED_FILE,
 WORKINGSET_NODES,
 WORKINGSET_REFAULT_BASE,
 WORKINGSET_REFAULT_ANON = WORKINGSET_REFAULT_BASE,
 WORKINGSET_REFAULT_FILE,
 WORKINGSET_ACTIVATE_BASE,
 WORKINGSET_ACTIVATE_ANON = WORKINGSET_ACTIVATE_BASE,
 WORKINGSET_ACTIVATE_FILE,
 WORKINGSET_RESTORE_BASE,
 WORKINGSET_RESTORE_ANON = WORKINGSET_RESTORE_BASE,
 WORKINGSET_RESTORE_FILE,
 WORKINGSET_NODERECLAIM,
 NR_ANON_MAPPED,
 NR_FILE_MAPPED,

 NR_FILE_PAGES,
 NR_FILE_DIRTY,
 NR_WRITEBACK,
 NR_WRITEBACK_TEMP,
 NR_SHMEM,
 NR_SHMEM_THPS,
 NR_SHMEM_PMDMAPPED,
 NR_FILE_THPS,
 NR_FILE_PMDMAPPED,
 NR_ANON_THPS,
 NR_VMSCAN_WRITE,
 NR_VMSCAN_IMMEDIATE,
 NR_DIRTIED,
 NR_WRITTEN,
 NR_THROTTLED_WRITTEN,
 NR_KERNEL_MISC_RECLAIMABLE,
 NR_FOLL_PIN_ACQUIRED,
 NR_FOLL_PIN_RELEASED,
 NR_KERNEL_STACK_KB,



 NR_PAGETABLE,
 NR_SECONDARY_PAGETABLE,

 NR_IOMMU_PAGES,
# 220 "../include/linux/mmzone.h"
 PGDEMOTE_KSWAPD,
 PGDEMOTE_DIRECT,
 PGDEMOTE_KHUGEPAGED,
 NR_MEMMAP,
 NR_MEMMAP_BOOT,
 NR_VM_NODE_STAT_ITEMS
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool vmstat_item_print_in_thp(enum node_stat_item item)
{
 if (!0)
  return false;

 return item == NR_ANON_THPS ||
        item == NR_FILE_THPS ||
        item == NR_SHMEM_THPS ||
        item == NR_SHMEM_PMDMAPPED ||
        item == NR_FILE_PMDMAPPED;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool vmstat_item_in_bytes(int idx)
{
# 261 "../include/linux/mmzone.h"
 return (idx == NR_SLAB_RECLAIMABLE_B ||
  idx == NR_SLAB_UNRECLAIMABLE_B);
}
# 278 "../include/linux/mmzone.h"
enum lru_list {
 LRU_INACTIVE_ANON = 0,
 LRU_ACTIVE_ANON = 0 + 1,
 LRU_INACTIVE_FILE = 0 + 2,
 LRU_ACTIVE_FILE = 0 + 2 + 1,
 LRU_UNEVICTABLE,
 NR_LRU_LISTS
};

enum vmscan_throttle_state {
 VMSCAN_THROTTLE_WRITEBACK,
 VMSCAN_THROTTLE_ISOLATED,
 VMSCAN_THROTTLE_NOPROGRESS,
 VMSCAN_THROTTLE_CONGESTED,
 NR_VMSCAN_THROTTLE,
};





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_file_lru(enum lru_list lru)
{
 return (lru == LRU_INACTIVE_FILE || lru == LRU_ACTIVE_FILE);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_active_lru(enum lru_list lru)
{
 return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE);
}





enum lruvec_flags {
# 327 "../include/linux/mmzone.h"
 LRUVEC_CGROUP_CONGESTED,
 LRUVEC_NODE_CONGESTED,
};
# 388 "../include/linux/mmzone.h"
struct lruvec;
struct page_vma_mapped_walk;






enum {
 LRU_GEN_ANON,
 LRU_GEN_FILE,
};

enum {
 LRU_GEN_CORE,
 LRU_GEN_MM_WALK,
 LRU_GEN_NONLEAF_YOUNG,
 NR_LRU_GEN_CAPS
};
# 431 "../include/linux/mmzone.h"
struct lru_gen_folio {

 unsigned long max_seq;

 unsigned long min_seq[2];

 unsigned long timestamps[4U];

 struct list_head folios[4U][2][2];

 long nr_pages[4U][2][2];

 unsigned long avg_refaulted[2][4U];

 unsigned long avg_total[2][4U];

 unsigned long protected[4U][2][4U - 1];

 atomic_long_t evicted[4U][2][4U];
 atomic_long_t refaulted[4U][2][4U];

 bool enabled;

 u8 gen;

 u8 seg;

 struct hlist_nulls_node list;
};

enum {
 MM_LEAF_TOTAL,
 MM_LEAF_OLD,
 MM_LEAF_YOUNG,
 MM_NONLEAF_TOTAL,
 MM_NONLEAF_FOUND,
 MM_NONLEAF_ADDED,
 NR_MM_STATS
};




struct lru_gen_mm_state {

 unsigned long seq;

 struct list_head *head;

 struct list_head *tail;

 unsigned long *filters[2];

 unsigned long stats[4U][NR_MM_STATS];
};

struct lru_gen_mm_walk {

 struct lruvec *lruvec;

 unsigned long seq;

 unsigned long next_addr;

 int nr_pages[4U][2][2];

 int mm_stats[NR_MM_STATS];

 int batched;
 bool can_swap;
 bool force_scan;
};
# 549 "../include/linux/mmzone.h"
struct lru_gen_memcg {

 unsigned long seq;

 unsigned long nr_memcgs[3];

 struct hlist_nulls_head fifo[3][8];

 spinlock_t lock;
};

void lru_gen_init_pgdat(struct pglist_data *pgdat);
void lru_gen_init_lruvec(struct lruvec *lruvec);
void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);

void lru_gen_init_memcg(struct mem_cgroup *memcg);
void lru_gen_exit_memcg(struct mem_cgroup *memcg);
void lru_gen_online_memcg(struct mem_cgroup *memcg);
void lru_gen_offline_memcg(struct mem_cgroup *memcg);
void lru_gen_release_memcg(struct mem_cgroup *memcg);
void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid);
# 611 "../include/linux/mmzone.h"
struct lruvec {
 struct list_head lists[NR_LRU_LISTS];

 spinlock_t lru_lock;





 unsigned long anon_cost;
 unsigned long file_cost;

 atomic_long_t nonresident_age;

 unsigned long refaults[2];

 unsigned long flags;


 struct lru_gen_folio lrugen;
# 639 "../include/linux/mmzone.h"
 struct zswap_lruvec_state zswap_lruvec_state;
};







typedef unsigned isolate_mode_t;

enum zone_watermarks {
 WMARK_MIN,
 WMARK_LOW,
 WMARK_HIGH,
 WMARK_PROMO,
 NR_WMARK
};
# 691 "../include/linux/mmzone.h"
struct per_cpu_pages {
 spinlock_t lock;
 int count;
 int high;
 int high_min;
 int high_max;
 int batch;
 u8 flags;
 u8 alloc_factor;



 short free_count;


 struct list_head lists[((MIGRATE_PCPTYPES * (3 + 1)) + 0)];
} ;

struct per_cpu_zonestat {
# 722 "../include/linux/mmzone.h"
};

struct per_cpu_nodestat {
 s8 stat_threshold;
 s8 vm_node_stat_diff[NR_VM_NODE_STAT_ITEMS];
};



enum zone_type {
# 753 "../include/linux/mmzone.h"
 ZONE_NORMAL,
# 814 "../include/linux/mmzone.h"
 ZONE_MOVABLE,



 __MAX_NR_ZONES

};





struct zone {



 unsigned long _watermark[NR_WMARK];
 unsigned long watermark_boost;

 unsigned long nr_reserved_highatomic;
# 844 "../include/linux/mmzone.h"
 long lowmem_reserve[2];




 struct pglist_data *zone_pgdat;
 struct per_cpu_pages *per_cpu_pageset;
 struct per_cpu_zonestat *per_cpu_zonestats;




 int pageset_high_min;
 int pageset_high_max;
 int pageset_batch;






 unsigned long *pageblock_flags;



 unsigned long zone_start_pfn;
# 913 "../include/linux/mmzone.h"
 atomic_long_t managed_pages;
 unsigned long spanned_pages;
 unsigned long present_pages;




 unsigned long cma_pages;


 const char *name;







 unsigned long nr_isolate_pageblock;







 int initialized;


                          ;


 struct free_area free_area[(10 + 1)];







 unsigned long flags;


 spinlock_t lock;


                          ;






 unsigned long percpu_drift_mark;



 unsigned long compact_cached_free_pfn;

 unsigned long compact_cached_migrate_pfn[2];
 unsigned long compact_init_migrate_pfn;
 unsigned long compact_init_free_pfn;
# 984 "../include/linux/mmzone.h"
 unsigned int compact_considered;
 unsigned int compact_defer_shift;
 int compact_order_failed;




 bool compact_blockskip_flush;


 bool contiguous;

                          ;

 atomic_long_t vm_stat[NR_VM_ZONE_STAT_ITEMS];
 atomic_long_t vm_numa_event[0];
} ;

enum pgdat_flags {
 PGDAT_DIRTY,



 PGDAT_WRITEBACK,


 PGDAT_RECLAIM_LOCKED,
};

enum zone_flags {
 ZONE_BOOSTED_WATERMARK,


 ZONE_RECLAIM_ACTIVE,
 ZONE_BELOW_HIGH,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long zone_managed_pages(struct zone *zone)
{
 return (unsigned long)atomic_long_read(&zone->managed_pages);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long zone_cma_pages(struct zone *zone)
{

 return zone->cma_pages;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long zone_end_pfn(const struct zone *zone)
{
 return zone->zone_start_pfn + zone->spanned_pages;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
 return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_is_initialized(struct zone *zone)
{
 return zone->initialized;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_is_empty(struct zone *zone)
{
 return zone->spanned_pages == 0;
}
# 1101 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum zone_type page_zonenum(const struct page *page)
{
 do { kcsan_set_access_mask(((1UL << 1) - 1) << (((((sizeof(unsigned long)*8) - 0) - 0) - 1) * (1 != 0))); __kcsan_check_access(&(page->flags), sizeof(page->flags), (1 << 3)); kcsan_set_access_mask(0); kcsan_atomic_next(1); } while (0);
 return (page->flags >> (((((sizeof(unsigned long)*8) - 0) - 0) - 1) * (1 != 0))) & ((1UL << 1) - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum zone_type folio_zonenum(const struct folio *folio)
{
 return page_zonenum(&folio->page);
}
# 1139 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_zone_device_page(const struct page *page)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_device_pages_have_same_pgmap(const struct page *a,
           const struct page *b)
{
 return true;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_zone_device(const struct folio *folio)
{
 return is_zone_device_page(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_zone_movable_page(const struct page *page)
{
 return page_zonenum(page) == ZONE_MOVABLE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_zone_movable(const struct folio *folio)
{
 return folio_zonenum(folio) == ZONE_MOVABLE;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_intersects(struct zone *zone,
  unsigned long start_pfn, unsigned long nr_pages)
{
 if (zone_is_empty(zone))
  return false;
 if (start_pfn >= zone_end_pfn(zone) ||
     start_pfn + nr_pages <= zone->zone_start_pfn)
  return false;

 return true;
}
# 1192 "../include/linux/mmzone.h"
enum {
 ZONELIST_FALLBACK,







 MAX_ZONELISTS
};





struct zoneref {
 struct zone *zone;
 int zone_idx;
};
# 1227 "../include/linux/mmzone.h"
struct zonelist {
 struct zoneref _zonerefs[((1 << 0) * 2) + 1];
};






extern struct page *mem_map;
# 1279 "../include/linux/mmzone.h"
typedef struct pglist_data {





 struct zone node_zones[2];






 struct zonelist node_zonelists[MAX_ZONELISTS];

 int nr_zones;

 struct page *node_mem_map;

 struct page_ext *node_page_ext;
# 1316 "../include/linux/mmzone.h"
 unsigned long node_start_pfn;
 unsigned long node_present_pages;
 unsigned long node_spanned_pages;

 int node_id;
 wait_queue_head_t kswapd_wait;
 wait_queue_head_t pfmemalloc_wait;


 wait_queue_head_t reclaim_wait[NR_VMSCAN_THROTTLE];

 atomic_t nr_writeback_throttled;
 unsigned long nr_reclaim_start;




 struct task_struct *kswapd;
 int kswapd_order;
 enum zone_type kswapd_highest_zoneidx;

 int kswapd_failures;


 int kcompactd_max_order;
 enum zone_type kcompactd_highest_zoneidx;
 wait_queue_head_t kcompactd_wait;
 struct task_struct *kcompactd;
 bool proactive_compact_trigger;





 unsigned long totalreserve_pages;
# 1361 "../include/linux/mmzone.h"
                          ;
# 1397 "../include/linux/mmzone.h"
 struct lruvec __lruvec;

 unsigned long flags;



 struct lru_gen_mm_walk mm_walk;

 struct lru_gen_memcg memcg_lru;


                          ;


 struct per_cpu_nodestat *per_cpu_nodestats;
 atomic_long_t vm_stat[NR_VM_NODE_STAT_ITEMS];






} pg_data_t;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pgdat_end_pfn(pg_data_t *pgdat)
{
 return pgdat->node_start_pfn + pgdat->node_spanned_pages;
}

# 1 "../include/linux/memory_hotplug.h" 1




# 1 "../include/linux/mmzone.h" 1
# 6 "../include/linux/memory_hotplug.h" 2

# 1 "../include/linux/notifier.h" 1
# 16 "../include/linux/notifier.h"
# 1 "../include/linux/srcu.h" 1
# 22 "../include/linux/srcu.h"
# 1 "../include/linux/rcu_segcblist.h" 1
# 21 "../include/linux/rcu_segcblist.h"
struct rcu_cblist {
 struct callback_head *head;
 struct callback_head **tail;
 long len;
};
# 194 "../include/linux/rcu_segcblist.h"
struct rcu_segcblist {
 struct callback_head *head;
 struct callback_head **tails[4];
 unsigned long gp_seq[4];



 long len;

 long seglen[4];
 u8 flags;
};
# 23 "../include/linux/srcu.h" 2

struct srcu_struct;



int __init_srcu_struct(struct srcu_struct *ssp, const char *name,
         struct lock_class_key *key);
# 47 "../include/linux/srcu.h"
# 1 "../include/linux/srcutiny.h" 1
# 16 "../include/linux/srcutiny.h"
struct srcu_struct {
 short srcu_lock_nesting[2];
 u8 srcu_gp_running;
 u8 srcu_gp_waiting;
 unsigned long srcu_idx;
 unsigned long srcu_idx_max;
 struct swait_queue_head srcu_wq;

 struct callback_head *srcu_cb_head;
 struct callback_head **srcu_cb_tail;
 struct work_struct srcu_work;

 struct lockdep_map dep_map;

};

void srcu_drive_gp(struct work_struct *wp);
# 52 "../include/linux/srcutiny.h"
struct srcu_usage { };


void synchronize_srcu(struct srcu_struct *ssp);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __srcu_read_lock(struct srcu_struct *ssp)
{
 int idx;

 __asm__ __volatile__("": : :"memory");
 idx = ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_116(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_idx) == sizeof(char) || sizeof(ssp->srcu_idx) == sizeof(short) || sizeof(ssp->srcu_idx) == sizeof(int) || sizeof(ssp->srcu_idx) == sizeof(long)) || sizeof(ssp->srcu_idx) == sizeof(long long))) __compiletime_assert_116(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_idx), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_idx))) *)&(ssp->srcu_idx)); }) + 1) & 0x2) >> 1;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_118(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(char) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(short) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(int) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long)) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long long))) __compiletime_assert_118(); } while (0); do { *(volatile typeof(ssp->srcu_lock_nesting[idx]) *)&(ssp->srcu_lock_nesting[idx]) = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_117(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(char) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(short) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(int) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long)) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long long))) __compiletime_assert_117(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_lock_nesting[idx]), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_lock_nesting[idx]))) *)&(ssp->srcu_lock_nesting[idx])); }) + 1); } while (0); } while (0);
 __asm__ __volatile__("": : :"memory");
 return idx;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void synchronize_srcu_expedited(struct srcu_struct *ssp)
{
 synchronize_srcu(ssp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_barrier(struct srcu_struct *ssp)
{
 synchronize_srcu(ssp);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_torture_stats_print(struct srcu_struct *ssp,
         char *tt, char *tf)
{
 int idx;

 idx = ((({ __kcsan_disable_current(); __auto_type __v = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_119(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_idx) == sizeof(char) || sizeof(ssp->srcu_idx) == sizeof(short) || sizeof(ssp->srcu_idx) == sizeof(int) || sizeof(ssp->srcu_idx) == sizeof(long)) || sizeof(ssp->srcu_idx) == sizeof(long long))) __compiletime_assert_119(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_idx), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_idx))) *)&(ssp->srcu_idx)); })); __kcsan_enable_current(); __v; }) + 1) & 0x2) >> 1;
 ({ do {} while (0); _printk("\001" "1" "%s%s Tiny SRCU per-CPU(idx=%d): (%hd,%hd) gp: %lu->%lu\n", tt, tf, idx, ({ __kcsan_disable_current(); __auto_type __v = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_120(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_lock_nesting[!idx]) == sizeof(char) || sizeof(ssp->srcu_lock_nesting[!idx]) == sizeof(short) || sizeof(ssp->srcu_lock_nesting[!idx]) == sizeof(int) || sizeof(ssp->srcu_lock_nesting[!idx]) == sizeof(long)) || sizeof(ssp->srcu_lock_nesting[!idx]) == sizeof(long long))) __compiletime_assert_120(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_lock_nesting[!idx]), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_lock_nesting[!idx]))) *)&(ssp->srcu_lock_nesting[!idx])); })); __kcsan_enable_current(); __v; }), ({ __kcsan_disable_current(); __auto_type __v = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_121(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(char) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(short) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(int) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long)) || sizeof(ssp->srcu_lock_nesting[idx]) == sizeof(long long))) __compiletime_assert_121(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_lock_nesting[idx]), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_lock_nesting[idx]))) *)&(ssp->srcu_lock_nesting[idx])); })); __kcsan_enable_current(); __v; }), ({ __kcsan_disable_current(); __auto_type __v = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_122(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_idx) == sizeof(char) || sizeof(ssp->srcu_idx) == sizeof(short) || sizeof(ssp->srcu_idx) == sizeof(int) || sizeof(ssp->srcu_idx) == sizeof(long)) || sizeof(ssp->srcu_idx) == sizeof(long long))) __compiletime_assert_122(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_idx), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_idx))) *)&(ssp->srcu_idx)); })); __kcsan_enable_current(); __v; }), ({ __kcsan_disable_current(); __auto_type __v = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_123(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ssp->srcu_idx_max) == sizeof(char) || sizeof(ssp->srcu_idx_max) == sizeof(short) || sizeof(ssp->srcu_idx_max) == sizeof(int) || sizeof(ssp->srcu_idx_max) == sizeof(long)) || sizeof(ssp->srcu_idx_max) == sizeof(long long))) __compiletime_assert_123(); } while (0); (*(const volatile typeof( _Generic((ssp->srcu_idx_max), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ssp->srcu_idx_max))) *)&(ssp->srcu_idx_max)); })); __kcsan_enable_current(); __v; })); });





}
# 48 "../include/linux/srcu.h" 2






void call_srcu(struct srcu_struct *ssp, struct callback_head *head,
  void (*func)(struct callback_head *head));
void cleanup_srcu_struct(struct srcu_struct *ssp);
int __srcu_read_lock(struct srcu_struct *ssp) ;
void __srcu_read_unlock(struct srcu_struct *ssp, int idx) ;
void synchronize_srcu(struct srcu_struct *ssp);
# 69 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_completed_synchronize_srcu(void)
{
 return 0x1;
}

unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
# 94 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned long oldstate2)
{
 return oldstate1 == oldstate2;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
{
 return __srcu_read_lock(ssp);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
{
 __srcu_read_unlock(ssp, idx);
}


void srcu_init(void);
# 133 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int srcu_read_lock_held(const struct srcu_struct *ssp)
{
 if (!debug_lockdep_rcu_enabled())
  return 1;
 return lock_is_held(&ssp->dep_map);
}
# 149 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_lock_acquire(struct lockdep_map *map)
{
 lock_acquire(map, 0, 0, 2, 1, ((void *)0), ({ __label__ __here; __here: (unsigned long)&&__here; }));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_lock_release(struct lockdep_map *map)
{
 lock_release(map, ({ __label__ __here; __here: (unsigned long)&&__here; }));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_lock_sync(struct lockdep_map *map)
{
 lock_sync(map, 0, 0, 1, ((void *)0), ({ __label__ __here; __here: (unsigned long)&&__here; }));
}
# 186 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_check_nmi_safety(struct srcu_struct *ssp,
      bool nmi_safe) { }
# 244 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int srcu_read_lock(struct srcu_struct *ssp)
{
 int retval;

 srcu_check_nmi_safety(ssp, false);
 retval = __srcu_read_lock(ssp);
 srcu_lock_acquire(&ssp->dep_map);
 return retval;
}
# 261 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
{
 int retval;

 srcu_check_nmi_safety(ssp, true);
 retval = __srcu_read_lock_nmisafe(ssp);
 rcu_try_lock_acquire(&ssp->dep_map);
 return retval;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) int
srcu_read_lock_notrace(struct srcu_struct *ssp)
{
 int retval;

 srcu_check_nmi_safety(ssp, false);
 retval = __srcu_read_lock(ssp);
 return retval;
}
# 303 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int srcu_down_read(struct srcu_struct *ssp)
{
 ({ bool __ret_do_once = !!(((preempt_count() & (((1UL << (4))-1) << (((0 + 8) + 8) + 4))))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/srcu.h", 305, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 srcu_check_nmi_safety(ssp, false);
 return __srcu_read_lock(ssp);
}
# 317 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_read_unlock(struct srcu_struct *ssp, int idx)

{
 ({ bool __ret_do_once = !!(idx & ~0x1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/srcu.h", 320, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 srcu_check_nmi_safety(ssp, false);
 srcu_lock_release(&ssp->dep_map);
 __srcu_read_unlock(ssp, idx);
}
# 333 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)

{
 ({ bool __ret_do_once = !!(idx & ~0x1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/srcu.h", 336, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 srcu_check_nmi_safety(ssp, true);
 rcu_lock_release(&ssp->dep_map);
 __srcu_read_unlock_nmisafe(ssp, idx);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) void
srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx)
{
 srcu_check_nmi_safety(ssp, false);
 __srcu_read_unlock(ssp, idx);
}
# 358 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void srcu_up_read(struct srcu_struct *ssp, int idx)

{
 ({ bool __ret_do_once = !!(idx & ~0x1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/srcu.h", 361, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 ({ bool __ret_do_once = !!(((preempt_count() & (((1UL << (4))-1) << (((0 + 8) + 8) + 4))))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/srcu.h", 362, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 srcu_check_nmi_safety(ssp, false);
 __srcu_read_unlock(ssp, idx);
}
# 376 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_mb__after_srcu_read_unlock(void)
{

}
# 390 "../include/linux/srcu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_mb__after_srcu_read_lock(void)
{

}

typedef struct { struct srcu_struct *lock; int idx; } class_srcu_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_srcu_destructor(class_srcu_t *_T) { if (_T->lock) { srcu_read_unlock(_T->lock, _T->idx); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_srcu_lock_ptr(class_srcu_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_srcu_t class_srcu_constructor(struct srcu_struct *l) { class_srcu_t _t = { .lock = l }, *_T = &_t; _T->idx = srcu_read_lock(_T->lock); return _t; }
# 17 "../include/linux/notifier.h" 2
# 49 "../include/linux/notifier.h"
struct notifier_block;

typedef int (*notifier_fn_t)(struct notifier_block *nb,
   unsigned long action, void *data);

struct notifier_block {
 notifier_fn_t notifier_call;
 struct notifier_block *next;
 int priority;
};

struct atomic_notifier_head {
 spinlock_t lock;
 struct notifier_block *head;
};

struct blocking_notifier_head {
 struct rw_semaphore rwsem;
 struct notifier_block *head;
};

struct raw_notifier_head {
 struct notifier_block *head;
};

struct srcu_notifier_head {
 struct mutex mutex;
 struct srcu_usage srcuu;
 struct srcu_struct srcu;
 struct notifier_block *head;
};
# 94 "../include/linux/notifier.h"
extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
# 146 "../include/linux/notifier.h"
extern int atomic_notifier_chain_register(struct atomic_notifier_head *nh,
  struct notifier_block *nb);
extern int blocking_notifier_chain_register(struct blocking_notifier_head *nh,
  struct notifier_block *nb);
extern int raw_notifier_chain_register(struct raw_notifier_head *nh,
  struct notifier_block *nb);
extern int srcu_notifier_chain_register(struct srcu_notifier_head *nh,
  struct notifier_block *nb);

extern int atomic_notifier_chain_register_unique_prio(
  struct atomic_notifier_head *nh, struct notifier_block *nb);
extern int blocking_notifier_chain_register_unique_prio(
  struct blocking_notifier_head *nh, struct notifier_block *nb);

extern int atomic_notifier_chain_unregister(struct atomic_notifier_head *nh,
  struct notifier_block *nb);
extern int blocking_notifier_chain_unregister(struct blocking_notifier_head *nh,
  struct notifier_block *nb);
extern int raw_notifier_chain_unregister(struct raw_notifier_head *nh,
  struct notifier_block *nb);
extern int srcu_notifier_chain_unregister(struct srcu_notifier_head *nh,
  struct notifier_block *nb);

extern int atomic_notifier_call_chain(struct atomic_notifier_head *nh,
  unsigned long val, void *v);
extern int blocking_notifier_call_chain(struct blocking_notifier_head *nh,
  unsigned long val, void *v);
extern int raw_notifier_call_chain(struct raw_notifier_head *nh,
  unsigned long val, void *v);
extern int srcu_notifier_call_chain(struct srcu_notifier_head *nh,
  unsigned long val, void *v);

extern int blocking_notifier_call_chain_robust(struct blocking_notifier_head *nh,
  unsigned long val_up, unsigned long val_down, void *v);
extern int raw_notifier_call_chain_robust(struct raw_notifier_head *nh,
  unsigned long val_up, unsigned long val_down, void *v);

extern bool atomic_notifier_call_chain_is_empty(struct atomic_notifier_head *nh);
# 196 "../include/linux/notifier.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int notifier_from_errno(int err)
{
 if (err)
  return 0x8000 | (0x0001 - err);

 return 0x0001;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int notifier_to_errno(int ret)
{
 ret &= ~0x8000;
 return ret > 0x0001 ? 0x0001 - ret : 0;
}
# 240 "../include/linux/notifier.h"
extern struct blocking_notifier_head reboot_notifier_list;
# 8 "../include/linux/memory_hotplug.h" 2


struct page;
struct zone;
struct pglist_data;
struct mem_section;
struct memory_group;
struct resource;
struct vmem_altmap;
struct dev_pagemap;
# 56 "../include/linux/memory_hotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pg_data_t *generic_alloc_nodedata(int nid)
{
 do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/memory_hotplug.h", 58, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
{
}
# 254 "../include/linux/memory_hotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned zone_span_seqbegin(struct zone *zone)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int zone_span_seqretry(struct zone *zone, unsigned iv)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_span_writelock(struct zone *zone) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_span_writeunlock(struct zone *zone) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_seqlock_init(struct zone *zone) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int try_online_node(int nid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void get_online_mems(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_online_mems(void) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_hotplug_begin(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_hotplug_done(void) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool movable_node_is_enabled(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mhp_supports_memmap_on_memory(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_kswapd_lock(pg_data_t *pgdat) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_kswapd_unlock(pg_data_t *pgdat) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_kswapd_lock_init(pg_data_t *pgdat) {}







struct range arch_get_mappable_range(void);
# 322 "../include/linux/memory_hotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_resize_lock(struct pglist_data *p, unsigned long *f) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_resize_unlock(struct pglist_data *p, unsigned long *f) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgdat_resize_init(struct pglist_data *pgdat) {}
# 337 "../include/linux/memory_hotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void try_offline_node(int nid) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
    struct zone *zone, struct memory_group *group)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int remove_memory(u64 start, u64 size)
{
 return -16;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __remove_memory(u64 start, u64 size) {}
# 1433 "../include/linux/mmzone.h" 2

void build_all_zonelists(pg_data_t *pgdat);
void wakeup_kswapd(struct zone *zone, gfp_t gfp_mask, int order,
     enum zone_type highest_zoneidx);
bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
    int highest_zoneidx, unsigned int alloc_flags,
    long free_pages);
bool zone_watermark_ok(struct zone *z, unsigned int order,
  unsigned long mark, int highest_zoneidx,
  unsigned int alloc_flags);
bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
  unsigned long mark, int highest_zoneidx);




enum meminit_context {
 MEMINIT_EARLY,
 MEMINIT_HOTPLUG,
};

extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
         unsigned long size);

extern void lruvec_init(struct lruvec *lruvec);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pglist_data *lruvec_pgdat(struct lruvec *lruvec)
{



 return ({ void *__mptr = (void *)(lruvec); _Static_assert(__builtin_types_compatible_p(typeof(*(lruvec)), typeof(((struct pglist_data *)0)->__lruvec)) || __builtin_types_compatible_p(typeof(*(lruvec)), typeof(void)), "pointer type mismatch in container_of()"); ((struct pglist_data *)(__mptr - __builtin_offsetof(struct pglist_data, __lruvec))); });

}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int local_memory_node(int node_id) { return node_id; };
# 1485 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool zone_is_zone_device(struct zone *zone)
{
 return false;
}
# 1497 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool managed_zone(struct zone *zone)
{
 return zone_managed_pages(zone);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool populated_zone(struct zone *zone)
{
 return zone->present_pages;
}
# 1519 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int zone_to_nid(struct zone *zone)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_set_nid(struct zone *zone, int nid) {}


extern int movable_zone;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_highmem_idx(enum zone_type idx)
{




 return 0;

}
# 1546 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_highmem(struct zone *zone)
{
 return is_highmem_idx(((zone) - (zone)->zone_pgdat->node_zones));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool has_managed_dma(void)
{
 return false;
}





extern struct pglist_data contig_page_data;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pglist_data *NODE_DATA(int nid)
{
 return &contig_page_data;
}







extern struct pglist_data *first_online_pgdat(void);
extern struct pglist_data *next_online_pgdat(struct pglist_data *pgdat);
extern struct zone *next_zone(struct zone *zone);
# 1607 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct zone *zonelist_zone(struct zoneref *zoneref)
{
 return zoneref->zone;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int zonelist_zone_idx(struct zoneref *zoneref)
{
 return zoneref->zone_idx;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int zonelist_node_idx(struct zoneref *zoneref)
{
 return zone_to_nid(zoneref->zone);
}

struct zoneref *__next_zones_zonelist(struct zoneref *z,
     enum zone_type highest_zoneidx,
     nodemask_t *nodes);
# 1641 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct zoneref *next_zones_zonelist(struct zoneref *z,
     enum zone_type highest_zoneidx,
     nodemask_t *nodes)
{
 if (__builtin_expect(!!(!nodes && zonelist_zone_idx(z) <= highest_zoneidx), 1))
  return z;
 return __next_zones_zonelist(z, highest_zoneidx, nodes);
}
# 1667 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
     enum zone_type highest_zoneidx,
     nodemask_t *nodes)
{
 return next_zones_zonelist(zonelist->_zonerefs,
       highest_zoneidx, nodes);
}
# 1712 "../include/linux/mmzone.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool movable_only_nodes(nodemask_t *nodes)
{
 struct zonelist *zonelist;
 struct zoneref *z;
 int nid;

 if (__nodes_empty(&(*nodes), (1 << 0)))
  return false;






 nid = __first_node(&(*nodes));
 zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK];
 z = first_zones_zonelist(zonelist, ZONE_NORMAL, nodes);
 return (!z->zone) ? true : false;
}
# 8 "../include/linux/gfp.h" 2
# 1 "../include/linux/topology.h" 1
# 30 "../include/linux/topology.h"
# 1 "../include/linux/arch_topology.h" 1
# 11 "../include/linux/arch_topology.h"
void topology_normalize_cpu_scale(void);
int topology_update_cpu_topology(void);





struct device_node;
bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_cpu_scale; extern __attribute__((section(".data" ""))) __typeof__(unsigned long) cpu_scale;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long topology_get_cpu_scale(int cpu)
{
 return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(cpu_scale)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(cpu_scale))) *)(&(cpu_scale)); }); }));
}

void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity);

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_capacity_freq_ref; extern __attribute__((section(".data" ""))) __typeof__(unsigned long) capacity_freq_ref;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long topology_get_freq_ref(int cpu)
{
 return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(capacity_freq_ref)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(capacity_freq_ref))) *)(&(capacity_freq_ref)); }); }));
}

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_arch_freq_scale; extern __attribute__((section(".data" ""))) __typeof__(unsigned long) arch_freq_scale;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long topology_get_freq_scale(int cpu)
{
 return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(arch_freq_scale)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(arch_freq_scale))) *)(&(arch_freq_scale)); }); }));
}

void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
        unsigned long max_freq);
bool topology_scale_freq_invariant(void);

enum scale_freq_source {
 SCALE_FREQ_SOURCE_CPUFREQ = 0,
 SCALE_FREQ_SOURCE_ARCH,
 SCALE_FREQ_SOURCE_CPPC,
};

struct scale_freq_data {
 enum scale_freq_source source;
 void (*set_freq_scale)(void);
};

void topology_scale_freq_tick(void);
void topology_set_scale_freq_source(struct scale_freq_data *data, const struct cpumask *cpus);
void topology_clear_scale_freq_source(enum scale_freq_source source, const struct cpumask *cpus);

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_hw_pressure; extern __attribute__((section(".data" ""))) __typeof__(unsigned long) hw_pressure;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long topology_get_hw_pressure(int cpu)
{
 return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(hw_pressure)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hw_pressure))) *)(&(hw_pressure)); }); }));
}

void topology_update_hw_pressure(const struct cpumask *cpus,
          unsigned long capped_freq);

struct cpu_topology {
 int thread_id;
 int core_id;
 int cluster_id;
 int package_id;
 cpumask_t thread_sibling;
 cpumask_t core_sibling;
 cpumask_t cluster_sibling;
 cpumask_t llc_sibling;
};
# 31 "../include/linux/topology.h" 2





# 1 "./arch/hexagon/include/generated/asm/topology.h" 1
# 1 "../include/asm-generic/topology.h" 1
# 2 "./arch/hexagon/include/generated/asm/topology.h" 2
# 37 "../include/linux/topology.h" 2
# 46 "../include/linux/topology.h"
int arch_update_cpu_topology(void);
# 76 "../include/linux/topology.h"
extern int node_reclaim_distance;
# 118 "../include/linux/topology.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int numa_node_id(void)
{
 return ((void)(0),0);
}
# 168 "../include/linux/topology.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int numa_mem_id(void)
{
 return numa_node_id();
}
# 243 "../include/linux/topology.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cpumask *cpu_cpu_mask(int cpu)
{
 return ((void)(((void)(cpu),0)), ((const struct cpumask *)&__cpu_online_mask));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
{
 return cpumask_nth_and(cpu, cpus, ((const struct cpumask *)&__cpu_online_mask));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cpumask *
sched_numa_hop_mask(unsigned int node, unsigned int hops)
{
 return ERR_PTR(-95);
}
# 9 "../include/linux/gfp.h" 2



struct vm_area_struct;
struct mempolicy;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int gfp_migratetype(const gfp_t gfp_flags)
{
 (void)({ int __ret_warn_on = !!((gfp_flags & ((( gfp_t)((((1UL))) << (___GFP_RECLAIMABLE_BIT)))|(( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT))))) == ((( gfp_t)((((1UL))) << (___GFP_RECLAIMABLE_BIT)))|(( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT))))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/gfp.h", 21, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_124(void) __attribute__((__error__("BUILD_BUG_ON failed: " "(1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE"))); if (!(!((1UL << 3) != ((((1UL))) << (___GFP_MOVABLE_BIT))))) __compiletime_assert_124(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_125(void) __attribute__((__error__("BUILD_BUG_ON failed: " "(___GFP_MOVABLE >> GFP_MOVABLE_SHIFT) != MIGRATE_MOVABLE"))); if (!(!((((((1UL))) << (___GFP_MOVABLE_BIT)) >> 3) != MIGRATE_MOVABLE))) __compiletime_assert_125(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_126(void) __attribute__((__error__("BUILD_BUG_ON failed: " "(___GFP_RECLAIMABLE >> GFP_MOVABLE_SHIFT) != MIGRATE_RECLAIMABLE"))); if (!(!((((((1UL))) << (___GFP_RECLAIMABLE_BIT)) >> 3) != MIGRATE_RECLAIMABLE))) __compiletime_assert_126(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_127(void) __attribute__((__error__("BUILD_BUG_ON failed: " "((___GFP_MOVABLE | ___GFP_RECLAIMABLE) >> GFP_MOVABLE_SHIFT) != MIGRATE_HIGHATOMIC"))); if (!(!(((((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_RECLAIMABLE_BIT))) >> 3) != MIGRATE_HIGHATOMIC))) __compiletime_assert_127(); } while (0);


 if (__builtin_expect(!!(page_group_by_mobility_disabled), 0))
  return MIGRATE_UNMOVABLE;


 return ( unsigned long)(gfp_flags & ((( gfp_t)((((1UL))) << (___GFP_RECLAIMABLE_BIT)))|(( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT))))) >> 3;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
 return !!(gfp_flags & (( gfp_t)((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))));
}
# 132 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum zone_type gfp_zone(gfp_t flags)
{
 enum zone_type z;
 int bit = ( int) (flags & ((( gfp_t)((((1UL))) << (___GFP_DMA_BIT)))|(( gfp_t)((((1UL))) << (___GFP_HIGHMEM_BIT)))|(( gfp_t)((((1UL))) << (___GFP_DMA32_BIT)))|(( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT)))));

 z = (( (ZONE_NORMAL << 0 * 1) | (ZONE_NORMAL << ((((1UL))) << (___GFP_DMA_BIT)) * 1) | (ZONE_NORMAL << ((((1UL))) << (___GFP_HIGHMEM_BIT)) * 1) | (ZONE_NORMAL << ((((1UL))) << (___GFP_DMA32_BIT)) * 1) | (ZONE_NORMAL << ((((1UL))) << (___GFP_MOVABLE_BIT)) * 1) | (ZONE_NORMAL << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_DMA_BIT))) * 1) | (ZONE_MOVABLE << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) * 1) | (ZONE_NORMAL << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT))) * 1)) >> (bit * 1)) &
      ((1 << 1) - 1);
 do { if (__builtin_expect(!!((( 1 << (((((1UL))) << (___GFP_DMA_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) | 1 << (((((1UL))) << (___GFP_DMA_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT))) | 1 << (((((1UL))) << (___GFP_DMA32_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) | 1 << (((((1UL))) << (___GFP_DMA_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) | 1 << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT)) | ((((1UL))) << (___GFP_DMA_BIT))) | 1 << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT)) | ((((1UL))) << (___GFP_DMA_BIT))) | 1 << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) | 1 << (((((1UL))) << (___GFP_MOVABLE_BIT)) | ((((1UL))) << (___GFP_DMA32_BIT)) | ((((1UL))) << (___GFP_DMA_BIT)) | ((((1UL))) << (___GFP_HIGHMEM_BIT))) ) >> bit) & 1), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/gfp.h", 139, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 return z;
}
# 150 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int gfp_zonelist(gfp_t flags)
{




 return ZONELIST_FALLBACK;
}
# 178 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t gfp_nested_mask(gfp_t flags)
{
 return ((flags & (((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT)))) | ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_NOLOCKDEP_BIT))))) |
  ((( gfp_t)((((1UL))) << (___GFP_NORETRY_BIT))) | (( gfp_t)((((1UL))) << (___GFP_NOMEMALLOC_BIT))) | (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)))));
}
# 193 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct zonelist *node_zonelist(int nid, gfp_t flags)
{
 return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_free_page(struct page *page, int order) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_alloc_page(struct page *page, int order) { }


struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
  nodemask_t *nodemask);


struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
  nodemask_t *nodemask);


unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
    nodemask_t *nodemask, int nr_pages,
    struct list_head *page_list,
    struct page **page_array);


unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
    unsigned long nr_pages,
    struct page **page_array);
# 232 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
       struct page **page_array)
{
 if (nid == (-1))
  nid = numa_mem_id();

 return alloc_pages_bulk_noprof(gfp, nid, ((void *)0), nr_pages, ((void *)0), page_array);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void warn_if_node_offline(int this_node, gfp_t gfp_mask)
{
 gfp_t warn_gfp = gfp_mask & ((( gfp_t)((((1UL))) << (___GFP_THISNODE_BIT)))|(( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT))));

 if (warn_gfp != ((( gfp_t)((((1UL))) << (___GFP_THISNODE_BIT)))|(( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)))))
  return;

 if (node_state((this_node), N_ONLINE))
  return;

 ({ do {} while (0); _printk("\001" "4" "%pGg allocation from offline node %d\n", &gfp_mask, this_node); });
 dump_stack();
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *
__alloc_pages_node_noprof(int nid, gfp_t gfp_mask, unsigned int order)
{
 do { if (__builtin_expect(!!(nid < 0 || nid >= (1 << 0)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/gfp.h", 266, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 warn_if_node_offline(nid, gfp_mask);

 return __alloc_pages_noprof(gfp_mask, order, nid, ((void *)0));
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
{
 do { if (__builtin_expect(!!(nid < 0 || nid >= (1 << 0)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/gfp.h", 277, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 warn_if_node_offline(nid, gfp);

 return __folio_alloc_noprof(gfp, order, nid, ((void *)0));
}
# 290 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
         unsigned int order)
{
 if (nid == (-1))
  nid = numa_mem_id();

 return __alloc_pages_node_noprof(nid, gfp_mask, order);
}
# 311 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
{
 return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
  struct mempolicy *mpol, unsigned long ilx, int nid)
{
 return alloc_pages_noprof(gfp, order);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
{
 return ({ ; ({ struct alloc_tag * __attribute__((__unused__)) _old = ((void *)0); typeof(__folio_alloc_node_noprof(gfp, order, numa_node_id())) _res = __folio_alloc_node_noprof(gfp, order, numa_node_id()); do {} while (0); _res; }); });
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
  struct mempolicy *mpol, unsigned long ilx, int nid)
{
 return folio_alloc_noprof(gfp, order);
}
# 341 "../include/linux/gfp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *alloc_page_vma_noprof(gfp_t gfp,
  struct vm_area_struct *vma, unsigned long addr)
{
 struct folio *folio = folio_alloc_noprof(gfp, 0);

 return &folio->page;
}


extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);


extern unsigned long get_zeroed_page_noprof(gfp_t gfp_mask);


void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));


void free_pages_exact(void *virt, size_t size);

__attribute__((__section__(".init.text"))) __attribute__((__cold__)) void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask) __attribute__((__alloc_size__(2))) __attribute__((__malloc__));
# 371 "../include/linux/gfp.h"
extern void __free_pages(struct page *page, unsigned int order);
extern void free_pages(unsigned long addr, unsigned int order);

struct page_frag_cache;
void page_frag_cache_drain(struct page_frag_cache *nc);
extern void __page_frag_cache_drain(struct page *page, unsigned int count);
void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
         gfp_t gfp_mask, unsigned int align_mask);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *page_frag_alloc_align(struct page_frag_cache *nc,
       unsigned int fragsz, gfp_t gfp_mask,
       unsigned int align)
{
 ({ bool __ret_do_once = !!(!is_power_of_2(align)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/gfp.h", 384, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *page_frag_alloc(struct page_frag_cache *nc,
        unsigned int fragsz, gfp_t gfp_mask)
{
 return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
}

extern void page_frag_free(void *addr);




void page_alloc_init_cpuhp(void);
int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
void drain_all_pages(struct zone *zone);
void drain_local_pages(struct zone *zone);

void page_alloc_init_late(void);
void setup_pcp_cacheinfo(unsigned int cpu);
# 415 "../include/linux/gfp.h"
extern gfp_t gfp_allowed_mask;


bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gfp_has_io_fs(gfp_t gfp)
{
 return (gfp & ((( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))))) == ((( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gfp_compaction_allowed(gfp_t gfp_mask)
{
 return 1 && (gfp_mask & (( gfp_t)((((1UL))) << (___GFP_IO_BIT))));
}

extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);



extern int alloc_contig_range_noprof(unsigned long start, unsigned long end,
         unsigned migratetype, gfp_t gfp_mask);


extern struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
           int nid, nodemask_t *nodemask);



void free_contig_range(unsigned long pfn, unsigned long nr_pages);
# 17 "../include/linux/xarray.h" 2




# 1 "../include/linux/sched/mm.h" 1
# 10 "../include/linux/sched/mm.h"
# 1 "../include/linux/sync_core.h" 1
# 15 "../include/linux/sync_core.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sync_core_before_usermode(void)
{
}
# 30 "../include/linux/sync_core.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void prepare_sync_core_cmd(struct mm_struct *mm)
{
}
# 11 "../include/linux/sched/mm.h" 2
# 1 "../include/linux/sched/coredump.h" 1
# 17 "../include/linux/sched/coredump.h"
extern void set_dumpable(struct mm_struct *mm, int value);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __get_dumpable(unsigned long mm_flags)
{
 return mm_flags & ((1 << 2) - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_dumpable(struct mm_struct *mm)
{
 return __get_dumpable(mm->flags);
}
# 102 "../include/linux/sched/coredump.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long mmf_init_flags(unsigned long flags)
{
 if (flags & (1UL << 29))
  flags &= ~((1UL << 28) |
      (1UL << 29));
 return flags & (((1 << 2) - 1) | (((1 << 9) - 1) << 2) | (1 << 24) | (1 << 28) | (1 << 30) | (1 << 31));
}
# 12 "../include/linux/sched/mm.h" 2




extern struct mm_struct *mm_alloc(void);
# 35 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmgrab(struct mm_struct *mm)
{
 atomic_inc(&mm->mm_count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_mb__after_mmgrab(void)
{
 __asm__ __volatile__("": : :"memory");
}

extern void __mmdrop(struct mm_struct *mm);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmdrop(struct mm_struct *mm)
{





 if (__builtin_expect(!!(atomic_dec_and_test(&mm->mm_count)), 0))
  __mmdrop(mm);
}
# 81 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmdrop_sched(struct mm_struct *mm)
{
 mmdrop(mm);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmgrab_lazy_tlb(struct mm_struct *mm)
{
 if (1)
  mmgrab(mm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmdrop_lazy_tlb(struct mm_struct *mm)
{
 if (1) {
  mmdrop(mm);
 } else {




  __asm__ __volatile__("": : :"memory");
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmdrop_lazy_tlb_sched(struct mm_struct *mm)
{
 if (1)
  mmdrop_sched(mm);
 else
  __asm__ __volatile__("": : :"memory");
}
# 131 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmget(struct mm_struct *mm)
{
 atomic_inc(&mm->mm_users);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mmget_not_zero(struct mm_struct *mm)
{
 return atomic_inc_not_zero(&mm->mm_users);
}


extern void mmput(struct mm_struct *);




void mmput_async(struct mm_struct *);



extern struct mm_struct *get_task_mm(struct task_struct *task);





extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode);

extern void exit_mm_release(struct task_struct *, struct mm_struct *);

extern void exec_mm_release(struct task_struct *, struct mm_struct *);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_update_next_owner(struct mm_struct *mm)
{
}
# 180 "../include/linux/sched/mm.h"
extern void arch_pick_mmap_layout(struct mm_struct *mm,
      struct rlimit *rlim_stack);
extern unsigned long
arch_get_unmapped_area(struct file *, unsigned long, unsigned long,
         unsigned long, unsigned long);
extern unsigned long
arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
     unsigned long len, unsigned long pgoff,
     unsigned long flags);

unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
       unsigned long addr, unsigned long len,
       unsigned long pgoff, unsigned long flags);

unsigned long
arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
          unsigned long len, unsigned long pgoff,
          unsigned long flags, vm_flags_t vm_flags);
unsigned long
arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
           unsigned long len, unsigned long pgoff,
           unsigned long flags, vm_flags_t);

unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
        struct file *filp,
        unsigned long addr,
        unsigned long len,
        unsigned long pgoff,
        unsigned long flags,
        vm_flags_t vm_flags);

unsigned long
generic_get_unmapped_area(struct file *filp, unsigned long addr,
     unsigned long len, unsigned long pgoff,
     unsigned long flags);
unsigned long
generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
      unsigned long len, unsigned long pgoff,
      unsigned long flags);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool in_vfork(struct task_struct *tsk)
{
 bool ret;
# 243 "../include/linux/sched/mm.h"
 rcu_read_lock();
 ret = tsk->vfork_done &&
   ({ typeof(*(tsk->real_parent)) *__UNIQUE_ID_rcu128 = (typeof(*(tsk->real_parent)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_129(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((tsk->real_parent)) == sizeof(char) || sizeof((tsk->real_parent)) == sizeof(short) || sizeof((tsk->real_parent)) == sizeof(int) || sizeof((tsk->real_parent)) == sizeof(long)) || sizeof((tsk->real_parent)) == sizeof(long long))) __compiletime_assert_129(); } while (0); (*(const volatile typeof( _Generic(((tsk->real_parent)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((tsk->real_parent)))) *)&((tsk->real_parent))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/sched/mm.h", 245, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(tsk->real_parent)) *)(__UNIQUE_ID_rcu128)); })->mm == tsk->mm;
 rcu_read_unlock();

 return ret;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t current_gfp_context(gfp_t flags)
{
 unsigned int pflags = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_130(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((__current_thread_info->task)->flags) == sizeof(char) || sizeof((__current_thread_info->task)->flags) == sizeof(short) || sizeof((__current_thread_info->task)->flags) == sizeof(int) || sizeof((__current_thread_info->task)->flags) == sizeof(long)) || sizeof((__current_thread_info->task)->flags) == sizeof(long long))) __compiletime_assert_130(); } while (0); (*(const volatile typeof( _Generic(((__current_thread_info->task)->flags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((__current_thread_info->task)->flags))) *)&((__current_thread_info->task)->flags)); });

 if (__builtin_expect(!!(pflags & (0x00080000 | 0x00040000 | 0x00800000 | 0x01000000 | 0x10000000)), 0)) {
# 270 "../include/linux/sched/mm.h"
  if (pflags & 0x00800000)
   flags &= ~(( gfp_t)((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT)));
  else if (pflags & 0x00080000)
   flags &= ~((( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))));
  else if (pflags & 0x00040000)
   flags &= ~(( gfp_t)((((1UL))) << (___GFP_FS_BIT)));

  if (pflags & 0x01000000)
   flags |= (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)));

  if (pflags & 0x10000000)
   flags &= ~(( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT)));
 }
 return flags;
}


extern void __fs_reclaim_acquire(unsigned long ip);
extern void __fs_reclaim_release(unsigned long ip);
extern void fs_reclaim_acquire(gfp_t gfp_mask);
extern void fs_reclaim_release(gfp_t gfp_mask);
# 305 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_retry_wait(gfp_t gfp_flags)
{




 do { do { } while (0); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_131(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((__current_thread_info->task)->__state) == sizeof(char) || sizeof((__current_thread_info->task)->__state) == sizeof(short) || sizeof((__current_thread_info->task)->__state) == sizeof(int) || sizeof((__current_thread_info->task)->__state) == sizeof(long)) || sizeof((__current_thread_info->task)->__state) == sizeof(long long))) __compiletime_assert_131(); } while (0); do { *(volatile typeof((__current_thread_info->task)->__state) *)&((__current_thread_info->task)->__state) = ((0x00000002)); } while (0); } while (0); } while (0);
 gfp_flags = current_gfp_context(gfp_flags);
 if (gfpflags_allow_blocking(gfp_flags) &&
     !(gfp_flags & (( gfp_t)((((1UL))) << (___GFP_NORETRY_BIT)))))

  io_schedule_timeout(1);
 else



  io_schedule_timeout(300/50);
}
# 332 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void might_alloc(gfp_t gfp_mask)
{
 fs_reclaim_acquire(gfp_mask);
 fs_reclaim_release(gfp_mask);

 do { if (gfpflags_allow_blocking(gfp_mask)) do { do { } while (0); } while (0); } while (0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned memalloc_flags_save(unsigned flags)
{
 unsigned oldflags = ~(__current_thread_info->task)->flags & flags;
 (__current_thread_info->task)->flags |= flags;
 return oldflags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_flags_restore(unsigned flags)
{
 (__current_thread_info->task)->flags &= ~flags;
}
# 370 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int memalloc_noio_save(void)
{
 return memalloc_flags_save(0x00080000);
}
# 383 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_noio_restore(unsigned int flags)
{
 memalloc_flags_restore(flags);
}
# 400 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int memalloc_nofs_save(void)
{
 return memalloc_flags_save(0x00040000);
}
# 413 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_nofs_restore(unsigned int flags)
{
 memalloc_flags_restore(flags);
}
# 441 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int memalloc_noreclaim_save(void)
{
 return memalloc_flags_save(0x00000800);
}
# 454 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_noreclaim_restore(unsigned int flags)
{
 memalloc_flags_restore(flags);
}
# 469 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int memalloc_pin_save(void)
{
 return memalloc_flags_save(0x10000000);
}
# 482 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memalloc_pin_restore(unsigned int flags)
{
 memalloc_flags_restore(flags);
}
# 520 "../include/linux/sched/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *
set_active_memcg(struct mem_cgroup *memcg)
{
 return ((void *)0);
}



enum {
 MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY = (1U << 0),
 MEMBARRIER_STATE_PRIVATE_EXPEDITED = (1U << 1),
 MEMBARRIER_STATE_GLOBAL_EXPEDITED_READY = (1U << 2),
 MEMBARRIER_STATE_GLOBAL_EXPEDITED = (1U << 3),
 MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY = (1U << 4),
 MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE = (1U << 5),
 MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY = (1U << 6),
 MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ = (1U << 7),
};

enum {
 MEMBARRIER_FLAG_SYNC_CORE = (1U << 0),
 MEMBARRIER_FLAG_RSEQ = (1U << 1),
};





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
 if ((__current_thread_info->task)->mm != mm)
  return;
 if (__builtin_expect(!!(!(atomic_read(&mm->membarrier_state) & MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)), 1))

  return;
 sync_core_before_usermode();
}

extern void membarrier_exec_mmap(struct mm_struct *mm);

extern void membarrier_update_current_mm(struct mm_struct *next_mm);
# 22 "../include/linux/xarray.h" 2



struct list_lru;
# 58 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_mk_value(unsigned long v)
{
 ({ int __ret_warn_on = !!((long)v < 0); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/xarray.h", 60, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 return (void *)((v << 1) | 1);
}
# 71 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long xa_to_value(const void *entry)
{
 return (unsigned long)entry >> 1;
}
# 83 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_value(const void *entry)
{
 return (unsigned long)entry & 1;
}
# 101 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_tag_pointer(void *p, unsigned long tag)
{
 return (void *)((unsigned long)p | tag);
}
# 116 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_untag_pointer(void *entry)
{
 return (void *)((unsigned long)entry & ~3UL);
}
# 131 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int xa_pointer_tag(void *entry)
{
 return (unsigned long)entry & 3UL;
}
# 149 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_mk_internal(unsigned long v)
{
 return (void *)((v << 2) | 2);
}
# 161 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long xa_to_internal(const void *entry)
{
 return (unsigned long)entry >> 2;
}
# 173 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_internal(const void *entry)
{
 return ((unsigned long)entry & 3) == 2;
}
# 189 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_zero(const void *entry)
{
 return __builtin_expect(!!(entry == xa_mk_internal(257)), 0);
}
# 205 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_err(const void *entry)
{
 return __builtin_expect(!!(xa_is_internal(entry) && entry >= xa_mk_internal(-4095)), 0);

}
# 223 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xa_err(void *entry)
{

 if (xa_is_err(entry))
  return (long)entry >> 2;
 return 0;
}
# 243 "../include/linux/xarray.h"
struct xa_limit {
 u32 max;
 u32 min;
};







typedef unsigned xa_mark_t;







enum xa_lock_type {
 XA_LOCK_IRQ = 1,
 XA_LOCK_BH = 2,
};
# 300 "../include/linux/xarray.h"
struct xarray {
 spinlock_t xa_lock;

 gfp_t xa_flags;
 void * xa_head;
};
# 355 "../include/linux/xarray.h"
void *xa_load(struct xarray *, unsigned long index);
void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
void *xa_erase(struct xarray *, unsigned long index);
void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
   void *entry, gfp_t);
bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
void xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
void xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
void *xa_find(struct xarray *xa, unsigned long *index,
  unsigned long max, xa_mark_t) __attribute__((nonnull(2)));
void *xa_find_after(struct xarray *xa, unsigned long *index,
  unsigned long max, xa_mark_t) __attribute__((nonnull(2)));
unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
  unsigned long max, unsigned int n, xa_mark_t);
void xa_destroy(struct xarray *);
# 382 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xa_init_flags(struct xarray *xa, gfp_t flags)
{
 do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&xa->xa_lock), "&xa->xa_lock", &__key, LD_WAIT_CONFIG); } while (0);
 xa->xa_flags = flags;
 xa->xa_head = ((void *)0);
}
# 397 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xa_init(struct xarray *xa)
{
 xa_init_flags(xa, 0);
}
# 409 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_empty(const struct xarray *xa)
{
 return xa->xa_head == ((void *)0);
}
# 422 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_marked(const struct xarray *xa, xa_mark_t mark)
{
 return xa->xa_flags & (( gfp_t)((1U << ___GFP_LAST_BIT) << ( unsigned)(mark)));
}
# 562 "../include/linux/xarray.h"
void *__xa_erase(struct xarray *, unsigned long index);
void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
  void *entry, gfp_t);
int __attribute__((__warn_unused_result__)) __xa_insert(struct xarray *, unsigned long index,
  void *entry, gfp_t);
int __attribute__((__warn_unused_result__)) __xa_alloc(struct xarray *, u32 *id, void *entry,
  struct xa_limit, gfp_t);
int __attribute__((__warn_unused_result__)) __xa_alloc_cyclic(struct xarray *, u32 *id, void *entry,
  struct xa_limit, u32 *next, gfp_t);
void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
# 589 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_store_bh(struct xarray *xa, unsigned long index,
  void *entry, gfp_t gfp)
{
 void *curr;

 might_alloc(gfp);
 spin_lock_bh(&(xa)->xa_lock);
 curr = __xa_store(xa, index, entry, gfp);
 spin_unlock_bh(&(xa)->xa_lock);

 return curr;
}
# 616 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_store_irq(struct xarray *xa, unsigned long index,
  void *entry, gfp_t gfp)
{
 void *curr;

 might_alloc(gfp);
 spin_lock_irq(&(xa)->xa_lock);
 curr = __xa_store(xa, index, entry, gfp);
 spin_unlock_irq(&(xa)->xa_lock);

 return curr;
}
# 642 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_erase_bh(struct xarray *xa, unsigned long index)
{
 void *entry;

 spin_lock_bh(&(xa)->xa_lock);
 entry = __xa_erase(xa, index);
 spin_unlock_bh(&(xa)->xa_lock);

 return entry;
}
# 666 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_erase_irq(struct xarray *xa, unsigned long index)
{
 void *entry;

 spin_lock_irq(&(xa)->xa_lock);
 entry = __xa_erase(xa, index);
 spin_unlock_irq(&(xa)->xa_lock);

 return entry;
}
# 692 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_cmpxchg(struct xarray *xa, unsigned long index,
   void *old, void *entry, gfp_t gfp)
{
 void *curr;

 might_alloc(gfp);
 spin_lock(&(xa)->xa_lock);
 curr = __xa_cmpxchg(xa, index, old, entry, gfp);
 spin_unlock(&(xa)->xa_lock);

 return curr;
}
# 720 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_cmpxchg_bh(struct xarray *xa, unsigned long index,
   void *old, void *entry, gfp_t gfp)
{
 void *curr;

 might_alloc(gfp);
 spin_lock_bh(&(xa)->xa_lock);
 curr = __xa_cmpxchg(xa, index, old, entry, gfp);
 spin_unlock_bh(&(xa)->xa_lock);

 return curr;
}
# 748 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index,
   void *old, void *entry, gfp_t gfp)
{
 void *curr;

 might_alloc(gfp);
 spin_lock_irq(&(xa)->xa_lock);
 curr = __xa_cmpxchg(xa, index, old, entry, gfp);
 spin_unlock_irq(&(xa)->xa_lock);

 return curr;
}
# 778 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) xa_insert(struct xarray *xa,
  unsigned long index, void *entry, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock(&(xa)->xa_lock);
 err = __xa_insert(xa, index, entry, gfp);
 spin_unlock(&(xa)->xa_lock);

 return err;
}
# 808 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) xa_insert_bh(struct xarray *xa,
  unsigned long index, void *entry, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_bh(&(xa)->xa_lock);
 err = __xa_insert(xa, index, entry, gfp);
 spin_unlock_bh(&(xa)->xa_lock);

 return err;
}
# 838 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) xa_insert_irq(struct xarray *xa,
  unsigned long index, void *entry, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_irq(&(xa)->xa_lock);
 err = __xa_insert(xa, index, entry, gfp);
 spin_unlock_irq(&(xa)->xa_lock);

 return err;
}
# 871 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) int xa_alloc(struct xarray *xa, u32 *id,
  void *entry, struct xa_limit limit, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock(&(xa)->xa_lock);
 err = __xa_alloc(xa, id, entry, limit, gfp);
 spin_unlock(&(xa)->xa_lock);

 return err;
}
# 904 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) xa_alloc_bh(struct xarray *xa, u32 *id,
  void *entry, struct xa_limit limit, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_bh(&(xa)->xa_lock);
 err = __xa_alloc(xa, id, entry, limit, gfp);
 spin_unlock_bh(&(xa)->xa_lock);

 return err;
}
# 937 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) xa_alloc_irq(struct xarray *xa, u32 *id,
  void *entry, struct xa_limit limit, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_irq(&(xa)->xa_lock);
 err = __xa_alloc(xa, id, entry, limit, gfp);
 spin_unlock_irq(&(xa)->xa_lock);

 return err;
}
# 974 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
  struct xa_limit limit, u32 *next, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock(&(xa)->xa_lock);
 err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 spin_unlock(&(xa)->xa_lock);

 return err;
}
# 1011 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
  struct xa_limit limit, u32 *next, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_bh(&(xa)->xa_lock);
 err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 spin_unlock_bh(&(xa)->xa_lock);

 return err;
}
# 1048 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
  struct xa_limit limit, u32 *next, gfp_t gfp)
{
 int err;

 might_alloc(gfp);
 spin_lock_irq(&(xa)->xa_lock);
 err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 spin_unlock_irq(&(xa)->xa_lock);

 return err;
}
# 1079 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
{
 return xa_err(xa_cmpxchg(xa, index, ((void *)0), xa_mk_internal(257), gfp));
}
# 1097 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
{
 return xa_err(xa_cmpxchg_bh(xa, index, ((void *)0), xa_mk_internal(257), gfp));
}
# 1115 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
{
 return xa_err(xa_cmpxchg_irq(xa, index, ((void *)0), xa_mk_internal(257), gfp));
}
# 1130 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xa_release(struct xarray *xa, unsigned long index)
{
 xa_cmpxchg(xa, index, xa_mk_internal(257), ((void *)0), 0);
}
# 1162 "../include/linux/xarray.h"
struct xa_node {
 unsigned char shift;
 unsigned char offset;
 unsigned char count;
 unsigned char nr_values;
 struct xa_node *parent;
 struct xarray *array;
 union {
  struct list_head private_list;
  struct callback_head callback_head;
 };
 void *slots[(1UL << (1 ? 4 : 6))];
 union {
  unsigned long tags[3][((((1UL << (1 ? 4 : 6))) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
  unsigned long marks[3][((((1UL << (1 ? 4 : 6))) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 };
};

void xa_dump(const struct xarray *);
void xa_dump_node(const struct xa_node *);
# 1202 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_head(const struct xarray *xa)
{
 return ({ typeof(*(xa->xa_head)) *__UNIQUE_ID_rcu132 = (typeof(*(xa->xa_head)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_133(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((xa->xa_head)) == sizeof(char) || sizeof((xa->xa_head)) == sizeof(short) || sizeof((xa->xa_head)) == sizeof(int) || sizeof((xa->xa_head)) == sizeof(long)) || sizeof((xa->xa_head)) == sizeof(long long))) __compiletime_assert_133(); } while (0); (*(const volatile typeof( _Generic(((xa->xa_head)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((xa->xa_head)))) *)&((xa->xa_head))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1205, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(xa->xa_head)) *)(__UNIQUE_ID_rcu132)); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_head_locked(const struct xarray *xa)
{
 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1212, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(xa->xa_head)) *)((xa->xa_head))); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_entry(const struct xarray *xa,
    const struct xa_node *node, unsigned int offset)
{
 do { } while (0);
 return ({ typeof(*(node->slots[offset])) *__UNIQUE_ID_rcu134 = (typeof(*(node->slots[offset])) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_135(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((node->slots[offset])) == sizeof(char) || sizeof((node->slots[offset])) == sizeof(short) || sizeof((node->slots[offset])) == sizeof(int) || sizeof((node->slots[offset])) == sizeof(long)) || sizeof((node->slots[offset])) == sizeof(long long))) __compiletime_assert_135(); } while (0); (*(const volatile typeof( _Generic(((node->slots[offset])), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((node->slots[offset])))) *)&((node->slots[offset]))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1221, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(node->slots[offset])) *)(__UNIQUE_ID_rcu134)); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_entry_locked(const struct xarray *xa,
    const struct xa_node *node, unsigned int offset)
{
 do { } while (0);
 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1230, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(node->slots[offset])) *)((node->slots[offset]))); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct xa_node *xa_parent(const struct xarray *xa,
     const struct xa_node *node)
{
 return ({ typeof(*(node->parent)) *__UNIQUE_ID_rcu136 = (typeof(*(node->parent)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_137(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((node->parent)) == sizeof(char) || sizeof((node->parent)) == sizeof(short) || sizeof((node->parent)) == sizeof(int) || sizeof((node->parent)) == sizeof(long)) || sizeof((node->parent)) == sizeof(long long))) __compiletime_assert_137(); } while (0); (*(const volatile typeof( _Generic(((node->parent)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((node->parent)))) *)&((node->parent))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1238, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(node->parent)) *)(__UNIQUE_ID_rcu136)); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct xa_node *xa_parent_locked(const struct xarray *xa,
     const struct xa_node *node)
{
 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&xa->xa_lock)->dep_map)))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/xarray.h", 1246, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(node->parent)) *)((node->parent))); });

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_mk_node(const struct xa_node *node)
{
 return (void *)((unsigned long)node | 2);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct xa_node *xa_to_node(const void *entry)
{
 return (struct xa_node *)((unsigned long)entry - 2);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_node(const void *entry)
{
 return xa_is_internal(entry) && (unsigned long)entry > 4096;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xa_mk_sibling(unsigned int offset)
{
 return xa_mk_internal(offset);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long xa_to_sibling(const void *entry)
{
 return xa_to_internal(entry);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_sibling(const void *entry)
{
 return 0 && xa_is_internal(entry) &&
  (entry < xa_mk_sibling((1UL << (1 ? 4 : 6)) - 1));
}
# 1299 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_retry(const void *entry)
{
 return __builtin_expect(!!(entry == xa_mk_internal(256)), 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xa_is_advanced(const void *entry)
{
 return xa_is_internal(entry) && (entry <= xa_mk_internal(256));
}
# 1327 "../include/linux/xarray.h"
typedef void (*xa_update_node_t)(struct xa_node *node);

void xa_delete_node(struct xa_node *, xa_update_node_t);
# 1348 "../include/linux/xarray.h"
struct xa_state {
 struct xarray *xa;
 unsigned long xa_index;
 unsigned char xa_shift;
 unsigned char xa_sibs;
 unsigned char xa_offset;
 unsigned char xa_pad;
 struct xa_node *xa_node;
 struct xa_node *xa_alloc;
 xa_update_node_t xa_update;
 struct list_lru *xa_lru;
};
# 1429 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xas_error(const struct xa_state *xas)
{
 return xa_err(xas->xa_node);
}
# 1443 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_set_err(struct xa_state *xas, long err)
{
 xas->xa_node = ((struct xa_node *)(((unsigned long)err << 2) | 2UL));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_invalid(const struct xa_state *xas)
{
 return (unsigned long)xas->xa_node & 3;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_valid(const struct xa_state *xas)
{
 return !xas_invalid(xas);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_is_node(const struct xa_state *xas)
{
 return xas_valid(xas) && xas->xa_node;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_not_node(struct xa_node *node)
{
 return ((unsigned long)node & 3) || !node;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_frozen(struct xa_node *node)
{
 return (unsigned long)node & 2;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_top(struct xa_node *node)
{
 return node <= ((struct xa_node *)3UL);
}
# 1509 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_reset(struct xa_state *xas)
{
 xas->xa_node = ((struct xa_node *)3UL);
}
# 1526 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool xas_retry(struct xa_state *xas, const void *entry)
{
 if (xa_is_zero(entry))
  return true;
 if (!xa_is_retry(entry))
  return false;
 xas_reset(xas);
 return true;
}

void *xas_load(struct xa_state *);
void *xas_store(struct xa_state *, void *entry);
void *xas_find(struct xa_state *, unsigned long max);
void *xas_find_conflict(struct xa_state *);

bool xas_get_mark(const struct xa_state *, xa_mark_t);
void xas_set_mark(const struct xa_state *, xa_mark_t);
void xas_clear_mark(const struct xa_state *, xa_mark_t);
void *xas_find_marked(struct xa_state *, unsigned long max, xa_mark_t);
void xas_init_marks(const struct xa_state *);

bool xas_nomem(struct xa_state *, gfp_t);
void xas_destroy(struct xa_state *);
void xas_pause(struct xa_state *);

void xas_create_range(struct xa_state *);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xa_get_order(struct xarray *xa, unsigned long index)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int xas_get_order(struct xa_state *xas)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_split(struct xa_state *xas, void *entry,
  unsigned int order)
{
 xas_store(xas, entry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_split_alloc(struct xa_state *xas, void *entry,
  unsigned int order, gfp_t gfp)
{
}
# 1595 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xas_reload(struct xa_state *xas)
{
 struct xa_node *node = xas->xa_node;
 void *entry;
 char offset;

 if (!node)
  return xa_head(xas->xa);
 if (0) {
  offset = (xas->xa_index >> node->shift) & ((1UL << (1 ? 4 : 6)) - 1);
  entry = xa_entry(xas->xa, node, offset);
  if (!xa_is_sibling(entry))
   return entry;
  offset = xa_to_sibling(entry);
 } else {
  offset = xas->xa_offset;
 }
 return xa_entry(xas->xa, node, offset);
}
# 1624 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_set(struct xa_state *xas, unsigned long index)
{
 xas->xa_index = index;
 xas->xa_node = ((struct xa_node *)3UL);
}
# 1640 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_advance(struct xa_state *xas, unsigned long index)
{
 unsigned char shift = xas_is_node(xas) ? xas->xa_node->shift : 0;

 xas->xa_index = index;
 xas->xa_offset = (index >> shift) & ((1UL << (1 ? 4 : 6)) - 1);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_set_order(struct xa_state *xas, unsigned long index,
     unsigned int order)
{






 do { if (__builtin_expect(!!(order > 0), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/xarray.h", 1663, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 xas_set(xas, index);

}
# 1677 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_set_update(struct xa_state *xas, xa_update_node_t update)
{
 xas->xa_update = update;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void xas_set_lru(struct xa_state *xas, struct list_lru *lru)
{
 xas->xa_lru = lru;
}
# 1698 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xas_next_entry(struct xa_state *xas, unsigned long max)
{
 struct xa_node *node = xas->xa_node;
 void *entry;

 if (__builtin_expect(!!(xas_not_node(node) || node->shift || xas->xa_offset != (xas->xa_index & ((1UL << (1 ? 4 : 6)) - 1))), 0))

  return xas_find(xas, max);

 do {
  if (__builtin_expect(!!(xas->xa_index >= max), 0))
   return xas_find(xas, max);
  if (__builtin_expect(!!(xas->xa_offset == ((1UL << (1 ? 4 : 6)) - 1)), 0))
   return xas_find(xas, max);
  entry = xa_entry(xas->xa, node, xas->xa_offset + 1);
  if (__builtin_expect(!!(xa_is_internal(entry)), 0))
   return xas_find(xas, max);
  xas->xa_offset++;
  xas->xa_index++;
 } while (!entry);

 return entry;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int xas_find_chunk(struct xa_state *xas, bool advance,
  xa_mark_t mark)
{
 unsigned long *addr = xas->xa_node->marks[( unsigned)mark];
 unsigned int offset = xas->xa_offset;

 if (advance)
  offset++;
 if ((1UL << (1 ? 4 : 6)) == 32) {
  if (offset < (1UL << (1 ? 4 : 6))) {
   unsigned long data = *addr & (~0UL << offset);
   if (data)
    return __ffs(data);
  }
  return (1UL << (1 ? 4 : 6));
 }

 return find_next_bit(addr, (1UL << (1 ? 4 : 6)), offset);
}
# 1755 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xas_next_marked(struct xa_state *xas, unsigned long max,
        xa_mark_t mark)
{
 struct xa_node *node = xas->xa_node;
 void *entry;
 unsigned int offset;

 if (__builtin_expect(!!(xas_not_node(node) || node->shift), 0))
  return xas_find_marked(xas, max, mark);
 offset = xas_find_chunk(xas, true, mark);
 xas->xa_offset = offset;
 xas->xa_index = (xas->xa_index & ~((1UL << (1 ? 4 : 6)) - 1)) + offset;
 if (xas->xa_index > max)
  return ((void *)0);
 if (offset == (1UL << (1 ? 4 : 6)))
  return xas_find_marked(xas, max, mark);
 entry = xa_entry(xas->xa, node, offset);
 if (!entry)
  return xas_find_marked(xas, max, mark);
 return entry;
}





enum {
 XA_CHECK_SCHED = 4096,
};
# 1835 "../include/linux/xarray.h"
void *__xas_next(struct xa_state *);
void *__xas_prev(struct xa_state *);
# 1854 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xas_prev(struct xa_state *xas)
{
 struct xa_node *node = xas->xa_node;

 if (__builtin_expect(!!(xas_not_node(node) || node->shift || xas->xa_offset == 0), 0))

  return __xas_prev(xas);

 xas->xa_index--;
 xas->xa_offset--;
 return xa_entry(xas->xa, node, xas->xa_offset);
}
# 1883 "../include/linux/xarray.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xas_next(struct xa_state *xas)
{
 struct xa_node *node = xas->xa_node;

 if (__builtin_expect(!!(xas_not_node(node) || node->shift || xas->xa_offset == ((1UL << (1 ? 4 : 6)) - 1)), 0))

  return __xas_next(xas);

 xas->xa_index++;
 xas->xa_offset++;
 return xa_entry(xas->xa, node, xas->xa_offset);
}
# 8 "../drivers/infiniband/core/ib_core_uverbs.c" 2
# 1 "../drivers/infiniband/core/uverbs.h" 1
# 41 "../drivers/infiniband/core/uverbs.h"
# 1 "../include/linux/idr.h" 1
# 15 "../include/linux/idr.h"
# 1 "../include/linux/radix-tree.h" 1
# 28 "../include/linux/radix-tree.h"
struct radix_tree_preload {
 local_lock_t lock;
 unsigned nr;

 struct xa_node *nodes;
};
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_radix_tree_preloads; extern __attribute__((section(".data" ""))) __typeof__(struct radix_tree_preload) radix_tree_preloads;
# 55 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool radix_tree_is_internal_node(void *ptr)
{
 return ((unsigned long)ptr & 3UL) ==
    2UL;
}
# 86 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool radix_tree_empty(const struct xarray *root)
{
 return root->xa_head == ((void *)0);
}
# 106 "../include/linux/radix-tree.h"
struct radix_tree_iter {
 unsigned long index;
 unsigned long next_index;
 unsigned long tags;
 struct xa_node *node;
};
# 177 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *radix_tree_deref_slot(void **slot)
{
 return ({ typeof(*(*slot)) *__UNIQUE_ID_rcu138 = (typeof(*(*slot)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_139(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((*slot)) == sizeof(char) || sizeof((*slot)) == sizeof(short) || sizeof((*slot)) == sizeof(int) || sizeof((*slot)) == sizeof(long)) || sizeof((*slot)) == sizeof(long long))) __compiletime_assert_139(); } while (0); (*(const volatile typeof( _Generic(((*slot)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((*slot)))) *)&((*slot))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/radix-tree.h", 179, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(*slot)) *)(__UNIQUE_ID_rcu138)); });
}
# 191 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *radix_tree_deref_slot_protected(void **slot,
       spinlock_t *treelock)
{
 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(treelock)->dep_map)))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/radix-tree.h", 194, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(*slot)) *)((*slot))); });
}
# 204 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int radix_tree_deref_retry(void *arg)
{
 return __builtin_expect(!!(radix_tree_is_internal_node(arg)), 0);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int radix_tree_exception(void *arg)
{
 return __builtin_expect(!!((unsigned long)arg & 3UL), 0);
}

int radix_tree_insert(struct xarray *, unsigned long index,
   void *);
void *__radix_tree_lookup(const struct xarray *, unsigned long index,
     struct xa_node **nodep, void ***slotp);
void *radix_tree_lookup(const struct xarray *, unsigned long);
void **radix_tree_lookup_slot(const struct xarray *,
     unsigned long index);
void __radix_tree_replace(struct xarray *, struct xa_node *,
     void **slot, void *entry);
void radix_tree_iter_replace(struct xarray *,
  const struct radix_tree_iter *, void **slot, void *entry);
void radix_tree_replace_slot(struct xarray *,
        void **slot, void *entry);
void radix_tree_iter_delete(struct xarray *,
   struct radix_tree_iter *iter, void **slot);
void *radix_tree_delete_item(struct xarray *, unsigned long, void *);
void *radix_tree_delete(struct xarray *, unsigned long);
unsigned int radix_tree_gang_lookup(const struct xarray *,
   void **results, unsigned long first_index,
   unsigned int max_items);
int radix_tree_preload(gfp_t gfp_mask);
int radix_tree_maybe_preload(gfp_t gfp_mask);
void radix_tree_init(void);
void *radix_tree_tag_set(struct xarray *,
   unsigned long index, unsigned int tag);
void *radix_tree_tag_clear(struct xarray *,
   unsigned long index, unsigned int tag);
int radix_tree_tag_get(const struct xarray *,
   unsigned long index, unsigned int tag);
void radix_tree_iter_tag_clear(struct xarray *,
  const struct radix_tree_iter *iter, unsigned int tag);
unsigned int radix_tree_gang_lookup_tag(const struct xarray *,
  void **results, unsigned long first_index,
  unsigned int max_items, unsigned int tag);
unsigned int radix_tree_gang_lookup_tag_slot(const struct xarray *,
  void ***results, unsigned long first_index,
  unsigned int max_items, unsigned int tag);
int radix_tree_tagged(const struct xarray *, unsigned int tag);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void radix_tree_preload_end(void)
{
 do { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&radix_tree_preloads.lock) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&radix_tree_preloads.lock)) *)(&radix_tree_preloads.lock); }); })); __asm__ __volatile__("": : :"memory"); } while (0);
}

void **idr_get_free(struct xarray *root,
         struct radix_tree_iter *iter, gfp_t gfp,
         unsigned long max);

enum {
 RADIX_TREE_ITER_TAG_MASK = 0x0f,
 RADIX_TREE_ITER_TAGGED = 0x10,
 RADIX_TREE_ITER_CONTIG = 0x20,
};
# 280 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void **
radix_tree_iter_init(struct radix_tree_iter *iter, unsigned long start)
{
# 291 "../include/linux/radix-tree.h"
 iter->index = 0;
 iter->next_index = start;
 return ((void *)0);
}
# 309 "../include/linux/radix-tree.h"
void **radix_tree_next_chunk(const struct xarray *,
        struct radix_tree_iter *iter, unsigned flags);
# 322 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void **
radix_tree_iter_lookup(const struct xarray *root,
   struct radix_tree_iter *iter, unsigned long index)
{
 radix_tree_iter_init(iter, index);
 return radix_tree_next_chunk(root, iter, RADIX_TREE_ITER_CONTIG);
}
# 339 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__))
void **radix_tree_iter_retry(struct radix_tree_iter *iter)
{
 iter->next_index = iter->index;
 iter->tags = 0;
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
__radix_tree_iter_add(struct radix_tree_iter *iter, unsigned long slots)
{
 return iter->index + slots;
}
# 363 "../include/linux/radix-tree.h"
void **__attribute__((__warn_unused_result__)) radix_tree_iter_resume(void **slot,
     struct radix_tree_iter *iter);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) long
radix_tree_chunk_size(struct radix_tree_iter *iter)
{
 return iter->next_index - iter->index;
}
# 397 "../include/linux/radix-tree.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void **radix_tree_next_slot(void **slot,
    struct radix_tree_iter *iter, unsigned flags)
{
 if (flags & RADIX_TREE_ITER_TAGGED) {
  iter->tags >>= 1;
  if (__builtin_expect(!!(!iter->tags), 0))
   return ((void *)0);
  if (__builtin_expect(!!(iter->tags & 1ul), 1)) {
   iter->index = __radix_tree_iter_add(iter, 1);
   slot++;
   goto found;
  }
  if (!(flags & RADIX_TREE_ITER_CONTIG)) {
   unsigned offset = __ffs(iter->tags);

   iter->tags >>= offset++;
   iter->index = __radix_tree_iter_add(iter, offset);
   slot += offset;
   goto found;
  }
 } else {
  long count = radix_tree_chunk_size(iter);

  while (--count > 0) {
   slot++;
   iter->index = __radix_tree_iter_add(iter, 1);

   if (__builtin_expect(!!(*slot), 1))
    goto found;
   if (flags & RADIX_TREE_ITER_CONTIG) {

    iter->next_index = 0;
    break;
   }
  }
 }
 return ((void *)0);

 found:
 return slot;
}
# 16 "../include/linux/idr.h" 2



struct idr {
 struct xarray idr_rt;
 unsigned int idr_base;
 unsigned int idr_next;
};
# 66 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int idr_get_cursor(const struct idr *idr)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_140(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(idr->idr_next) == sizeof(char) || sizeof(idr->idr_next) == sizeof(short) || sizeof(idr->idr_next) == sizeof(int) || sizeof(idr->idr_next) == sizeof(long)) || sizeof(idr->idr_next) == sizeof(long long))) __compiletime_assert_140(); } while (0); (*(const volatile typeof( _Generic((idr->idr_next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (idr->idr_next))) *)&(idr->idr_next)); });
}
# 79 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void idr_set_cursor(struct idr *idr, unsigned int val)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_141(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(idr->idr_next) == sizeof(char) || sizeof(idr->idr_next) == sizeof(short) || sizeof(idr->idr_next) == sizeof(int) || sizeof(idr->idr_next) == sizeof(long)) || sizeof(idr->idr_next) == sizeof(long long))) __compiletime_assert_141(); } while (0); do { *(volatile typeof(idr->idr_next) *)&(idr->idr_next) = (val); } while (0); } while (0);
}
# 112 "../include/linux/idr.h"
void idr_preload(gfp_t gfp_mask);

int idr_alloc(struct idr *, void *ptr, int start, int end, gfp_t);
int __attribute__((__warn_unused_result__)) idr_alloc_u32(struct idr *, void *ptr, u32 *id,
    unsigned long max, gfp_t);
int idr_alloc_cyclic(struct idr *, void *ptr, int start, int end, gfp_t);
void *idr_remove(struct idr *, unsigned long id);
void *idr_find(const struct idr *, unsigned long id);
int idr_for_each(const struct idr *,
   int (*fn)(int id, void *p, void *data), void *data);
void *idr_get_next(struct idr *, int *nextid);
void *idr_get_next_ul(struct idr *, unsigned long *nextid);
void *idr_replace(struct idr *, void *, unsigned long id);
void idr_destroy(struct idr *);
# 135 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void idr_init_base(struct idr *idr, int base)
{
 xa_init_flags(&idr->idr_rt, ((( gfp_t)4) | ( gfp_t) (1 << ((___GFP_LAST_BIT) + 0))));
 idr->idr_base = base;
 idr->idr_next = 0;
}
# 149 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void idr_init(struct idr *idr)
{
 idr_init_base(idr, 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool idr_is_empty(const struct idr *idr)
{
 return radix_tree_empty(&idr->idr_rt) &&
  radix_tree_tagged(&idr->idr_rt, 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void idr_preload_end(void)
{
 do { local_lock_release(({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&radix_tree_preloads.lock) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&radix_tree_preloads.lock)) *)(&radix_tree_preloads.lock); }); })); __asm__ __volatile__("": : :"memory"); } while (0);
}
# 242 "../include/linux/idr.h"
struct ida_bitmap {
 unsigned long bitmap[(128 / sizeof(long))];
};

struct ida {
 struct xarray xa;
};
# 257 "../include/linux/idr.h"
int ida_alloc_range(struct ida *, unsigned int min, unsigned int max, gfp_t);
void ida_free(struct ida *, unsigned int id);
void ida_destroy(struct ida *ida);
# 273 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ida_alloc(struct ida *ida, gfp_t gfp)
{
 return ida_alloc_range(ida, 0, ~0, gfp);
}
# 291 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ida_alloc_min(struct ida *ida, unsigned int min, gfp_t gfp)
{
 return ida_alloc_range(ida, min, ~0, gfp);
}
# 309 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ida_alloc_max(struct ida *ida, unsigned int max, gfp_t gfp)
{
 return ida_alloc_range(ida, 0, max, gfp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ida_init(struct ida *ida)
{
 xa_init_flags(&ida->xa, ((( gfp_t)XA_LOCK_IRQ) | ((( gfp_t)4U) | (( gfp_t)((1U << ___GFP_LAST_BIT) << ( unsigned)((( xa_mark_t)0U)))))));
}
# 327 "../include/linux/idr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ida_is_empty(const struct ida *ida)
{
 return xa_empty(&ida->xa);
}
# 42 "../drivers/infiniband/core/uverbs.h" 2


# 1 "../include/linux/cdev.h" 1




# 1 "../include/linux/kobject.h" 1
# 20 "../include/linux/kobject.h"
# 1 "../include/linux/sysfs.h" 1
# 16 "../include/linux/sysfs.h"
# 1 "../include/linux/kernfs.h" 1
# 18 "../include/linux/kernfs.h"
# 1 "../include/linux/uidgid.h" 1
# 16 "../include/linux/uidgid.h"
# 1 "../include/linux/highuid.h" 1
# 35 "../include/linux/highuid.h"
extern int overflowuid;
extern int overflowgid;

extern void __bad_uid(void);
extern void __bad_gid(void);
# 82 "../include/linux/highuid.h"
extern int fs_overflowuid;
extern int fs_overflowgid;
# 17 "../include/linux/uidgid.h" 2

struct user_namespace;
extern struct user_namespace init_user_ns;
struct uid_gid_map;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uid_t __kuid_val(kuid_t uid)
{
 return uid.val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gid_t __kgid_val(kgid_t gid)
{
 return gid.val;
}
# 53 "../include/linux/uidgid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_eq(kuid_t left, kuid_t right)
{
 return __kuid_val(left) == __kuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_eq(kgid_t left, kgid_t right)
{
 return __kgid_val(left) == __kgid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_gt(kuid_t left, kuid_t right)
{
 return __kuid_val(left) > __kuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_gt(kgid_t left, kgid_t right)
{
 return __kgid_val(left) > __kgid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_gte(kuid_t left, kuid_t right)
{
 return __kuid_val(left) >= __kuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_gte(kgid_t left, kgid_t right)
{
 return __kgid_val(left) >= __kgid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_lt(kuid_t left, kuid_t right)
{
 return __kuid_val(left) < __kuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_lt(kgid_t left, kgid_t right)
{
 return __kgid_val(left) < __kgid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_lte(kuid_t left, kuid_t right)
{
 return __kuid_val(left) <= __kuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_lte(kgid_t left, kgid_t right)
{
 return __kgid_val(left) <= __kgid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uid_valid(kuid_t uid)
{
 return __kuid_val(uid) != (uid_t) -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gid_valid(kgid_t gid)
{
 return __kgid_val(gid) != (gid_t) -1;
}
# 138 "../include/linux/uidgid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kuid_t make_kuid(struct user_namespace *from, uid_t uid)
{
 return (kuid_t){ uid };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kgid_t make_kgid(struct user_namespace *from, gid_t gid)
{
 return (kgid_t){ gid };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uid_t from_kuid(struct user_namespace *to, kuid_t kuid)
{
 return __kuid_val(kuid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gid_t from_kgid(struct user_namespace *to, kgid_t kgid)
{
 return __kgid_val(kgid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uid_t from_kuid_munged(struct user_namespace *to, kuid_t kuid)
{
 uid_t uid = from_kuid(to, kuid);
 if (uid == (uid_t)-1)
  uid = overflowuid;
 return uid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gid_t from_kgid_munged(struct user_namespace *to, kgid_t kgid)
{
 gid_t gid = from_kgid(to, kgid);
 if (gid == (gid_t)-1)
  gid = overflowgid;
 return gid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kuid_has_mapping(struct user_namespace *ns, kuid_t uid)
{
 return uid_valid(uid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kgid_has_mapping(struct user_namespace *ns, kgid_t gid)
{
 return gid_valid(gid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 map_id_down(struct uid_gid_map *map, u32 id)
{
 return id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 map_id_up(struct uid_gid_map *map, u32 id)
{
 return id;
}
# 19 "../include/linux/kernfs.h" 2




struct file;
struct dentry;
struct iattr;
struct seq_file;
struct vm_area_struct;
struct vm_operations_struct;
struct super_block;
struct file_system_type;
struct poll_table_struct;
struct fs_context;

struct kernfs_fs_context;
struct kernfs_open_node;
struct kernfs_iattrs;
# 90 "../include/linux/kernfs.h"
struct kernfs_global_locks {
 struct mutex open_file_mutex[(1 << 1)];
};

enum kernfs_node_type {
 KERNFS_DIR = 0x0001,
 KERNFS_FILE = 0x0002,
 KERNFS_LINK = 0x0004,
};






enum kernfs_node_flag {
 KERNFS_ACTIVATED = 0x0010,
 KERNFS_NS = 0x0020,
 KERNFS_HAS_SEQ_SHOW = 0x0040,
 KERNFS_HAS_MMAP = 0x0080,
 KERNFS_LOCKDEP = 0x0100,
 KERNFS_HIDDEN = 0x0200,
 KERNFS_SUICIDAL = 0x0400,
 KERNFS_SUICIDED = 0x0800,
 KERNFS_EMPTY_DIR = 0x1000,
 KERNFS_HAS_RELEASE = 0x2000,
 KERNFS_REMOVING = 0x4000,
};


enum kernfs_root_flag {






 KERNFS_ROOT_CREATE_DEACTIVATED = 0x0001,
# 138 "../include/linux/kernfs.h"
 KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK = 0x0002,





 KERNFS_ROOT_SUPPORT_EXPORTOP = 0x0004,




 KERNFS_ROOT_SUPPORT_USER_XATTR = 0x0008,
};


struct kernfs_elem_dir {
 unsigned long subdirs;

 struct rb_root children;





 struct kernfs_root *root;




 unsigned long rev;
};

struct kernfs_elem_symlink {
 struct kernfs_node *target_kn;
};

struct kernfs_elem_attr {
 const struct kernfs_ops *ops;
 struct kernfs_open_node *open;
 loff_t size;
 struct kernfs_node *notify_next;
};
# 190 "../include/linux/kernfs.h"
struct kernfs_node {
 atomic_t count;
 atomic_t active;

 struct lockdep_map dep_map;







 struct kernfs_node *parent;
 const char *name;

 struct rb_node rb;

 const void *ns;
 unsigned int hash;
 unsigned short flags;
 umode_t mode;

 union {
  struct kernfs_elem_dir dir;
  struct kernfs_elem_symlink symlink;
  struct kernfs_elem_attr attr;
 };





 u64 id;

 void *priv;
 struct kernfs_iattrs *iattr;

 struct callback_head rcu;
};
# 237 "../include/linux/kernfs.h"
struct kernfs_syscall_ops {
 int (*show_options)(struct seq_file *sf, struct kernfs_root *root);

 int (*mkdir)(struct kernfs_node *parent, const char *name,
       umode_t mode);
 int (*rmdir)(struct kernfs_node *kn);
 int (*rename)(struct kernfs_node *kn, struct kernfs_node *new_parent,
        const char *new_name);
 int (*show_path)(struct seq_file *sf, struct kernfs_node *kn,
    struct kernfs_root *root);
};

struct kernfs_node *kernfs_root_to_node(struct kernfs_root *root);

struct kernfs_open_file {

 struct kernfs_node *kn;
 struct file *file;
 struct seq_file *seq_file;
 void *priv;


 struct mutex mutex;
 struct mutex prealloc_mutex;
 int event;
 struct list_head list;
 char *prealloc_buf;

 size_t atomic_write_len;
 bool mmapped:1;
 bool released:1;
 const struct vm_operations_struct *vm_ops;
};

struct kernfs_ops {




 int (*open)(struct kernfs_open_file *of);
 void (*release)(struct kernfs_open_file *of);
# 290 "../include/linux/kernfs.h"
 int (*seq_show)(struct seq_file *sf, void *v);

 void *(*seq_start)(struct seq_file *sf, loff_t *ppos);
 void *(*seq_next)(struct seq_file *sf, void *v, loff_t *ppos);
 void (*seq_stop)(struct seq_file *sf, void *v);

 ssize_t (*read)(struct kernfs_open_file *of, char *buf, size_t bytes,
   loff_t off);
# 306 "../include/linux/kernfs.h"
 size_t atomic_write_len;






 bool prealloc;
 ssize_t (*write)(struct kernfs_open_file *of, char *buf, size_t bytes,
    loff_t off);

 __poll_t (*poll)(struct kernfs_open_file *of,
    struct poll_table_struct *pt);

 int (*mmap)(struct kernfs_open_file *of, struct vm_area_struct *vma);
 loff_t (*llseek)(struct kernfs_open_file *of, loff_t offset, int whence);
};




struct kernfs_fs_context {
 struct kernfs_root *root;
 void *ns_tag;
 unsigned long magic;


 bool new_sb_created;
};



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum kernfs_node_type kernfs_type(struct kernfs_node *kn)
{
 return kn->flags & 0x000f;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ino_t kernfs_id_ino(u64 id)
{

 if (sizeof(ino_t) >= sizeof(u64))
  return id;
 else
  return (u32)id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 kernfs_id_gen(u64 id)
{

 if (sizeof(ino_t) >= sizeof(u64))
  return 1;
 else
  return id >> 32;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ino_t kernfs_ino(struct kernfs_node *kn)
{
 return kernfs_id_ino(kn->id);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ino_t kernfs_gen(struct kernfs_node *kn)
{
 return kernfs_id_gen(kn->id);
}
# 379 "../include/linux/kernfs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kernfs_enable_ns(struct kernfs_node *kn)
{
 ({ bool __ret_do_once = !!(kernfs_type(kn) != KERNFS_DIR); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/kernfs.h", 381, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 ({ bool __ret_do_once = !!(!(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_142(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((&kn->dir.children)->rb_node) == sizeof(char) || sizeof((&kn->dir.children)->rb_node) == sizeof(short) || sizeof((&kn->dir.children)->rb_node) == sizeof(int) || sizeof((&kn->dir.children)->rb_node) == sizeof(long)) || sizeof((&kn->dir.children)->rb_node) == sizeof(long long))) __compiletime_assert_142(); } while (0); (*(const volatile typeof( _Generic(((&kn->dir.children)->rb_node), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((&kn->dir.children)->rb_node))) *)&((&kn->dir.children)->rb_node)); }) == ((void *)0))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/kernfs.h", 382, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 kn->flags |= KERNFS_NS;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kernfs_ns_enabled(struct kernfs_node *kn)
{
 return kn->flags & KERNFS_NS;
}

int kernfs_name(struct kernfs_node *kn, char *buf, size_t buflen);
int kernfs_path_from_node(struct kernfs_node *root_kn, struct kernfs_node *kn,
     char *buf, size_t buflen);
void pr_cont_kernfs_name(struct kernfs_node *kn);
void pr_cont_kernfs_path(struct kernfs_node *kn);
struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn);
struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent,
        const char *name, const void *ns);
struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent,
        const char *path, const void *ns);
void kernfs_get(struct kernfs_node *kn);
void kernfs_put(struct kernfs_node *kn);

struct kernfs_node *kernfs_node_from_dentry(struct dentry *dentry);
struct kernfs_root *kernfs_root_from_sb(struct super_block *sb);
struct inode *kernfs_get_inode(struct super_block *sb, struct kernfs_node *kn);

struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
      struct super_block *sb);
struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops,
           unsigned int flags, void *priv);
void kernfs_destroy_root(struct kernfs_root *root);

struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
      const char *name, umode_t mode,
      kuid_t uid, kgid_t gid,
      void *priv, const void *ns);
struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
         const char *name);
struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
      const char *name, umode_t mode,
      kuid_t uid, kgid_t gid,
      loff_t size,
      const struct kernfs_ops *ops,
      void *priv, const void *ns,
      struct lock_class_key *key);
struct kernfs_node *kernfs_create_link(struct kernfs_node *parent,
           const char *name,
           struct kernfs_node *target);
void kernfs_activate(struct kernfs_node *kn);
void kernfs_show(struct kernfs_node *kn, bool show);
void kernfs_remove(struct kernfs_node *kn);
void kernfs_break_active_protection(struct kernfs_node *kn);
void kernfs_unbreak_active_protection(struct kernfs_node *kn);
bool kernfs_remove_self(struct kernfs_node *kn);
int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
        const void *ns);
int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
       const char *new_name, const void *new_ns);
int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr);
__poll_t kernfs_generic_poll(struct kernfs_open_file *of,
        struct poll_table_struct *pt);
void kernfs_notify(struct kernfs_node *kn);

int kernfs_xattr_get(struct kernfs_node *kn, const char *name,
       void *value, size_t size);
int kernfs_xattr_set(struct kernfs_node *kn, const char *name,
       const void *value, size_t size, int flags);

const void *kernfs_super_ns(struct super_block *sb);
int kernfs_get_tree(struct fs_context *fc);
void kernfs_free_fs_context(struct fs_context *fc);
void kernfs_kill_sb(struct super_block *sb);

void kernfs_init(void);

struct kernfs_node *kernfs_find_and_get_node_by_id(struct kernfs_root *root,
         u64 id);
# 596 "../include/linux/kernfs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kernfs_path(struct kernfs_node *kn, char *buf, size_t buflen)
{
 return kernfs_path_from_node(kn, ((void *)0), buf, buflen);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kernfs_node *
kernfs_find_and_get(struct kernfs_node *kn, const char *name)
{
 return kernfs_find_and_get_ns(kn, name, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kernfs_node *
kernfs_walk_and_get(struct kernfs_node *kn, const char *path)
{
 return kernfs_walk_and_get_ns(kn, path, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kernfs_node *
kernfs_create_dir(struct kernfs_node *parent, const char *name, umode_t mode,
    void *priv)
{
 return kernfs_create_dir_ns(parent, name, mode,
        (kuid_t){ 0 }, (kgid_t){ 0 },
        priv, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kernfs_remove_by_name(struct kernfs_node *parent,
     const char *name)
{
 return kernfs_remove_by_name_ns(parent, name, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kernfs_rename(struct kernfs_node *kn,
    struct kernfs_node *new_parent,
    const char *new_name)
{
 return kernfs_rename_ns(kn, new_parent, new_name, ((void *)0));
}
# 17 "../include/linux/sysfs.h" 2




# 1 "../include/linux/kobject_ns.h" 1
# 19 "../include/linux/kobject_ns.h"
struct sock;
struct kobject;





enum kobj_ns_type {
 KOBJ_NS_TYPE_NONE = 0,
 KOBJ_NS_TYPE_NET,
 KOBJ_NS_TYPES
};
# 39 "../include/linux/kobject_ns.h"
struct kobj_ns_type_operations {
 enum kobj_ns_type type;
 bool (*current_may_mount)(void);
 void *(*grab_current_ns)(void);
 const void *(*netlink_ns)(struct sock *sk);
 const void *(*initial_ns)(void);
 void (*drop_ns)(void *);
};

int kobj_ns_type_register(const struct kobj_ns_type_operations *ops);
int kobj_ns_type_registered(enum kobj_ns_type type);
const struct kobj_ns_type_operations *kobj_child_ns_ops(const struct kobject *parent);
const struct kobj_ns_type_operations *kobj_ns_ops(const struct kobject *kobj);

bool kobj_ns_current_may_mount(enum kobj_ns_type type);
void *kobj_ns_grab_current(enum kobj_ns_type type);
const void *kobj_ns_netlink(enum kobj_ns_type type, struct sock *sk);
const void *kobj_ns_initial(enum kobj_ns_type type);
void kobj_ns_drop(enum kobj_ns_type type, void *ns);
# 22 "../include/linux/sysfs.h" 2
# 1 "../include/linux/stat.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/stat.h" 1
# 1 "../include/uapi/asm-generic/stat.h" 1
# 20 "../include/uapi/asm-generic/stat.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 21 "../include/uapi/asm-generic/stat.h" 2



struct stat {
 unsigned long st_dev;
 unsigned long st_ino;
 unsigned int st_mode;
 unsigned int st_nlink;
 unsigned int st_uid;
 unsigned int st_gid;
 unsigned long st_rdev;
 unsigned long __pad1;
 long st_size;
 int st_blksize;
 int __pad2;
 long st_blocks;
 long st_atime;
 unsigned long st_atime_nsec;
 long st_mtime;
 unsigned long st_mtime_nsec;
 long st_ctime;
 unsigned long st_ctime_nsec;
 unsigned int __unused4;
 unsigned int __unused5;
};



struct stat64 {
 unsigned long long st_dev;
 unsigned long long st_ino;
 unsigned int st_mode;
 unsigned int st_nlink;
 unsigned int st_uid;
 unsigned int st_gid;
 unsigned long long st_rdev;
 unsigned long long __pad1;
 long long st_size;
 int st_blksize;
 int __pad2;
 long long st_blocks;
 int st_atime;
 unsigned int st_atime_nsec;
 int st_mtime;
 unsigned int st_mtime_nsec;
 int st_ctime;
 unsigned int st_ctime_nsec;
 unsigned int __unused4;
 unsigned int __unused5;
};
# 2 "./arch/hexagon/include/generated/uapi/asm/stat.h" 2
# 7 "../include/linux/stat.h" 2
# 1 "../include/uapi/linux/stat.h" 1
# 56 "../include/uapi/linux/stat.h"
struct statx_timestamp {
 __s64 tv_sec;
 __u32 tv_nsec;
 __s32 __reserved;
};
# 99 "../include/uapi/linux/stat.h"
struct statx {

 __u32 stx_mask;
 __u32 stx_blksize;
 __u64 stx_attributes;

 __u32 stx_nlink;
 __u32 stx_uid;
 __u32 stx_gid;
 __u16 stx_mode;
 __u16 __spare0[1];

 __u64 stx_ino;
 __u64 stx_size;
 __u64 stx_blocks;
 __u64 stx_attributes_mask;

 struct statx_timestamp stx_atime;
 struct statx_timestamp stx_btime;
 struct statx_timestamp stx_ctime;
 struct statx_timestamp stx_mtime;

 __u32 stx_rdev_major;
 __u32 stx_rdev_minor;
 __u32 stx_dev_major;
 __u32 stx_dev_minor;

 __u64 stx_mnt_id;
 __u32 stx_dio_mem_align;
 __u32 stx_dio_offset_align;

 __u64 stx_subvol;
 __u32 stx_atomic_write_unit_min;
 __u32 stx_atomic_write_unit_max;

 __u32 stx_atomic_write_segments_max;
 __u32 __spare1[1];

 __u64 __spare3[9];

};
# 8 "../include/linux/stat.h" 2
# 22 "../include/linux/stat.h"
struct kstat {
 u32 result_mask;
 umode_t mode;
 unsigned int nlink;
 uint32_t blksize;
 u64 attributes;
 u64 attributes_mask;
# 41 "../include/linux/stat.h"
 u64 ino;
 dev_t dev;
 dev_t rdev;
 kuid_t uid;
 kgid_t gid;
 loff_t size;
 struct timespec64 atime;
 struct timespec64 mtime;
 struct timespec64 ctime;
 struct timespec64 btime;
 u64 blocks;
 u64 mnt_id;
 u32 dio_mem_align;
 u32 dio_offset_align;
 u64 change_cookie;
 u64 subvol;
 u32 atomic_write_unit_min;
 u32 atomic_write_unit_max;
 u32 atomic_write_segments_max;
};
# 23 "../include/linux/sysfs.h" 2


struct kobject;
struct module;
struct bin_attribute;
enum kobj_ns_type;

struct attribute {
 const char *name;
 umode_t mode;

 bool ignore_lockdep:1;
 struct lock_class_key *key;
 struct lock_class_key skey;

};
# 94 "../include/linux/sysfs.h"
struct attribute_group {
 const char *name;
 umode_t (*is_visible)(struct kobject *,
           struct attribute *, int);
 umode_t (*is_bin_visible)(struct kobject *,
        struct bin_attribute *, int);
 struct attribute **attrs;
 struct bin_attribute **bin_attrs;
};
# 289 "../include/linux/sysfs.h"
struct file;
struct vm_area_struct;
struct address_space;

struct bin_attribute {
 struct attribute attr;
 size_t size;
 void *private;
 struct address_space *(*f_mapping)(void);
 ssize_t (*read)(struct file *, struct kobject *, struct bin_attribute *,
   char *, loff_t, size_t);
 ssize_t (*write)(struct file *, struct kobject *, struct bin_attribute *,
    char *, loff_t, size_t);
 loff_t (*llseek)(struct file *, struct kobject *, struct bin_attribute *,
    loff_t, int);
 int (*mmap)(struct file *, struct kobject *, struct bin_attribute *attr,
      struct vm_area_struct *vma);
};
# 385 "../include/linux/sysfs.h"
struct sysfs_ops {
 ssize_t (*show)(struct kobject *, struct attribute *, char *);
 ssize_t (*store)(struct kobject *, struct attribute *, const char *, size_t);
};



int __attribute__((__warn_unused_result__)) sysfs_create_dir_ns(struct kobject *kobj, const void *ns);
void sysfs_remove_dir(struct kobject *kobj);
int __attribute__((__warn_unused_result__)) sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name,
         const void *new_ns);
int __attribute__((__warn_unused_result__)) sysfs_move_dir_ns(struct kobject *kobj,
       struct kobject *new_parent_kobj,
       const void *new_ns);
int __attribute__((__warn_unused_result__)) sysfs_create_mount_point(struct kobject *parent_kobj,
       const char *name);
void sysfs_remove_mount_point(struct kobject *parent_kobj,
         const char *name);

int __attribute__((__warn_unused_result__)) sysfs_create_file_ns(struct kobject *kobj,
          const struct attribute *attr,
          const void *ns);
int __attribute__((__warn_unused_result__)) sysfs_create_files(struct kobject *kobj,
       const struct attribute * const *attr);
int __attribute__((__warn_unused_result__)) sysfs_chmod_file(struct kobject *kobj,
      const struct attribute *attr, umode_t mode);
struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
        const struct attribute *attr);
void sysfs_unbreak_active_protection(struct kernfs_node *kn);
void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr,
     const void *ns);
bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr);
void sysfs_remove_files(struct kobject *kobj, const struct attribute * const *attr);

int __attribute__((__warn_unused_result__)) sysfs_create_bin_file(struct kobject *kobj,
           const struct bin_attribute *attr);
void sysfs_remove_bin_file(struct kobject *kobj,
      const struct bin_attribute *attr);

int __attribute__((__warn_unused_result__)) sysfs_create_link(struct kobject *kobj, struct kobject *target,
       const char *name);
int __attribute__((__warn_unused_result__)) sysfs_create_link_nowarn(struct kobject *kobj,
       struct kobject *target,
       const char *name);
void sysfs_remove_link(struct kobject *kobj, const char *name);

int sysfs_rename_link_ns(struct kobject *kobj, struct kobject *target,
    const char *old_name, const char *new_name,
    const void *new_ns);

void sysfs_delete_link(struct kobject *dir, struct kobject *targ,
   const char *name);

int __attribute__((__warn_unused_result__)) sysfs_create_group(struct kobject *kobj,
        const struct attribute_group *grp);
int __attribute__((__warn_unused_result__)) sysfs_create_groups(struct kobject *kobj,
         const struct attribute_group **groups);
int __attribute__((__warn_unused_result__)) sysfs_update_groups(struct kobject *kobj,
         const struct attribute_group **groups);
int sysfs_update_group(struct kobject *kobj,
         const struct attribute_group *grp);
void sysfs_remove_group(struct kobject *kobj,
   const struct attribute_group *grp);
void sysfs_remove_groups(struct kobject *kobj,
    const struct attribute_group **groups);
int sysfs_add_file_to_group(struct kobject *kobj,
   const struct attribute *attr, const char *group);
void sysfs_remove_file_from_group(struct kobject *kobj,
   const struct attribute *attr, const char *group);
int sysfs_merge_group(struct kobject *kobj,
         const struct attribute_group *grp);
void sysfs_unmerge_group(struct kobject *kobj,
         const struct attribute_group *grp);
int sysfs_add_link_to_group(struct kobject *kobj, const char *group_name,
       struct kobject *target, const char *link_name);
void sysfs_remove_link_from_group(struct kobject *kobj, const char *group_name,
      const char *link_name);
int compat_only_sysfs_link_entry_to_kobj(struct kobject *kobj,
      struct kobject *target_kobj,
      const char *target_name,
      const char *symlink_name);

void sysfs_notify(struct kobject *kobj, const char *dir, const char *attr);

int __attribute__((__warn_unused_result__)) sysfs_init(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sysfs_enable_ns(struct kernfs_node *kn)
{
 return kernfs_enable_ns(kn);
}

int sysfs_file_change_owner(struct kobject *kobj, const char *name, kuid_t kuid,
       kgid_t kgid);
int sysfs_change_owner(struct kobject *kobj, kuid_t kuid, kgid_t kgid);
int sysfs_link_change_owner(struct kobject *kobj, struct kobject *targ,
       const char *name, kuid_t kuid, kgid_t kgid);
int sysfs_groups_change_owner(struct kobject *kobj,
         const struct attribute_group **groups,
         kuid_t kuid, kgid_t kgid);
int sysfs_group_change_owner(struct kobject *kobj,
        const struct attribute_group *groups, kuid_t kuid,
        kgid_t kgid);
__attribute__((__format__(printf, 2, 3)))
int sysfs_emit(char *buf, const char *fmt, ...);
__attribute__((__format__(printf, 3, 4)))
int sysfs_emit_at(char *buf, int at, const char *fmt, ...);

ssize_t sysfs_bin_attr_simple_read(struct file *file, struct kobject *kobj,
       struct bin_attribute *attr, char *buf,
       loff_t off, size_t count);
# 764 "../include/linux/sysfs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) sysfs_create_file(struct kobject *kobj,
       const struct attribute *attr)
{
 return sysfs_create_file_ns(kobj, attr, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sysfs_remove_file(struct kobject *kobj,
         const struct attribute *attr)
{
 sysfs_remove_file_ns(kobj, attr, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sysfs_rename_link(struct kobject *kobj, struct kobject *target,
        const char *old_name, const char *new_name)
{
 return sysfs_rename_link_ns(kobj, target, old_name, new_name, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sysfs_notify_dirent(struct kernfs_node *kn)
{
 kernfs_notify(kn);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kernfs_node *sysfs_get_dirent(struct kernfs_node *parent,
         const char *name)
{
 return kernfs_find_and_get(parent, name);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kernfs_node *sysfs_get(struct kernfs_node *kn)
{
 kernfs_get(kn);
 return kn;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sysfs_put(struct kernfs_node *kn)
{
 kernfs_put(kn);
}
# 21 "../include/linux/kobject.h" 2
# 41 "../include/linux/kobject.h"
extern atomic64_t uevent_seqnum;
# 53 "../include/linux/kobject.h"
enum kobject_action {
 KOBJ_ADD,
 KOBJ_REMOVE,
 KOBJ_CHANGE,
 KOBJ_MOVE,
 KOBJ_ONLINE,
 KOBJ_OFFLINE,
 KOBJ_BIND,
 KOBJ_UNBIND,
};

struct kobject {
 const char *name;
 struct list_head entry;
 struct kobject *parent;
 struct kset *kset;
 const struct kobj_type *ktype;
 struct kernfs_node *sd;
 struct kref kref;

 unsigned int state_initialized:1;
 unsigned int state_in_sysfs:1;
 unsigned int state_add_uevent_sent:1;
 unsigned int state_remove_uevent_sent:1;
 unsigned int uevent_suppress:1;


 struct delayed_work release;

};

__attribute__((__format__(printf, 2, 3))) int kobject_set_name(struct kobject *kobj, const char *name, ...);
__attribute__((__format__(printf, 2, 0))) int kobject_set_name_vargs(struct kobject *kobj, const char *fmt, va_list vargs);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *kobject_name(const struct kobject *kobj)
{
 return kobj->name;
}

void kobject_init(struct kobject *kobj, const struct kobj_type *ktype);
__attribute__((__format__(printf, 3, 4))) __attribute__((__warn_unused_result__)) int kobject_add(struct kobject *kobj,
         struct kobject *parent,
         const char *fmt, ...);
__attribute__((__format__(printf, 4, 5))) __attribute__((__warn_unused_result__)) int kobject_init_and_add(struct kobject *kobj,
           const struct kobj_type *ktype,
           struct kobject *parent,
           const char *fmt, ...);

void kobject_del(struct kobject *kobj);

struct kobject * __attribute__((__warn_unused_result__)) kobject_create_and_add(const char *name, struct kobject *parent);

int __attribute__((__warn_unused_result__)) kobject_rename(struct kobject *, const char *new_name);
int __attribute__((__warn_unused_result__)) kobject_move(struct kobject *, struct kobject *);

struct kobject *kobject_get(struct kobject *kobj);
struct kobject * __attribute__((__warn_unused_result__)) kobject_get_unless_zero(struct kobject *kobj);
void kobject_put(struct kobject *kobj);

const void *kobject_namespace(const struct kobject *kobj);
void kobject_get_ownership(const struct kobject *kobj, kuid_t *uid, kgid_t *gid);
char *kobject_get_path(const struct kobject *kobj, gfp_t flag);

struct kobj_type {
 void (*release)(struct kobject *kobj);
 const struct sysfs_ops *sysfs_ops;
 const struct attribute_group **default_groups;
 const struct kobj_ns_type_operations *(*child_ns_type)(const struct kobject *kobj);
 const void *(*namespace)(const struct kobject *kobj);
 void (*get_ownership)(const struct kobject *kobj, kuid_t *uid, kgid_t *gid);
};

struct kobj_uevent_env {
 char *argv[3];
 char *envp[64];
 int envp_idx;
 char buf[2048];
 int buflen;
};

struct kset_uevent_ops {
 int (* const filter)(const struct kobject *kobj);
 const char *(* const name)(const struct kobject *kobj);
 int (* const uevent)(const struct kobject *kobj, struct kobj_uevent_env *env);
};

struct kobj_attribute {
 struct attribute attr;
 ssize_t (*show)(struct kobject *kobj, struct kobj_attribute *attr,
   char *buf);
 ssize_t (*store)(struct kobject *kobj, struct kobj_attribute *attr,
    const char *buf, size_t count);
};

extern const struct sysfs_ops kobj_sysfs_ops;

struct sock;
# 168 "../include/linux/kobject.h"
struct kset {
 struct list_head list;
 spinlock_t list_lock;
 struct kobject kobj;
 const struct kset_uevent_ops *uevent_ops;
} ;

void kset_init(struct kset *kset);
int __attribute__((__warn_unused_result__)) kset_register(struct kset *kset);
void kset_unregister(struct kset *kset);
struct kset * __attribute__((__warn_unused_result__)) kset_create_and_add(const char *name, const struct kset_uevent_ops *u,
            struct kobject *parent_kobj);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kset *to_kset(struct kobject *kobj)
{
 return kobj ? ({ void *__mptr = (void *)(kobj); _Static_assert(__builtin_types_compatible_p(typeof(*(kobj)), typeof(((struct kset *)0)->kobj)) || __builtin_types_compatible_p(typeof(*(kobj)), typeof(void)), "pointer type mismatch in container_of()"); ((struct kset *)(__mptr - __builtin_offsetof(struct kset, kobj))); }) : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kset *kset_get(struct kset *k)
{
 return k ? to_kset(kobject_get(&k->kobj)) : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kset_put(struct kset *k)
{
 kobject_put(&k->kobj);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct kobj_type *get_ktype(const struct kobject *kobj)
{
 return kobj->ktype;
}

struct kobject *kset_find_obj(struct kset *, const char *);


extern struct kobject *kernel_kobj;

extern struct kobject *mm_kobj;

extern struct kobject *hypervisor_kobj;

extern struct kobject *power_kobj;

extern struct kobject *firmware_kobj;

int kobject_uevent(struct kobject *kobj, enum kobject_action action);
int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
   char *envp[]);
int kobject_synth_uevent(struct kobject *kobj, const char *buf, size_t count);

__attribute__((__format__(printf, 2, 3)))
int add_uevent_var(struct kobj_uevent_env *env, const char *format, ...);
# 6 "../include/linux/cdev.h" 2
# 1 "../include/linux/kdev_t.h" 1




# 1 "../include/uapi/linux/kdev_t.h" 1
# 6 "../include/linux/kdev_t.h" 2
# 24 "../include/linux/kdev_t.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool old_valid_dev(dev_t dev)
{
 return ((unsigned int) ((dev) >> 20)) < 256 && ((unsigned int) ((dev) & ((1U << 20) - 1))) < 256;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u16 old_encode_dev(dev_t dev)
{
 return (((unsigned int) ((dev) >> 20)) << 8) | ((unsigned int) ((dev) & ((1U << 20) - 1)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) dev_t old_decode_dev(u16 val)
{
 return ((((val >> 8) & 255) << 20) | (val & 255));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32 new_encode_dev(dev_t dev)
{
 unsigned major = ((unsigned int) ((dev) >> 20));
 unsigned minor = ((unsigned int) ((dev) & ((1U << 20) - 1)));
 return (minor & 0xff) | (major << 8) | ((minor & ~0xff) << 12);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) dev_t new_decode_dev(u32 dev)
{
 unsigned major = (dev & 0xfff00) >> 8;
 unsigned minor = (dev & 0xff) | ((dev >> 12) & 0xfff00);
 return (((major) << 20) | (minor));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u64 huge_encode_dev(dev_t dev)
{
 return new_encode_dev(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) dev_t huge_decode_dev(u64 dev)
{
 return new_decode_dev(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int sysv_valid_dev(dev_t dev)
{
 return ((unsigned int) ((dev) >> 20)) < (1<<14) && ((unsigned int) ((dev) & ((1U << 20) - 1))) < (1<<18);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32 sysv_encode_dev(dev_t dev)
{
 return ((unsigned int) ((dev) & ((1U << 20) - 1))) | (((unsigned int) ((dev) >> 20)) << 18);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned sysv_major(u32 dev)
{
 return (dev >> 18) & 0x3fff;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned sysv_minor(u32 dev)
{
 return dev & 0x3ffff;
}
# 7 "../include/linux/cdev.h" 2

# 1 "../include/linux/device.h" 1
# 15 "../include/linux/device.h"
# 1 "../include/linux/dev_printk.h" 1
# 16 "../include/linux/dev_printk.h"
# 1 "../include/linux/ratelimit.h" 1








static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ratelimit_state_init(struct ratelimit_state *rs,
     int interval, int burst)
{
 memset(rs, 0, sizeof(*rs));

 do { static struct lock_class_key __key; __raw_spin_lock_init((&rs->lock), "&rs->lock", &__key, LD_WAIT_SPIN); } while (0);
 rs->interval = interval;
 rs->burst = burst;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ratelimit_default_init(struct ratelimit_state *rs)
{
 return ratelimit_state_init(rs, (5 * 300),
     10);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ratelimit_state_exit(struct ratelimit_state *rs)
{
 if (!(rs->flags & ((((1UL))) << (0))))
  return;

 if (rs->missed) {
  ({ do {} while (0); _printk("\001" "4" "%s: %d output lines suppressed due to ratelimiting\n", (__current_thread_info->task)->comm, rs->missed); });

  rs->missed = 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
ratelimit_set_flags(struct ratelimit_state *rs, unsigned long flags)
{
 rs->flags = flags;
}

extern struct ratelimit_state printk_ratelimit_state;
# 17 "../include/linux/dev_printk.h" 2





struct device;




struct dev_printk_info {
 char subsystem[16];
 char device[48];
};
# 60 "../include/linux/dev_printk.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 3, 0)))
int dev_vprintk_emit(int level, const struct device *dev,
       const char *fmt, va_list args)
{ return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 3, 4)))
int dev_printk_emit(int level, const struct device *dev, const char *fmt, ...)
{ return 0; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dev_printk(const char *level, const struct device *dev,
    struct va_format *vaf)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 3, 4)))
void _dev_printk(const char *level, const struct device *dev,
   const char *fmt, ...)
{}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_emerg(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_crit(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_alert(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_err(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_warn(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_notice(const struct device *dev, const char *fmt, ...)
{}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 2, 3)))
void _dev_info(const struct device *dev, const char *fmt, ...)
{}
# 278 "../include/linux/dev_printk.h"
__attribute__((__format__(printf, 3, 4))) int dev_err_probe(const struct device *dev, int err, const char *fmt, ...);
# 16 "../include/linux/device.h" 2
# 1 "../include/linux/energy_model.h" 1




# 1 "../include/linux/device.h" 1
# 6 "../include/linux/energy_model.h" 2




# 1 "../include/linux/sched/cpufreq.h" 1
# 11 "../include/linux/energy_model.h" 2
# 1 "../include/linux/sched/topology.h" 1






# 1 "../include/linux/sched/idle.h" 1






enum cpu_idle_type {
 __CPU_NOT_IDLE = 0,
 CPU_IDLE,
 CPU_NEWLY_IDLE,
 CPU_MAX_IDLE_TYPES
};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wake_up_if_idle(int cpu) { }
# 83 "../include/linux/sched/idle.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __current_set_polling(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __current_clr_polling(void) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) current_set_polling_and_test(void)
{
 return __builtin_expect(!!(tif_need_resched()), 0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) current_clr_polling_and_test(void)
{
 return __builtin_expect(!!(tif_need_resched()), 0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void current_clr_polling(void)
{
 __current_clr_polling();







 __asm__ __volatile__("": : :"memory");

 do { if (tif_need_resched()) set_preempt_need_resched(); } while (0);
}
# 8 "../include/linux/sched/topology.h" 2
# 216 "../include/linux/sched/topology.h"
struct sched_domain_attr;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
          struct sched_domain_attr *dattr_new)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
   struct sched_domain_attr *dattr_new)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpus_equal_capacity(int this_cpu, int that_cpu)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpus_share_cache(int this_cpu, int that_cpu)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpus_share_resources(int this_cpu, int that_cpu)
{
 return true;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rebuild_sched_domains_energy(void)
{
}
# 266 "../include/linux/sched/topology.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long arch_scale_cpu_capacity(int cpu)
{
 return (1L << 10);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned long arch_scale_hw_pressure(int cpu)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
void arch_update_hw_pressure(const struct cpumask *cpus,
      unsigned long capped_frequency)
{ }



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
unsigned int arch_scale_freq_ref(int cpu)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_node(const struct task_struct *p)
{
 return ((void)(task_cpu(p)),0);
}
# 12 "../include/linux/energy_model.h" 2
# 24 "../include/linux/energy_model.h"
struct em_perf_state {
 unsigned long performance;
 unsigned long frequency;
 unsigned long power;
 unsigned long cost;
 unsigned long flags;
};
# 48 "../include/linux/energy_model.h"
struct em_perf_table {
 struct callback_head rcu;
 struct kref kref;
 struct em_perf_state state[];
};
# 70 "../include/linux/energy_model.h"
struct em_perf_domain {
 struct em_perf_table *em_table;
 int nr_perf_states;
 unsigned long flags;
 unsigned long cpus[];
};
# 334 "../include/linux/energy_model.h"
struct em_data_callback {};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
    struct em_data_callback *cb, cpumask_t *span,
    bool microwatts)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void em_dev_unregister_perf_domain(struct device *dev)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct em_perf_domain *em_cpu_get(int cpu)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct em_perf_domain *em_pd_get(struct device *dev)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long em_cpu_energy(struct em_perf_domain *pd,
   unsigned long max_util, unsigned long sum_util,
   unsigned long allowed_cpu_cap)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int em_pd_nr_perf_states(struct em_perf_domain *pd)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct em_perf_table *em_table_alloc(struct em_perf_domain *pd)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void em_table_free(struct em_perf_table *table) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int em_dev_update_perf_domain(struct device *dev,
         struct em_perf_table *new_table)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct em_perf_state *em_perf_state_from_pd(struct em_perf_domain *pd)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int em_dev_compute_costs(struct device *dev, struct em_perf_state *table,
    int nr_states)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int em_dev_update_chip_binning(struct device *dev)
{
 return -22;
}
# 17 "../include/linux/device.h" 2
# 1 "../include/linux/ioport.h" 1
# 21 "../include/linux/ioport.h"
struct resource {
 resource_size_t start;
 resource_size_t end;
 const char *name;
 unsigned long flags;
 unsigned long desc;
 struct resource *parent, *sibling, *child;
};
# 135 "../include/linux/ioport.h"
enum {
 IORES_DESC_NONE = 0,
 IORES_DESC_CRASH_KERNEL = 1,
 IORES_DESC_ACPI_TABLES = 2,
 IORES_DESC_ACPI_NV_STORAGE = 3,
 IORES_DESC_PERSISTENT_MEMORY = 4,
 IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5,
 IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
 IORES_DESC_RESERVED = 7,
 IORES_DESC_SOFT_RESERVED = 8,
 IORES_DESC_CXL = 9,
};




enum {
 IORES_MAP_SYSTEM_RAM = ((((1UL))) << (0)),
 IORES_MAP_ENCRYPTED = ((((1UL))) << (1)),
};
# 203 "../include/linux/ioport.h"
typedef resource_size_t (*resource_alignf)(void *data,
        const struct resource *res,
        resource_size_t size,
        resource_size_t align);
# 221 "../include/linux/ioport.h"
struct resource_constraint {
 resource_size_t min, max, align;
 resource_alignf alignf;
 void *alignf_data;
};


extern struct resource ioport_resource;
extern struct resource iomem_resource;

extern struct resource *request_resource_conflict(struct resource *root, struct resource *new);
extern int request_resource(struct resource *root, struct resource *new);
extern int release_resource(struct resource *new);
void release_child_resources(struct resource *new);
extern void reserve_region_with_split(struct resource *root,
        resource_size_t start, resource_size_t end,
        const char *name);
extern struct resource *insert_resource_conflict(struct resource *parent, struct resource *new);
extern int insert_resource(struct resource *parent, struct resource *new);
extern void insert_resource_expand_to_fit(struct resource *root, struct resource *new);
extern int remove_resource(struct resource *old);
extern void arch_remove_reservations(struct resource *avail);
extern int allocate_resource(struct resource *root, struct resource *new,
        resource_size_t size, resource_size_t min,
        resource_size_t max, resource_size_t align,
        resource_alignf alignf,
        void *alignf_data);
struct resource *lookup_resource(struct resource *root, resource_size_t start);
int adjust_resource(struct resource *res, resource_size_t start,
      resource_size_t size);
resource_size_t resource_alignment(struct resource *res);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) resource_size_t resource_size(const struct resource *res)
{
 return res->end - res->start + 1;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long resource_type(const struct resource *res)
{
 return res->flags & 0x00001f00;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long resource_ext_type(const struct resource *res)
{
 return res->flags & 0x01000000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool resource_contains(const struct resource *r1, const struct resource *r2)
{
 if (resource_type(r1) != resource_type(r2))
  return false;
 if (r1->flags & 0x20000000 || r2->flags & 0x20000000)
  return false;
 return r1->start <= r2->start && r1->end >= r2->end;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool resource_overlaps(const struct resource *r1, const struct resource *r2)
{
       return r1->start <= r2->end && r1->end >= r2->start;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool resource_intersection(const struct resource *r1, const struct resource *r2,
      struct resource *r)
{
 if (!resource_overlaps(r1, r2))
  return false;
 r->start = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((r1->start) - (r2->start)) * 0l)) : (int *)8))), ((r1->start) > (r2->start) ? (r1->start) : (r2->start)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->start))(-1)) < ( typeof(r1->start))1)) * 0l)) : (int *)8))), (((typeof(r1->start))(-1)) < ( typeof(r1->start))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->start))(-1)) < ( typeof(r2->start))1)) * 0l)) : (int *)8))), (((typeof(r2->start))(-1)) < ( typeof(r2->start))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r1->start) + 0))(-1)) < ( typeof((r1->start) + 0))1)) * 0l)) : (int *)8))), (((typeof((r1->start) + 0))(-1)) < ( typeof((r1->start) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r2->start) + 0))(-1)) < ( typeof((r2->start) + 0))1)) * 0l)) : (int *)8))), (((typeof((r2->start) + 0))(-1)) < ( typeof((r2->start) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r1->start) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->start))(-1)) < ( typeof(r1->start))1)) * 0l)) : (int *)8))), (((typeof(r1->start))(-1)) < ( typeof(r1->start))1), 0), r1->start, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r2->start) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->start))(-1)) < ( typeof(r2->start))1)) * 0l)) : (int *)8))), (((typeof(r2->start))(-1)) < ( typeof(r2->start))1), 0), r2->start, -1) >= 0)), "max" "(" "r1->start" ", " "r2->start" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_143 = (r1->start); __auto_type __UNIQUE_ID_y_144 = (r2->start); ((__UNIQUE_ID_x_143) > (__UNIQUE_ID_y_144) ? 
(__UNIQUE_ID_x_143) : (__UNIQUE_ID_y_144)); }); }));
 r->end = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((r1->end) - (r2->end)) * 0l)) : (int *)8))), ((r1->end) < (r2->end) ? (r1->end) : (r2->end)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->end))(-1)) < ( typeof(r1->end))1)) * 0l)) : (int *)8))), (((typeof(r1->end))(-1)) < ( typeof(r1->end))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->end))(-1)) < ( typeof(r2->end))1)) * 0l)) : (int *)8))), (((typeof(r2->end))(-1)) < ( typeof(r2->end))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r1->end) + 0))(-1)) < ( typeof((r1->end) + 0))1)) * 0l)) : (int *)8))), (((typeof((r1->end) + 0))(-1)) < ( typeof((r1->end) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r2->end) + 0))(-1)) < ( typeof((r2->end) + 0))1)) * 0l)) : (int *)8))), (((typeof((r2->end) + 0))(-1)) < ( typeof((r2->end) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r1->end) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->end))(-1)) < ( typeof(r1->end))1)) * 0l)) : (int *)8))), (((typeof(r1->end))(-1)) < ( typeof(r1->end))1), 0), r1->end, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r2->end) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->end))(-1)) < ( typeof(r2->end))1)) * 0l)) : (int *)8))), (((typeof(r2->end))(-1)) < ( typeof(r2->end))1), 0), r2->end, -1) >= 0)), "min" "(" "r1->end" ", " "r2->end" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_145 = (r1->end); __auto_type __UNIQUE_ID_y_146 = (r2->end); ((__UNIQUE_ID_x_145) < (__UNIQUE_ID_y_146) ? (__UNIQUE_ID_x_145) : (__UNIQUE_ID_y_146)); }); }));
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool resource_union(const struct resource *r1, const struct resource *r2,
      struct resource *r)
{
 if (!resource_overlaps(r1, r2))
  return false;
 r->start = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((r1->start) - (r2->start)) * 0l)) : (int *)8))), ((r1->start) < (r2->start) ? (r1->start) : (r2->start)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->start))(-1)) < ( typeof(r1->start))1)) * 0l)) : (int *)8))), (((typeof(r1->start))(-1)) < ( typeof(r1->start))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->start))(-1)) < ( typeof(r2->start))1)) * 0l)) : (int *)8))), (((typeof(r2->start))(-1)) < ( typeof(r2->start))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r1->start) + 0))(-1)) < ( typeof((r1->start) + 0))1)) * 0l)) : (int *)8))), (((typeof((r1->start) + 0))(-1)) < ( typeof((r1->start) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r2->start) + 0))(-1)) < ( typeof((r2->start) + 0))1)) * 0l)) : (int *)8))), (((typeof((r2->start) + 0))(-1)) < ( typeof((r2->start) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r1->start) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->start))(-1)) < ( typeof(r1->start))1)) * 0l)) : (int *)8))), (((typeof(r1->start))(-1)) < ( typeof(r1->start))1), 0), r1->start, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r2->start) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->start))(-1)) < ( typeof(r2->start))1)) * 0l)) : (int *)8))), (((typeof(r2->start))(-1)) < ( typeof(r2->start))1), 0), r2->start, -1) >= 0)), "min" "(" "r1->start" ", " "r2->start" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_147 = (r1->start); __auto_type __UNIQUE_ID_y_148 = (r2->start); ((__UNIQUE_ID_x_147) < (__UNIQUE_ID_y_148) ? 
(__UNIQUE_ID_x_147) : (__UNIQUE_ID_y_148)); }); }));
 r->end = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((r1->end) - (r2->end)) * 0l)) : (int *)8))), ((r1->end) > (r2->end) ? (r1->end) : (r2->end)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->end))(-1)) < ( typeof(r1->end))1)) * 0l)) : (int *)8))), (((typeof(r1->end))(-1)) < ( typeof(r1->end))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->end))(-1)) < ( typeof(r2->end))1)) * 0l)) : (int *)8))), (((typeof(r2->end))(-1)) < ( typeof(r2->end))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r1->end) + 0))(-1)) < ( typeof((r1->end) + 0))1)) * 0l)) : (int *)8))), (((typeof((r1->end) + 0))(-1)) < ( typeof((r1->end) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((r2->end) + 0))(-1)) < ( typeof((r2->end) + 0))1)) * 0l)) : (int *)8))), (((typeof((r2->end) + 0))(-1)) < ( typeof((r2->end) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r1->end) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r1->end))(-1)) < ( typeof(r1->end))1)) * 0l)) : (int *)8))), (((typeof(r1->end))(-1)) < ( typeof(r1->end))1), 0), r1->end, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(r2->end) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(r2->end))(-1)) < ( typeof(r2->end))1)) * 0l)) : (int *)8))), (((typeof(r2->end))(-1)) < ( typeof(r2->end))1), 0), r2->end, -1) >= 0)), "max" "(" "r1->end" ", " "r2->end" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_149 = (r1->end); __auto_type __UNIQUE_ID_y_150 = (r2->end); ((__UNIQUE_ID_x_149) > (__UNIQUE_ID_y_150) ? (__UNIQUE_ID_x_149) : (__UNIQUE_ID_y_150)); }); }));
 return true;
}

int find_resource_space(struct resource *root, struct resource *new,
   resource_size_t size, struct resource_constraint *constraint);
# 314 "../include/linux/ioport.h"
extern struct resource * __request_region(struct resource *,
     resource_size_t start,
     resource_size_t n,
     const char *name, int flags);





extern void __release_region(struct resource *, resource_size_t,
    resource_size_t);
# 333 "../include/linux/ioport.h"
struct device;

extern int devm_request_resource(struct device *dev, struct resource *root,
     struct resource *new);
extern void devm_release_resource(struct device *dev, struct resource *new);






extern struct resource * __devm_request_region(struct device *dev,
    struct resource *parent, resource_size_t start,
    resource_size_t n, const char *name);






extern void __devm_release_region(struct device *dev, struct resource *parent,
      resource_size_t start, resource_size_t n);
extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size);
extern bool iomem_is_exclusive(u64 addr);
extern bool resource_is_exclusive(struct resource *resource, u64 addr,
      resource_size_t size);

extern int
walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
  void *arg, int (*func)(unsigned long, unsigned long, void *));
extern int
walk_mem_res(u64 start, u64 end, void *arg,
      int (*func)(struct resource *, void *));
extern int
walk_system_ram_res(u64 start, u64 end, void *arg,
      int (*func)(struct resource *, void *));
extern int
walk_system_ram_res_rev(u64 start, u64 end, void *arg,
   int (*func)(struct resource *, void *));
extern int
walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start, u64 end,
      void *arg, int (*func)(struct resource *, void *));

struct resource *devm_request_free_mem_region(struct device *dev,
  struct resource *base, unsigned long size);
struct resource *request_free_mem_region(struct resource *base,
  unsigned long size, const char *name);
struct resource *alloc_free_mem_region(struct resource *base,
  unsigned long size, unsigned long align, const char *name);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqresource_disabled(struct resource *res, u32 irq)
{
 res->start = irq;
 res->end = irq;
 res->flags |= 0x00000400 | 0x10000000 | 0x20000000;
}

extern struct address_space *iomem_get_mapping(void);
# 18 "../include/linux/device.h" 2

# 1 "../include/linux/klist.h" 1
# 17 "../include/linux/klist.h"
struct klist_node;
struct klist {
 spinlock_t k_lock;
 struct list_head k_list;
 void (*get)(struct klist_node *);
 void (*put)(struct klist_node *);
} __attribute__ ((aligned (sizeof(void *))));
# 34 "../include/linux/klist.h"
extern void klist_init(struct klist *k, void (*get)(struct klist_node *),
         void (*put)(struct klist_node *));

struct klist_node {
 void *n_klist;
 struct list_head n_node;
 struct kref n_ref;
};

extern void klist_add_tail(struct klist_node *n, struct klist *k);
extern void klist_add_head(struct klist_node *n, struct klist *k);
extern void klist_add_behind(struct klist_node *n, struct klist_node *pos);
extern void klist_add_before(struct klist_node *n, struct klist_node *pos);

extern void klist_del(struct klist_node *n);
extern void klist_remove(struct klist_node *n);

extern int klist_node_attached(struct klist_node *n);


struct klist_iter {
 struct klist *i_klist;
 struct klist_node *i_cur;
};


extern void klist_iter_init(struct klist *k, struct klist_iter *i);
extern void klist_iter_init_node(struct klist *k, struct klist_iter *i,
     struct klist_node *n);
extern void klist_iter_exit(struct klist_iter *i);
extern struct klist_node *klist_prev(struct klist_iter *i);
extern struct klist_node *klist_next(struct klist_iter *i);
# 20 "../include/linux/device.h" 2





# 1 "../include/linux/pm.h" 1
# 17 "../include/linux/pm.h"
# 1 "../include/linux/hrtimer.h" 1
# 15 "../include/linux/hrtimer.h"
# 1 "../include/linux/hrtimer_defs.h" 1





# 1 "../include/linux/timerqueue.h" 1







extern bool timerqueue_add(struct timerqueue_head *head,
      struct timerqueue_node *node);
extern bool timerqueue_del(struct timerqueue_head *head,
      struct timerqueue_node *node);
extern struct timerqueue_node *timerqueue_iterate_next(
      struct timerqueue_node *node);
# 22 "../include/linux/timerqueue.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
{
 struct rb_node *leftmost = (&head->rb_root)->rb_leftmost;

 return ({ typeof(leftmost) ____ptr = (leftmost); ____ptr ? ({ void *__mptr = (void *)(____ptr); _Static_assert(__builtin_types_compatible_p(typeof(*(____ptr)), typeof(((struct timerqueue_node *)0)->node)) || __builtin_types_compatible_p(typeof(*(____ptr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct timerqueue_node *)(__mptr - __builtin_offsetof(struct timerqueue_node, node))); }) : ((void *)0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void timerqueue_init(struct timerqueue_node *node)
{
 ((&node->node)->__rb_parent_color = (unsigned long)(&node->node));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool timerqueue_node_queued(struct timerqueue_node *node)
{
 return !((&node->node)->__rb_parent_color == (unsigned long)(&node->node));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void timerqueue_init_head(struct timerqueue_head *head)
{
 head->rb_root = (struct rb_root_cached) { {((void *)0), }, ((void *)0) };
}
# 7 "../include/linux/hrtimer_defs.h" 2
# 47 "../include/linux/hrtimer_defs.h"
struct hrtimer_clock_base {
 struct hrtimer_cpu_base *cpu_base;
 unsigned int index;
 clockid_t clockid;
 seqcount_raw_spinlock_t seq;
 struct hrtimer *running;
 struct timerqueue_head active;
 ktime_t (*get_time)(void);
 ktime_t offset;
} ;

enum hrtimer_base_type {
 HRTIMER_BASE_MONOTONIC,
 HRTIMER_BASE_REALTIME,
 HRTIMER_BASE_BOOTTIME,
 HRTIMER_BASE_TAI,
 HRTIMER_BASE_MONOTONIC_SOFT,
 HRTIMER_BASE_REALTIME_SOFT,
 HRTIMER_BASE_BOOTTIME_SOFT,
 HRTIMER_BASE_TAI_SOFT,
 HRTIMER_MAX_CLOCK_BASES,
};
# 103 "../include/linux/hrtimer_defs.h"
struct hrtimer_cpu_base {
 raw_spinlock_t lock;
 unsigned int cpu;
 unsigned int active_bases;
 unsigned int clock_was_set_seq;
 unsigned int hres_active : 1,
     in_hrtirq : 1,
     hang_detected : 1,
     softirq_activated : 1,
     online : 1;

 unsigned int nr_events;
 unsigned short nr_retries;
 unsigned short nr_hangs;
 unsigned int max_hang_time;





 ktime_t expires_next;
 struct hrtimer *next_timer;
 ktime_t softirq_expires_next;
 struct hrtimer *softirq_next_timer;
 struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
} __attribute__((__aligned__((1 << (5)))));
# 16 "../include/linux/hrtimer.h" 2
# 35 "../include/linux/hrtimer.h"
enum hrtimer_mode {
 HRTIMER_MODE_ABS = 0x00,
 HRTIMER_MODE_REL = 0x01,
 HRTIMER_MODE_PINNED = 0x02,
 HRTIMER_MODE_SOFT = 0x04,
 HRTIMER_MODE_HARD = 0x08,

 HRTIMER_MODE_ABS_PINNED = HRTIMER_MODE_ABS | HRTIMER_MODE_PINNED,
 HRTIMER_MODE_REL_PINNED = HRTIMER_MODE_REL | HRTIMER_MODE_PINNED,

 HRTIMER_MODE_ABS_SOFT = HRTIMER_MODE_ABS | HRTIMER_MODE_SOFT,
 HRTIMER_MODE_REL_SOFT = HRTIMER_MODE_REL | HRTIMER_MODE_SOFT,

 HRTIMER_MODE_ABS_PINNED_SOFT = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_SOFT,
 HRTIMER_MODE_REL_PINNED_SOFT = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_SOFT,

 HRTIMER_MODE_ABS_HARD = HRTIMER_MODE_ABS | HRTIMER_MODE_HARD,
 HRTIMER_MODE_REL_HARD = HRTIMER_MODE_REL | HRTIMER_MODE_HARD,

 HRTIMER_MODE_ABS_PINNED_HARD = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_HARD,
 HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
};
# 92 "../include/linux/hrtimer.h"
struct hrtimer_sleeper {
 struct hrtimer timer;
 struct task_struct *task;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_set_expires(struct hrtimer *timer, ktime_t time)
{
 timer->node.expires = time;
 timer->_softexpires = time;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_set_expires_range(struct hrtimer *timer, ktime_t time, ktime_t delta)
{
 timer->_softexpires = time;
 timer->node.expires = ktime_add_safe(time, delta);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_set_expires_range_ns(struct hrtimer *timer, ktime_t time, u64 delta)
{
 timer->_softexpires = time;
 timer->node.expires = ktime_add_safe(time, ns_to_ktime(delta));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_set_expires_tv64(struct hrtimer *timer, s64 tv64)
{
 timer->node.expires = tv64;
 timer->_softexpires = tv64;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_add_expires(struct hrtimer *timer, ktime_t time)
{
 timer->node.expires = ktime_add_safe(timer->node.expires, time);
 timer->_softexpires = ktime_add_safe(timer->_softexpires, time);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_add_expires_ns(struct hrtimer *timer, u64 ns)
{
 timer->node.expires = ((timer->node.expires) + (ns));
 timer->_softexpires = ((timer->_softexpires) + (ns));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t hrtimer_get_expires(const struct hrtimer *timer)
{
 return timer->node.expires;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t hrtimer_get_softexpires(const struct hrtimer *timer)
{
 return timer->_softexpires;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 hrtimer_get_expires_tv64(const struct hrtimer *timer)
{
 return timer->node.expires;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 hrtimer_get_softexpires_tv64(const struct hrtimer *timer)
{
 return timer->_softexpires;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 hrtimer_get_expires_ns(const struct hrtimer *timer)
{
 return ktime_to_ns(timer->node.expires);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t hrtimer_expires_remaining(const struct hrtimer *timer)
{
 return ((timer->node.expires) - (timer->base->get_time()));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t hrtimer_cb_get_time(struct hrtimer *timer)
{
 return timer->base->get_time();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hrtimer_is_hres_active(struct hrtimer *timer)
{
 return 1 ?
  timer->base->cpu_base->hres_active : 0;
}


struct clock_event_device;

extern void hrtimer_interrupt(struct clock_event_device *dev);

extern unsigned int hrtimer_resolution;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t
__hrtimer_expires_remaining_adjusted(const struct hrtimer *timer, ktime_t now)
{
 ktime_t rem = ((timer->node.expires) - (now));





 if (0 && timer->is_rel)
  rem -= hrtimer_resolution;
 return rem;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t
hrtimer_expires_remaining_adjusted(const struct hrtimer *timer)
{
 return __hrtimer_expires_remaining_adjusted(timer,
          timer->base->get_time());
}


extern void timerfd_clock_was_set(void);
extern void timerfd_resume(void);





extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_tick_cpu_device; extern __attribute__((section(".data" ""))) __typeof__(struct tick_device) tick_cpu_device;




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_cancel_wait_running(struct hrtimer *timer)
{
 __vmyield();
}





extern void hrtimer_init(struct hrtimer *timer, clockid_t which_clock,
    enum hrtimer_mode mode);
extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
     enum hrtimer_mode mode);


extern void hrtimer_init_on_stack(struct hrtimer *timer, clockid_t which_clock,
      enum hrtimer_mode mode);
extern void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
       clockid_t clock_id,
       enum hrtimer_mode mode);

extern void destroy_hrtimer_on_stack(struct hrtimer *timer);
# 261 "../include/linux/hrtimer.h"
extern void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
       u64 range_ns, const enum hrtimer_mode mode);
# 272 "../include/linux/hrtimer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_start(struct hrtimer *timer, ktime_t tim,
     const enum hrtimer_mode mode)
{
 hrtimer_start_range_ns(timer, tim, 0, mode);
}

extern int hrtimer_cancel(struct hrtimer *timer);
extern int hrtimer_try_to_cancel(struct hrtimer *timer);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_start_expires(struct hrtimer *timer,
      enum hrtimer_mode mode)
{
 u64 delta;
 ktime_t soft, hard;
 soft = hrtimer_get_softexpires(timer);
 hard = hrtimer_get_expires(timer);
 delta = ktime_to_ns(((hard) - (soft)));
 hrtimer_start_range_ns(timer, soft, delta, mode);
}

void hrtimer_sleeper_start_expires(struct hrtimer_sleeper *sl,
       enum hrtimer_mode mode);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hrtimer_restart(struct hrtimer *timer)
{
 hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
}


extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t hrtimer_get_remaining(const struct hrtimer *timer)
{
 return __hrtimer_get_remaining(timer, false);
}

extern u64 hrtimer_get_next_event(void);
extern u64 hrtimer_next_event_without(const struct hrtimer *exclude);

extern bool hrtimer_active(const struct hrtimer *timer);
# 325 "../include/linux/hrtimer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hrtimer_is_queued(struct hrtimer *timer)
{

 return !!(({ do {
   __attribute__((__noreturn__)) extern void __compiletime_assert_151(void)
    __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
   if (!((sizeof(timer->state) == sizeof(char) ||
          sizeof(timer->state) == sizeof(short) ||
          sizeof(timer->state) == sizeof(int) ||
          sizeof(timer->state) == sizeof(long)) ||
         sizeof(timer->state) == sizeof(long long)))
    __compiletime_assert_151();
  } while (0);
  (*(const volatile typeof( _Generic((timer->state),
   char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0,
   unsigned short: (unsigned short)0, signed short: (signed short)0,
   unsigned int: (unsigned int)0, signed int: (signed int)0,
   unsigned long: (unsigned long)0, signed long: (signed long)0,
   unsigned long long: (unsigned long long)0, signed long long: (signed long long)0,
   default: (timer->state))) *)&(timer->state)); }) & 0x01);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hrtimer_callback_running(struct hrtimer *timer)
{
 return timer->base->running == timer;
}


extern u64
hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval);
# 352 "../include/linux/hrtimer.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 hrtimer_forward_now(struct hrtimer *timer,
          ktime_t interval)
{
 return hrtimer_forward(timer, timer->base->get_time(), interval);
}



extern int nanosleep_copyout(struct restart_block *, struct timespec64 *);
extern long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
         const clockid_t clockid);

extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta,
        const enum hrtimer_mode mode);
extern int schedule_hrtimeout_range_clock(ktime_t *expires,
       u64 delta,
       const enum hrtimer_mode mode,
       clockid_t clock_id);
extern int schedule_hrtimeout(ktime_t *expires, const enum hrtimer_mode mode);


extern void hrtimer_run_queues(void);


extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) hrtimers_init(void);


extern void sysrq_timer_list_show(void);

int hrtimers_prepare_cpu(unsigned int cpu);
# 18 "../include/linux/pm.h" 2





extern void (*pm_power_off)(void);

struct device;




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_vt_switch_required(struct device *dev, bool required)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_vt_switch_unregister(struct device *dev)
{
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cxl_mem_active(void)
{
 return false;
}
# 58 "../include/linux/pm.h"
typedef struct pm_message {
 int event;
} pm_message_t;
# 286 "../include/linux/pm.h"
struct dev_pm_ops {
 int (*prepare)(struct device *dev);
 void (*complete)(struct device *dev);
 int (*suspend)(struct device *dev);
 int (*resume)(struct device *dev);
 int (*freeze)(struct device *dev);
 int (*thaw)(struct device *dev);
 int (*poweroff)(struct device *dev);
 int (*restore)(struct device *dev);
 int (*suspend_late)(struct device *dev);
 int (*resume_early)(struct device *dev);
 int (*freeze_late)(struct device *dev);
 int (*thaw_early)(struct device *dev);
 int (*poweroff_late)(struct device *dev);
 int (*restore_early)(struct device *dev);
 int (*suspend_noirq)(struct device *dev);
 int (*resume_noirq)(struct device *dev);
 int (*freeze_noirq)(struct device *dev);
 int (*thaw_noirq)(struct device *dev);
 int (*poweroff_noirq)(struct device *dev);
 int (*restore_noirq)(struct device *dev);
 int (*runtime_suspend)(struct device *dev);
 int (*runtime_resume)(struct device *dev);
 int (*runtime_idle)(struct device *dev);
};
# 597 "../include/linux/pm.h"
enum rpm_status {
 RPM_INVALID = -1,
 RPM_ACTIVE = 0,
 RPM_RESUMING,
 RPM_SUSPENDED,
 RPM_SUSPENDING,
};
# 620 "../include/linux/pm.h"
enum rpm_request {
 RPM_REQ_NONE = 0,
 RPM_REQ_IDLE,
 RPM_REQ_SUSPEND,
 RPM_REQ_AUTOSUSPEND,
 RPM_REQ_RESUME,
};

struct wakeup_source;
struct wake_irq;
struct pm_domain_data;

struct pm_subsys_data {
 spinlock_t lock;
 unsigned int refcount;
# 643 "../include/linux/pm.h"
};
# 663 "../include/linux/pm.h"
struct dev_pm_info {
 pm_message_t power_state;
 bool can_wakeup:1;
 bool async_suspend:1;
 bool in_dpm_list:1;
 bool is_prepared:1;
 bool is_suspended:1;
 bool is_noirq_suspended:1;
 bool is_late_suspended:1;
 bool no_pm:1;
 bool early_init:1;
 bool direct_complete:1;
 u32 driver_flags;
 spinlock_t lock;
# 688 "../include/linux/pm.h"
 bool should_wakeup:1;
# 721 "../include/linux/pm.h"
 struct pm_subsys_data *subsys_data;
 void (*set_latency_tolerance)(struct device *, s32);
 struct dev_pm_qos *qos;
};

extern int dev_pm_get_subsys_data(struct device *dev);
extern void dev_pm_put_subsys_data(struct device *dev);
# 744 "../include/linux/pm.h"
struct dev_pm_domain {
 struct dev_pm_ops ops;
 int (*start)(struct device *dev);
 void (*detach)(struct device *dev, bool power_off);
 int (*activate)(struct device *dev);
 void (*sync)(struct device *dev);
 void (*dismiss)(struct device *dev);
 int (*set_performance_state)(struct device *dev, unsigned int state);
};
# 864 "../include/linux/pm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dpm_suspend_start(pm_message_t state)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_pm_wait_for_dev(struct device *a, struct device *b)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dpm_for_each_dev(void *data, void (*fn)(struct device *, void *))
{
}
# 903 "../include/linux/pm.h"
enum dpm_order {
 DPM_ORDER_NONE,
 DPM_ORDER_DEV_AFTER_PARENT,
 DPM_ORDER_PARENT_BEFORE_DEV,
 DPM_ORDER_DEV_LAST,
};
# 26 "../include/linux/device.h" 2




# 1 "../include/linux/device/bus.h" 1
# 21 "../include/linux/device/bus.h"
struct device_driver;
struct fwnode_handle;
# 77 "../include/linux/device/bus.h"
struct bus_type {
 const char *name;
 const char *dev_name;
 const struct attribute_group **bus_groups;
 const struct attribute_group **dev_groups;
 const struct attribute_group **drv_groups;

 int (*match)(struct device *dev, const struct device_driver *drv);
 int (*uevent)(const struct device *dev, struct kobj_uevent_env *env);
 int (*probe)(struct device *dev);
 void (*sync_state)(struct device *dev);
 void (*remove)(struct device *dev);
 void (*shutdown)(struct device *dev);

 int (*online)(struct device *dev);
 int (*offline)(struct device *dev);

 int (*suspend)(struct device *dev, pm_message_t state);
 int (*resume)(struct device *dev);

 int (*num_vf)(struct device *dev);

 int (*dma_configure)(struct device *dev);
 void (*dma_cleanup)(struct device *dev);

 const struct dev_pm_ops *pm;

 bool need_parent_lock;
};

int __attribute__((__warn_unused_result__)) bus_register(const struct bus_type *bus);

void bus_unregister(const struct bus_type *bus);

int __attribute__((__warn_unused_result__)) bus_rescan_devices(const struct bus_type *bus);

struct bus_attribute {
 struct attribute attr;
 ssize_t (*show)(const struct bus_type *bus, char *buf);
 ssize_t (*store)(const struct bus_type *bus, const char *buf, size_t count);
};
# 126 "../include/linux/device/bus.h"
int __attribute__((__warn_unused_result__)) bus_create_file(const struct bus_type *bus, struct bus_attribute *attr);
void bus_remove_file(const struct bus_type *bus, struct bus_attribute *attr);


int device_match_name(struct device *dev, const void *name);
int device_match_of_node(struct device *dev, const void *np);
int device_match_fwnode(struct device *dev, const void *fwnode);
int device_match_devt(struct device *dev, const void *pdevt);
int device_match_acpi_dev(struct device *dev, const void *adev);
int device_match_acpi_handle(struct device *dev, const void *handle);
int device_match_any(struct device *dev, const void *unused);


int bus_for_each_dev(const struct bus_type *bus, struct device *start, void *data,
       int (*fn)(struct device *dev, void *data));
struct device *bus_find_device(const struct bus_type *bus, struct device *start,
          const void *data,
          int (*match)(struct device *dev, const void *data));







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *bus_find_device_by_name(const struct bus_type *bus,
           struct device *start,
           const char *name)
{
 return bus_find_device(bus, start, name, device_match_name);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
bus_find_device_by_of_node(const struct bus_type *bus, const struct device_node *np)
{
 return bus_find_device(bus, ((void *)0), np, device_match_of_node);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
bus_find_device_by_fwnode(const struct bus_type *bus, const struct fwnode_handle *fwnode)
{
 return bus_find_device(bus, ((void *)0), fwnode, device_match_fwnode);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *bus_find_device_by_devt(const struct bus_type *bus,
           dev_t devt)
{
 return bus_find_device(bus, ((void *)0), &devt, device_match_devt);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
bus_find_next_device(const struct bus_type *bus,struct device *cur)
{
 return bus_find_device(bus, cur, ((void *)0), device_match_any);
}
# 221 "../include/linux/device/bus.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
bus_find_device_by_acpi_dev(const struct bus_type *bus, const void *adev)
{
 return ((void *)0);
}


int bus_for_each_drv(const struct bus_type *bus, struct device_driver *start,
       void *data, int (*fn)(struct device_driver *, void *));
void bus_sort_breadthfirst(const struct bus_type *bus,
      int (*compare)(const struct device *a,
       const struct device *b));






struct notifier_block;

int bus_register_notifier(const struct bus_type *bus, struct notifier_block *nb);
int bus_unregister_notifier(const struct bus_type *bus, struct notifier_block *nb);
# 263 "../include/linux/device/bus.h"
enum bus_notifier_event {
 BUS_NOTIFY_ADD_DEVICE,
 BUS_NOTIFY_DEL_DEVICE,
 BUS_NOTIFY_REMOVED_DEVICE,
 BUS_NOTIFY_BIND_DRIVER,
 BUS_NOTIFY_BOUND_DRIVER,
 BUS_NOTIFY_UNBIND_DRIVER,
 BUS_NOTIFY_UNBOUND_DRIVER,
 BUS_NOTIFY_DRIVER_NOT_BOUND,
};

struct kset *bus_get_kset(const struct bus_type *bus);
struct device *bus_get_dev_root(const struct bus_type *bus);
# 31 "../include/linux/device.h" 2
# 1 "../include/linux/device/class.h" 1
# 22 "../include/linux/device/class.h"
struct device;
struct fwnode_handle;
# 50 "../include/linux/device/class.h"
struct class {
 const char *name;

 const struct attribute_group **class_groups;
 const struct attribute_group **dev_groups;

 int (*dev_uevent)(const struct device *dev, struct kobj_uevent_env *env);
 char *(*devnode)(const struct device *dev, umode_t *mode);

 void (*class_release)(const struct class *class);
 void (*dev_release)(struct device *dev);

 int (*shutdown_pre)(struct device *dev);

 const struct kobj_ns_type_operations *ns_type;
 const void *(*namespace)(const struct device *dev);

 void (*get_ownership)(const struct device *dev, kuid_t *uid, kgid_t *gid);

 const struct dev_pm_ops *pm;
};

struct class_dev_iter {
 struct klist_iter ki;
 const struct device_type *type;
 struct subsys_private *sp;
};

int __attribute__((__warn_unused_result__)) class_register(const struct class *class);
void class_unregister(const struct class *class);
bool class_is_registered(const struct class *class);

struct class_compat;
struct class_compat *class_compat_register(const char *name);
void class_compat_unregister(struct class_compat *cls);
int class_compat_create_link(struct class_compat *cls, struct device *dev,
        struct device *device_link);
void class_compat_remove_link(struct class_compat *cls, struct device *dev,
         struct device *device_link);

void class_dev_iter_init(struct class_dev_iter *iter, const struct class *class,
    const struct device *start, const struct device_type *type);
struct device *class_dev_iter_next(struct class_dev_iter *iter);
void class_dev_iter_exit(struct class_dev_iter *iter);

int class_for_each_device(const struct class *class, const struct device *start, void *data,
     int (*fn)(struct device *dev, void *data));
struct device *class_find_device(const struct class *class, const struct device *start,
     const void *data, int (*match)(struct device *, const void *));







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *class_find_device_by_name(const struct class *class,
             const char *name)
{
 return class_find_device(class, ((void *)0), name, device_match_name);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *class_find_device_by_of_node(const struct class *class,
         const struct device_node *np)
{
 return class_find_device(class, ((void *)0), np, device_match_of_node);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *class_find_device_by_fwnode(const struct class *class,
        const struct fwnode_handle *fwnode)
{
 return class_find_device(class, ((void *)0), fwnode, device_match_fwnode);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *class_find_device_by_devt(const struct class *class,
             dev_t devt)
{
 return class_find_device(class, ((void *)0), &devt, device_match_devt);
}
# 162 "../include/linux/device/class.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *class_find_device_by_acpi_dev(const struct class *class,
          const void *adev)
{
 return ((void *)0);
}


struct class_attribute {
 struct attribute attr;
 ssize_t (*show)(const struct class *class, const struct class_attribute *attr,
   char *buf);
 ssize_t (*store)(const struct class *class, const struct class_attribute *attr,
    const char *buf, size_t count);
};
# 184 "../include/linux/device/class.h"
int __attribute__((__warn_unused_result__)) class_create_file_ns(const struct class *class, const struct class_attribute *attr,
          const void *ns);
void class_remove_file_ns(const struct class *class, const struct class_attribute *attr,
     const void *ns);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) class_create_file(const struct class *class,
       const struct class_attribute *attr)
{
 return class_create_file_ns(class, attr, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_remove_file(const struct class *class,
         const struct class_attribute *attr)
{
 return class_remove_file_ns(class, attr, ((void *)0));
}


struct class_attribute_string {
 struct class_attribute attr;
 char *str;
};
# 214 "../include/linux/device/class.h"
ssize_t show_class_attr_string(const struct class *class, const struct class_attribute *attr,
          char *buf);

struct class_interface {
 struct list_head node;
 const struct class *class;

 int (*add_dev) (struct device *dev);
 void (*remove_dev) (struct device *dev);
};

int __attribute__((__warn_unused_result__)) class_interface_register(struct class_interface *);
void class_interface_unregister(struct class_interface *);

struct class * __attribute__((__warn_unused_result__)) class_create(const char *name);
void class_destroy(const struct class *cls);
# 32 "../include/linux/device.h" 2
# 1 "../include/linux/device/driver.h" 1
# 21 "../include/linux/device/driver.h"
# 1 "../include/linux/module.h" 1
# 14 "../include/linux/module.h"
# 1 "../include/linux/buildid.h" 1








struct vm_area_struct;
int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
     __u32 *size);
int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_vmlinux_build_id(void) { }
# 15 "../include/linux/module.h" 2


# 1 "../include/linux/kmod.h" 1








# 1 "../include/linux/umh.h" 1








# 1 "../include/linux/sysctl.h" 1
# 30 "../include/linux/sysctl.h"
# 1 "../include/uapi/linux/sysctl.h" 1
# 35 "../include/uapi/linux/sysctl.h"
struct __sysctl_args {
 int *name;
 int nlen;
 void *oldval;
 size_t *oldlenp;
 void *newval;
 size_t newlen;
 unsigned long __unused[4];
};





enum
{
 CTL_KERN=1,
 CTL_VM=2,
 CTL_NET=3,
 CTL_PROC=4,
 CTL_FS=5,
 CTL_DEBUG=6,
 CTL_DEV=7,
 CTL_BUS=8,
 CTL_ABI=9,
 CTL_CPU=10,
 CTL_ARLAN=254,
 CTL_S390DBF=5677,
 CTL_SUNRPC=7249,
 CTL_PM=9899,
 CTL_FRV=9898,
};


enum
{
 CTL_BUS_ISA=1
};


enum
{
 INOTIFY_MAX_USER_INSTANCES=1,
 INOTIFY_MAX_USER_WATCHES=2,
 INOTIFY_MAX_QUEUED_EVENTS=3
};


enum
{
 KERN_OSTYPE=1,
 KERN_OSRELEASE=2,
 KERN_OSREV=3,
 KERN_VERSION=4,
 KERN_SECUREMASK=5,
 KERN_PROF=6,
 KERN_NODENAME=7,
 KERN_DOMAINNAME=8,

 KERN_PANIC=15,
 KERN_REALROOTDEV=16,

 KERN_SPARC_REBOOT=21,
 KERN_CTLALTDEL=22,
 KERN_PRINTK=23,
 KERN_NAMETRANS=24,
 KERN_PPC_HTABRECLAIM=25,
 KERN_PPC_ZEROPAGED=26,
 KERN_PPC_POWERSAVE_NAP=27,
 KERN_MODPROBE=28,
 KERN_SG_BIG_BUFF=29,
 KERN_ACCT=30,
 KERN_PPC_L2CR=31,

 KERN_RTSIGNR=32,
 KERN_RTSIGMAX=33,

 KERN_SHMMAX=34,
 KERN_MSGMAX=35,
 KERN_MSGMNB=36,
 KERN_MSGPOOL=37,
 KERN_SYSRQ=38,
 KERN_MAX_THREADS=39,
  KERN_RANDOM=40,
  KERN_SHMALL=41,
  KERN_MSGMNI=42,
  KERN_SEM=43,
  KERN_SPARC_STOP_A=44,
  KERN_SHMMNI=45,
 KERN_OVERFLOWUID=46,
 KERN_OVERFLOWGID=47,
 KERN_SHMPATH=48,
 KERN_HOTPLUG=49,
 KERN_IEEE_EMULATION_WARNINGS=50,
 KERN_S390_USER_DEBUG_LOGGING=51,
 KERN_CORE_USES_PID=52,
 KERN_TAINTED=53,
 KERN_CADPID=54,
 KERN_PIDMAX=55,
   KERN_CORE_PATTERN=56,
 KERN_PANIC_ON_OOPS=57,
 KERN_HPPA_PWRSW=58,
 KERN_HPPA_UNALIGNED=59,
 KERN_PRINTK_RATELIMIT=60,
 KERN_PRINTK_RATELIMIT_BURST=61,
 KERN_PTY=62,
 KERN_NGROUPS_MAX=63,
 KERN_SPARC_SCONS_PWROFF=64,
 KERN_HZ_TIMER=65,
 KERN_UNKNOWN_NMI_PANIC=66,
 KERN_BOOTLOADER_TYPE=67,
 KERN_RANDOMIZE=68,
 KERN_SETUID_DUMPABLE=69,
 KERN_SPIN_RETRY=70,
 KERN_ACPI_VIDEO_FLAGS=71,
 KERN_IA64_UNALIGNED=72,
 KERN_COMPAT_LOG=73,
 KERN_MAX_LOCK_DEPTH=74,
 KERN_NMI_WATCHDOG=75,
 KERN_PANIC_ON_NMI=76,
 KERN_PANIC_ON_WARN=77,
 KERN_PANIC_PRINT=78,
};




enum
{
 VM_UNUSED1=1,
 VM_UNUSED2=2,
 VM_UNUSED3=3,
 VM_UNUSED4=4,
 VM_OVERCOMMIT_MEMORY=5,
 VM_UNUSED5=6,
 VM_UNUSED7=7,
 VM_UNUSED8=8,
 VM_UNUSED9=9,
 VM_PAGE_CLUSTER=10,
 VM_DIRTY_BACKGROUND=11,
 VM_DIRTY_RATIO=12,
 VM_DIRTY_WB_CS=13,
 VM_DIRTY_EXPIRE_CS=14,
 VM_NR_PDFLUSH_THREADS=15,
 VM_OVERCOMMIT_RATIO=16,
 VM_PAGEBUF=17,
 VM_HUGETLB_PAGES=18,
 VM_SWAPPINESS=19,
 VM_LOWMEM_RESERVE_RATIO=20,
 VM_MIN_FREE_KBYTES=21,
 VM_MAX_MAP_COUNT=22,
 VM_LAPTOP_MODE=23,
 VM_BLOCK_DUMP=24,
 VM_HUGETLB_GROUP=25,
 VM_VFS_CACHE_PRESSURE=26,
 VM_LEGACY_VA_LAYOUT=27,
 VM_SWAP_TOKEN_TIMEOUT=28,
 VM_DROP_PAGECACHE=29,
 VM_PERCPU_PAGELIST_FRACTION=30,
 VM_ZONE_RECLAIM_MODE=31,
 VM_MIN_UNMAPPED=32,
 VM_PANIC_ON_OOM=33,
 VM_VDSO_ENABLED=34,
 VM_MIN_SLAB=35,
};



enum
{
 NET_CORE=1,
 NET_ETHER=2,
 NET_802=3,
 NET_UNIX=4,
 NET_IPV4=5,
 NET_IPX=6,
 NET_ATALK=7,
 NET_NETROM=8,
 NET_AX25=9,
 NET_BRIDGE=10,
 NET_ROSE=11,
 NET_IPV6=12,
 NET_X25=13,
 NET_TR=14,
 NET_DECNET=15,
 NET_ECONET=16,
 NET_SCTP=17,
 NET_LLC=18,
 NET_NETFILTER=19,
 NET_DCCP=20,
 NET_IRDA=412,
};


enum
{
 RANDOM_POOLSIZE=1,
 RANDOM_ENTROPY_COUNT=2,
 RANDOM_READ_THRESH=3,
 RANDOM_WRITE_THRESH=4,
 RANDOM_BOOT_ID=5,
 RANDOM_UUID=6
};


enum
{
 PTY_MAX=1,
 PTY_NR=2
};


enum
{
 BUS_ISA_MEM_BASE=1,
 BUS_ISA_PORT_BASE=2,
 BUS_ISA_PORT_SHIFT=3
};


enum
{
 NET_CORE_WMEM_MAX=1,
 NET_CORE_RMEM_MAX=2,
 NET_CORE_WMEM_DEFAULT=3,
 NET_CORE_RMEM_DEFAULT=4,

 NET_CORE_MAX_BACKLOG=6,
 NET_CORE_FASTROUTE=7,
 NET_CORE_MSG_COST=8,
 NET_CORE_MSG_BURST=9,
 NET_CORE_OPTMEM_MAX=10,
 NET_CORE_HOT_LIST_LENGTH=11,
 NET_CORE_DIVERT_VERSION=12,
 NET_CORE_NO_CONG_THRESH=13,
 NET_CORE_NO_CONG=14,
 NET_CORE_LO_CONG=15,
 NET_CORE_MOD_CONG=16,
 NET_CORE_DEV_WEIGHT=17,
 NET_CORE_SOMAXCONN=18,
 NET_CORE_BUDGET=19,
 NET_CORE_AEVENT_ETIME=20,
 NET_CORE_AEVENT_RSEQTH=21,
 NET_CORE_WARNINGS=22,
};







enum
{
 NET_UNIX_DESTROY_DELAY=1,
 NET_UNIX_DELETE_DELAY=2,
 NET_UNIX_MAX_DGRAM_QLEN=3,
};


enum
{
 NET_NF_CONNTRACK_MAX=1,
 NET_NF_CONNTRACK_TCP_TIMEOUT_SYN_SENT=2,
 NET_NF_CONNTRACK_TCP_TIMEOUT_SYN_RECV=3,
 NET_NF_CONNTRACK_TCP_TIMEOUT_ESTABLISHED=4,
 NET_NF_CONNTRACK_TCP_TIMEOUT_FIN_WAIT=5,
 NET_NF_CONNTRACK_TCP_TIMEOUT_CLOSE_WAIT=6,
 NET_NF_CONNTRACK_TCP_TIMEOUT_LAST_ACK=7,
 NET_NF_CONNTRACK_TCP_TIMEOUT_TIME_WAIT=8,
 NET_NF_CONNTRACK_TCP_TIMEOUT_CLOSE=9,
 NET_NF_CONNTRACK_UDP_TIMEOUT=10,
 NET_NF_CONNTRACK_UDP_TIMEOUT_STREAM=11,
 NET_NF_CONNTRACK_ICMP_TIMEOUT=12,
 NET_NF_CONNTRACK_GENERIC_TIMEOUT=13,
 NET_NF_CONNTRACK_BUCKETS=14,
 NET_NF_CONNTRACK_LOG_INVALID=15,
 NET_NF_CONNTRACK_TCP_TIMEOUT_MAX_RETRANS=16,
 NET_NF_CONNTRACK_TCP_LOOSE=17,
 NET_NF_CONNTRACK_TCP_BE_LIBERAL=18,
 NET_NF_CONNTRACK_TCP_MAX_RETRANS=19,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_CLOSED=20,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_WAIT=21,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_ECHOED=22,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_ESTABLISHED=23,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_SENT=24,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_RECD=25,
 NET_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_ACK_SENT=26,
 NET_NF_CONNTRACK_COUNT=27,
 NET_NF_CONNTRACK_ICMPV6_TIMEOUT=28,
 NET_NF_CONNTRACK_FRAG6_TIMEOUT=29,
 NET_NF_CONNTRACK_FRAG6_LOW_THRESH=30,
 NET_NF_CONNTRACK_FRAG6_HIGH_THRESH=31,
 NET_NF_CONNTRACK_CHECKSUM=32,
};


enum
{

 NET_IPV4_FORWARD=8,
 NET_IPV4_DYNADDR=9,

 NET_IPV4_CONF=16,
 NET_IPV4_NEIGH=17,
 NET_IPV4_ROUTE=18,
 NET_IPV4_FIB_HASH=19,
 NET_IPV4_NETFILTER=20,

 NET_IPV4_TCP_TIMESTAMPS=33,
 NET_IPV4_TCP_WINDOW_SCALING=34,
 NET_IPV4_TCP_SACK=35,
 NET_IPV4_TCP_RETRANS_COLLAPSE=36,
 NET_IPV4_DEFAULT_TTL=37,
 NET_IPV4_AUTOCONFIG=38,
 NET_IPV4_NO_PMTU_DISC=39,
 NET_IPV4_TCP_SYN_RETRIES=40,
 NET_IPV4_IPFRAG_HIGH_THRESH=41,
 NET_IPV4_IPFRAG_LOW_THRESH=42,
 NET_IPV4_IPFRAG_TIME=43,
 NET_IPV4_TCP_MAX_KA_PROBES=44,
 NET_IPV4_TCP_KEEPALIVE_TIME=45,
 NET_IPV4_TCP_KEEPALIVE_PROBES=46,
 NET_IPV4_TCP_RETRIES1=47,
 NET_IPV4_TCP_RETRIES2=48,
 NET_IPV4_TCP_FIN_TIMEOUT=49,
 NET_IPV4_IP_MASQ_DEBUG=50,
 NET_TCP_SYNCOOKIES=51,
 NET_TCP_STDURG=52,
 NET_TCP_RFC1337=53,
 NET_TCP_SYN_TAILDROP=54,
 NET_TCP_MAX_SYN_BACKLOG=55,
 NET_IPV4_LOCAL_PORT_RANGE=56,
 NET_IPV4_ICMP_ECHO_IGNORE_ALL=57,
 NET_IPV4_ICMP_ECHO_IGNORE_BROADCASTS=58,
 NET_IPV4_ICMP_SOURCEQUENCH_RATE=59,
 NET_IPV4_ICMP_DESTUNREACH_RATE=60,
 NET_IPV4_ICMP_TIMEEXCEED_RATE=61,
 NET_IPV4_ICMP_PARAMPROB_RATE=62,
 NET_IPV4_ICMP_ECHOREPLY_RATE=63,
 NET_IPV4_ICMP_IGNORE_BOGUS_ERROR_RESPONSES=64,
 NET_IPV4_IGMP_MAX_MEMBERSHIPS=65,
 NET_TCP_TW_RECYCLE=66,
 NET_IPV4_ALWAYS_DEFRAG=67,
 NET_IPV4_TCP_KEEPALIVE_INTVL=68,
 NET_IPV4_INET_PEER_THRESHOLD=69,
 NET_IPV4_INET_PEER_MINTTL=70,
 NET_IPV4_INET_PEER_MAXTTL=71,
 NET_IPV4_INET_PEER_GC_MINTIME=72,
 NET_IPV4_INET_PEER_GC_MAXTIME=73,
 NET_TCP_ORPHAN_RETRIES=74,
 NET_TCP_ABORT_ON_OVERFLOW=75,
 NET_TCP_SYNACK_RETRIES=76,
 NET_TCP_MAX_ORPHANS=77,
 NET_TCP_MAX_TW_BUCKETS=78,
 NET_TCP_FACK=79,
 NET_TCP_REORDERING=80,
 NET_TCP_ECN=81,
 NET_TCP_DSACK=82,
 NET_TCP_MEM=83,
 NET_TCP_WMEM=84,
 NET_TCP_RMEM=85,
 NET_TCP_APP_WIN=86,
 NET_TCP_ADV_WIN_SCALE=87,
 NET_IPV4_NONLOCAL_BIND=88,
 NET_IPV4_ICMP_RATELIMIT=89,
 NET_IPV4_ICMP_RATEMASK=90,
 NET_TCP_TW_REUSE=91,
 NET_TCP_FRTO=92,
 NET_TCP_LOW_LATENCY=93,
 NET_IPV4_IPFRAG_SECRET_INTERVAL=94,
 NET_IPV4_IGMP_MAX_MSF=96,
 NET_TCP_NO_METRICS_SAVE=97,
 NET_TCP_DEFAULT_WIN_SCALE=105,
 NET_TCP_MODERATE_RCVBUF=106,
 NET_TCP_TSO_WIN_DIVISOR=107,
 NET_TCP_BIC_BETA=108,
 NET_IPV4_ICMP_ERRORS_USE_INBOUND_IFADDR=109,
 NET_TCP_CONG_CONTROL=110,
 NET_TCP_ABC=111,
 NET_IPV4_IPFRAG_MAX_DIST=112,
  NET_TCP_MTU_PROBING=113,
 NET_TCP_BASE_MSS=114,
 NET_IPV4_TCP_WORKAROUND_SIGNED_WINDOWS=115,
 NET_TCP_DMA_COPYBREAK=116,
 NET_TCP_SLOW_START_AFTER_IDLE=117,
 NET_CIPSOV4_CACHE_ENABLE=118,
 NET_CIPSOV4_CACHE_BUCKET_SIZE=119,
 NET_CIPSOV4_RBM_OPTFMT=120,
 NET_CIPSOV4_RBM_STRICTVALID=121,
 NET_TCP_AVAIL_CONG_CONTROL=122,
 NET_TCP_ALLOWED_CONG_CONTROL=123,
 NET_TCP_MAX_SSTHRESH=124,
 NET_TCP_FRTO_RESPONSE=125,
};

enum {
 NET_IPV4_ROUTE_FLUSH=1,
 NET_IPV4_ROUTE_MIN_DELAY=2,
 NET_IPV4_ROUTE_MAX_DELAY=3,
 NET_IPV4_ROUTE_GC_THRESH=4,
 NET_IPV4_ROUTE_MAX_SIZE=5,
 NET_IPV4_ROUTE_GC_MIN_INTERVAL=6,
 NET_IPV4_ROUTE_GC_TIMEOUT=7,
 NET_IPV4_ROUTE_GC_INTERVAL=8,
 NET_IPV4_ROUTE_REDIRECT_LOAD=9,
 NET_IPV4_ROUTE_REDIRECT_NUMBER=10,
 NET_IPV4_ROUTE_REDIRECT_SILENCE=11,
 NET_IPV4_ROUTE_ERROR_COST=12,
 NET_IPV4_ROUTE_ERROR_BURST=13,
 NET_IPV4_ROUTE_GC_ELASTICITY=14,
 NET_IPV4_ROUTE_MTU_EXPIRES=15,
 NET_IPV4_ROUTE_MIN_PMTU=16,
 NET_IPV4_ROUTE_MIN_ADVMSS=17,
 NET_IPV4_ROUTE_SECRET_INTERVAL=18,
 NET_IPV4_ROUTE_GC_MIN_INTERVAL_MS=19,
};

enum
{
 NET_PROTO_CONF_ALL=-2,
 NET_PROTO_CONF_DEFAULT=-3


};

enum
{
 NET_IPV4_CONF_FORWARDING=1,
 NET_IPV4_CONF_MC_FORWARDING=2,
 NET_IPV4_CONF_PROXY_ARP=3,
 NET_IPV4_CONF_ACCEPT_REDIRECTS=4,
 NET_IPV4_CONF_SECURE_REDIRECTS=5,
 NET_IPV4_CONF_SEND_REDIRECTS=6,
 NET_IPV4_CONF_SHARED_MEDIA=7,
 NET_IPV4_CONF_RP_FILTER=8,
 NET_IPV4_CONF_ACCEPT_SOURCE_ROUTE=9,
 NET_IPV4_CONF_BOOTP_RELAY=10,
 NET_IPV4_CONF_LOG_MARTIANS=11,
 NET_IPV4_CONF_TAG=12,
 NET_IPV4_CONF_ARPFILTER=13,
 NET_IPV4_CONF_MEDIUM_ID=14,
 NET_IPV4_CONF_NOXFRM=15,
 NET_IPV4_CONF_NOPOLICY=16,
 NET_IPV4_CONF_FORCE_IGMP_VERSION=17,
 NET_IPV4_CONF_ARP_ANNOUNCE=18,
 NET_IPV4_CONF_ARP_IGNORE=19,
 NET_IPV4_CONF_PROMOTE_SECONDARIES=20,
 NET_IPV4_CONF_ARP_ACCEPT=21,
 NET_IPV4_CONF_ARP_NOTIFY=22,
 NET_IPV4_CONF_ARP_EVICT_NOCARRIER=23,
};


enum
{
 NET_IPV4_NF_CONNTRACK_MAX=1,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_SYN_SENT=2,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_SYN_RECV=3,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_ESTABLISHED=4,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_FIN_WAIT=5,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_CLOSE_WAIT=6,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_LAST_ACK=7,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_TIME_WAIT=8,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_CLOSE=9,
 NET_IPV4_NF_CONNTRACK_UDP_TIMEOUT=10,
 NET_IPV4_NF_CONNTRACK_UDP_TIMEOUT_STREAM=11,
 NET_IPV4_NF_CONNTRACK_ICMP_TIMEOUT=12,
 NET_IPV4_NF_CONNTRACK_GENERIC_TIMEOUT=13,
 NET_IPV4_NF_CONNTRACK_BUCKETS=14,
 NET_IPV4_NF_CONNTRACK_LOG_INVALID=15,
 NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_MAX_RETRANS=16,
 NET_IPV4_NF_CONNTRACK_TCP_LOOSE=17,
 NET_IPV4_NF_CONNTRACK_TCP_BE_LIBERAL=18,
 NET_IPV4_NF_CONNTRACK_TCP_MAX_RETRANS=19,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_CLOSED=20,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_WAIT=21,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_ECHOED=22,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_ESTABLISHED=23,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_SENT=24,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_RECD=25,
  NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_ACK_SENT=26,
 NET_IPV4_NF_CONNTRACK_COUNT=27,
 NET_IPV4_NF_CONNTRACK_CHECKSUM=28,
};


enum {
 NET_IPV6_CONF=16,
 NET_IPV6_NEIGH=17,
 NET_IPV6_ROUTE=18,
 NET_IPV6_ICMP=19,
 NET_IPV6_BINDV6ONLY=20,
 NET_IPV6_IP6FRAG_HIGH_THRESH=21,
 NET_IPV6_IP6FRAG_LOW_THRESH=22,
 NET_IPV6_IP6FRAG_TIME=23,
 NET_IPV6_IP6FRAG_SECRET_INTERVAL=24,
 NET_IPV6_MLD_MAX_MSF=25,
};

enum {
 NET_IPV6_ROUTE_FLUSH=1,
 NET_IPV6_ROUTE_GC_THRESH=2,
 NET_IPV6_ROUTE_MAX_SIZE=3,
 NET_IPV6_ROUTE_GC_MIN_INTERVAL=4,
 NET_IPV6_ROUTE_GC_TIMEOUT=5,
 NET_IPV6_ROUTE_GC_INTERVAL=6,
 NET_IPV6_ROUTE_GC_ELASTICITY=7,
 NET_IPV6_ROUTE_MTU_EXPIRES=8,
 NET_IPV6_ROUTE_MIN_ADVMSS=9,
 NET_IPV6_ROUTE_GC_MIN_INTERVAL_MS=10
};

enum {
 NET_IPV6_FORWARDING=1,
 NET_IPV6_HOP_LIMIT=2,
 NET_IPV6_MTU=3,
 NET_IPV6_ACCEPT_RA=4,
 NET_IPV6_ACCEPT_REDIRECTS=5,
 NET_IPV6_AUTOCONF=6,
 NET_IPV6_DAD_TRANSMITS=7,
 NET_IPV6_RTR_SOLICITS=8,
 NET_IPV6_RTR_SOLICIT_INTERVAL=9,
 NET_IPV6_RTR_SOLICIT_DELAY=10,
 NET_IPV6_USE_TEMPADDR=11,
 NET_IPV6_TEMP_VALID_LFT=12,
 NET_IPV6_TEMP_PREFERED_LFT=13,
 NET_IPV6_REGEN_MAX_RETRY=14,
 NET_IPV6_MAX_DESYNC_FACTOR=15,
 NET_IPV6_MAX_ADDRESSES=16,
 NET_IPV6_FORCE_MLD_VERSION=17,
 NET_IPV6_ACCEPT_RA_DEFRTR=18,
 NET_IPV6_ACCEPT_RA_PINFO=19,
 NET_IPV6_ACCEPT_RA_RTR_PREF=20,
 NET_IPV6_RTR_PROBE_INTERVAL=21,
 NET_IPV6_ACCEPT_RA_RT_INFO_MAX_PLEN=22,
 NET_IPV6_PROXY_NDP=23,
 NET_IPV6_ACCEPT_SOURCE_ROUTE=25,
 NET_IPV6_ACCEPT_RA_FROM_LOCAL=26,
 NET_IPV6_ACCEPT_RA_RT_INFO_MIN_PLEN=27,
 NET_IPV6_RA_DEFRTR_METRIC=28,
 __NET_IPV6_MAX
};


enum {
 NET_IPV6_ICMP_RATELIMIT = 1,
 NET_IPV6_ICMP_ECHO_IGNORE_ALL = 2
};


enum {
 NET_NEIGH_MCAST_SOLICIT = 1,
 NET_NEIGH_UCAST_SOLICIT = 2,
 NET_NEIGH_APP_SOLICIT = 3,
 NET_NEIGH_RETRANS_TIME = 4,
 NET_NEIGH_REACHABLE_TIME = 5,
 NET_NEIGH_DELAY_PROBE_TIME = 6,
 NET_NEIGH_GC_STALE_TIME = 7,
 NET_NEIGH_UNRES_QLEN = 8,
 NET_NEIGH_PROXY_QLEN = 9,
 NET_NEIGH_ANYCAST_DELAY = 10,
 NET_NEIGH_PROXY_DELAY = 11,
 NET_NEIGH_LOCKTIME = 12,
 NET_NEIGH_GC_INTERVAL = 13,
 NET_NEIGH_GC_THRESH1 = 14,
 NET_NEIGH_GC_THRESH2 = 15,
 NET_NEIGH_GC_THRESH3 = 16,
 NET_NEIGH_RETRANS_TIME_MS = 17,
 NET_NEIGH_REACHABLE_TIME_MS = 18,
 NET_NEIGH_INTERVAL_PROBE_TIME_MS = 19,
};


enum {
 NET_DCCP_DEFAULT=1,
};


enum {
 NET_IPX_PPROP_BROADCASTING=1,
 NET_IPX_FORWARDING=2
};


enum {
 NET_LLC2=1,
 NET_LLC_STATION=2,
};


enum {
 NET_LLC2_TIMEOUT=1,
};


enum {
 NET_LLC_STATION_ACK_TIMEOUT=1,
};


enum {
 NET_LLC2_ACK_TIMEOUT=1,
 NET_LLC2_P_TIMEOUT=2,
 NET_LLC2_REJ_TIMEOUT=3,
 NET_LLC2_BUSY_TIMEOUT=4,
};


enum {
 NET_ATALK_AARP_EXPIRY_TIME=1,
 NET_ATALK_AARP_TICK_TIME=2,
 NET_ATALK_AARP_RETRANSMIT_LIMIT=3,
 NET_ATALK_AARP_RESOLVE_TIME=4
};



enum {
 NET_NETROM_DEFAULT_PATH_QUALITY=1,
 NET_NETROM_OBSOLESCENCE_COUNT_INITIALISER=2,
 NET_NETROM_NETWORK_TTL_INITIALISER=3,
 NET_NETROM_TRANSPORT_TIMEOUT=4,
 NET_NETROM_TRANSPORT_MAXIMUM_TRIES=5,
 NET_NETROM_TRANSPORT_ACKNOWLEDGE_DELAY=6,
 NET_NETROM_TRANSPORT_BUSY_DELAY=7,
 NET_NETROM_TRANSPORT_REQUESTED_WINDOW_SIZE=8,
 NET_NETROM_TRANSPORT_NO_ACTIVITY_TIMEOUT=9,
 NET_NETROM_ROUTING_CONTROL=10,
 NET_NETROM_LINK_FAILS_COUNT=11,
 NET_NETROM_RESET=12
};


enum {
 NET_AX25_IP_DEFAULT_MODE=1,
 NET_AX25_DEFAULT_MODE=2,
 NET_AX25_BACKOFF_TYPE=3,
 NET_AX25_CONNECT_MODE=4,
 NET_AX25_STANDARD_WINDOW=5,
 NET_AX25_EXTENDED_WINDOW=6,
 NET_AX25_T1_TIMEOUT=7,
 NET_AX25_T2_TIMEOUT=8,
 NET_AX25_T3_TIMEOUT=9,
 NET_AX25_IDLE_TIMEOUT=10,
 NET_AX25_N2=11,
 NET_AX25_PACLEN=12,
 NET_AX25_PROTOCOL=13,
 NET_AX25_DAMA_SLAVE_TIMEOUT=14
};


enum {
 NET_ROSE_RESTART_REQUEST_TIMEOUT=1,
 NET_ROSE_CALL_REQUEST_TIMEOUT=2,
 NET_ROSE_RESET_REQUEST_TIMEOUT=3,
 NET_ROSE_CLEAR_REQUEST_TIMEOUT=4,
 NET_ROSE_ACK_HOLD_BACK_TIMEOUT=5,
 NET_ROSE_ROUTING_CONTROL=6,
 NET_ROSE_LINK_FAIL_TIMEOUT=7,
 NET_ROSE_MAX_VCS=8,
 NET_ROSE_WINDOW_SIZE=9,
 NET_ROSE_NO_ACTIVITY_TIMEOUT=10
};


enum {
 NET_X25_RESTART_REQUEST_TIMEOUT=1,
 NET_X25_CALL_REQUEST_TIMEOUT=2,
 NET_X25_RESET_REQUEST_TIMEOUT=3,
 NET_X25_CLEAR_REQUEST_TIMEOUT=4,
 NET_X25_ACK_HOLD_BACK_TIMEOUT=5,
 NET_X25_FORWARD=6
};


enum
{
 NET_TR_RIF_TIMEOUT=1
};


enum {
 NET_DECNET_NODE_TYPE = 1,
 NET_DECNET_NODE_ADDRESS = 2,
 NET_DECNET_NODE_NAME = 3,
 NET_DECNET_DEFAULT_DEVICE = 4,
 NET_DECNET_TIME_WAIT = 5,
 NET_DECNET_DN_COUNT = 6,
 NET_DECNET_DI_COUNT = 7,
 NET_DECNET_DR_COUNT = 8,
 NET_DECNET_DST_GC_INTERVAL = 9,
 NET_DECNET_CONF = 10,
 NET_DECNET_NO_FC_MAX_CWND = 11,
 NET_DECNET_MEM = 12,
 NET_DECNET_RMEM = 13,
 NET_DECNET_WMEM = 14,
 NET_DECNET_DEBUG_LEVEL = 255
};


enum {
 NET_DECNET_CONF_LOOPBACK = -2,
 NET_DECNET_CONF_DDCMP = -3,
 NET_DECNET_CONF_PPP = -4,
 NET_DECNET_CONF_X25 = -5,
 NET_DECNET_CONF_GRE = -6,
 NET_DECNET_CONF_ETHER = -7


};


enum {
 NET_DECNET_CONF_DEV_PRIORITY = 1,
 NET_DECNET_CONF_DEV_T1 = 2,
 NET_DECNET_CONF_DEV_T2 = 3,
 NET_DECNET_CONF_DEV_T3 = 4,
 NET_DECNET_CONF_DEV_FORWARDING = 5,
 NET_DECNET_CONF_DEV_BLKSIZE = 6,
 NET_DECNET_CONF_DEV_STATE = 7
};


enum {
 NET_SCTP_RTO_INITIAL = 1,
 NET_SCTP_RTO_MIN = 2,
 NET_SCTP_RTO_MAX = 3,
 NET_SCTP_RTO_ALPHA = 4,
 NET_SCTP_RTO_BETA = 5,
 NET_SCTP_VALID_COOKIE_LIFE = 6,
 NET_SCTP_ASSOCIATION_MAX_RETRANS = 7,
 NET_SCTP_PATH_MAX_RETRANS = 8,
 NET_SCTP_MAX_INIT_RETRANSMITS = 9,
 NET_SCTP_HB_INTERVAL = 10,
 NET_SCTP_PRESERVE_ENABLE = 11,
 NET_SCTP_MAX_BURST = 12,
 NET_SCTP_ADDIP_ENABLE = 13,
 NET_SCTP_PRSCTP_ENABLE = 14,
 NET_SCTP_SNDBUF_POLICY = 15,
 NET_SCTP_SACK_TIMEOUT = 16,
 NET_SCTP_RCVBUF_POLICY = 17,
};


enum {
 NET_BRIDGE_NF_CALL_ARPTABLES = 1,
 NET_BRIDGE_NF_CALL_IPTABLES = 2,
 NET_BRIDGE_NF_CALL_IP6TABLES = 3,
 NET_BRIDGE_NF_FILTER_VLAN_TAGGED = 4,
 NET_BRIDGE_NF_FILTER_PPPOE_TAGGED = 5,
};



enum
{
 FS_NRINODE=1,
 FS_STATINODE=2,
 FS_MAXINODE=3,
 FS_NRDQUOT=4,
 FS_MAXDQUOT=5,
 FS_NRFILE=6,
 FS_MAXFILE=7,
 FS_DENTRY=8,
 FS_NRSUPER=9,
 FS_MAXSUPER=10,
 FS_OVERFLOWUID=11,
 FS_OVERFLOWGID=12,
 FS_LEASES=13,
 FS_DIR_NOTIFY=14,
 FS_LEASE_TIME=15,
 FS_DQSTATS=16,
 FS_XFS=17,
 FS_AIO_NR=18,
 FS_AIO_MAX_NR=19,
 FS_INOTIFY=20,
 FS_OCFS2=988,
};


enum {
 FS_DQ_LOOKUPS = 1,
 FS_DQ_DROPS = 2,
 FS_DQ_READS = 3,
 FS_DQ_WRITES = 4,
 FS_DQ_CACHE_HITS = 5,
 FS_DQ_ALLOCATED = 6,
 FS_DQ_FREE = 7,
 FS_DQ_SYNCS = 8,
 FS_DQ_WARNINGS = 9,
};




enum {
 DEV_CDROM=1,
 DEV_HWMON=2,
 DEV_PARPORT=3,
 DEV_RAID=4,
 DEV_MAC_HID=5,
 DEV_SCSI=6,
 DEV_IPMI=7,
};


enum {
 DEV_CDROM_INFO=1,
 DEV_CDROM_AUTOCLOSE=2,
 DEV_CDROM_AUTOEJECT=3,
 DEV_CDROM_DEBUG=4,
 DEV_CDROM_LOCK=5,
 DEV_CDROM_CHECK_MEDIA=6
};


enum {
 DEV_PARPORT_DEFAULT=-3
};


enum {
 DEV_RAID_SPEED_LIMIT_MIN=1,
 DEV_RAID_SPEED_LIMIT_MAX=2
};


enum {
 DEV_PARPORT_DEFAULT_TIMESLICE=1,
 DEV_PARPORT_DEFAULT_SPINTIME=2
};


enum {
 DEV_PARPORT_SPINTIME=1,
 DEV_PARPORT_BASE_ADDR=2,
 DEV_PARPORT_IRQ=3,
 DEV_PARPORT_DMA=4,
 DEV_PARPORT_MODES=5,
 DEV_PARPORT_DEVICES=6,
 DEV_PARPORT_AUTOPROBE=16
};


enum {
 DEV_PARPORT_DEVICES_ACTIVE=-3,
};


enum {
 DEV_PARPORT_DEVICE_TIMESLICE=1,
};


enum {
 DEV_MAC_HID_KEYBOARD_SENDS_LINUX_KEYCODES=1,
 DEV_MAC_HID_KEYBOARD_LOCK_KEYCODES=2,
 DEV_MAC_HID_MOUSE_BUTTON_EMULATION=3,
 DEV_MAC_HID_MOUSE_BUTTON2_KEYCODE=4,
 DEV_MAC_HID_MOUSE_BUTTON3_KEYCODE=5,
 DEV_MAC_HID_ADB_MOUSE_SENDS_KEYCODES=6
};


enum {
 DEV_SCSI_LOGGING_LEVEL=1,
};


enum {
 DEV_IPMI_POWEROFF_POWERCYCLE=1,
};


enum
{
 ABI_DEFHANDLER_COFF=1,
 ABI_DEFHANDLER_ELF=2,
 ABI_DEFHANDLER_LCALL7=3,
 ABI_DEFHANDLER_LIBCSO=4,
 ABI_TRACE=5,
 ABI_FAKE_UTSNAME=6,
};
# 31 "../include/linux/sysctl.h" 2


struct completion;
struct ctl_table;
struct nsproxy;
struct ctl_table_root;
struct ctl_table_header;
struct ctl_dir;
# 56 "../include/linux/sysctl.h"
extern const int sysctl_vals[];





extern const unsigned long sysctl_long_vals[];

typedef int proc_handler(const struct ctl_table *ctl, int write, void *buffer,
  size_t *lenp, loff_t *ppos);

int proc_dostring(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_dobool(const struct ctl_table *table, int write, void *buffer,
  size_t *lenp, loff_t *ppos);
int proc_dointvec(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_douintvec(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_dointvec_minmax(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_douintvec_minmax(const struct ctl_table *table, int write, void *buffer,
  size_t *lenp, loff_t *ppos);
int proc_dou8vec_minmax(const struct ctl_table *table, int write, void *buffer,
   size_t *lenp, loff_t *ppos);
int proc_dointvec_jiffies(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_dointvec_ms_jiffies_minmax(const struct ctl_table *table, int write,
  void *buffer, size_t *lenp, loff_t *ppos);
int proc_dointvec_userhz_jiffies(const struct ctl_table *, int, void *, size_t *,
  loff_t *);
int proc_dointvec_ms_jiffies(const struct ctl_table *, int, void *, size_t *,
  loff_t *);
int proc_doulongvec_minmax(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_doulongvec_ms_jiffies_minmax(const struct ctl_table *table, int, void *,
  size_t *, loff_t *);
int proc_do_large_bitmap(const struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_do_static_key(const struct ctl_table *table, int write, void *buffer,
  size_t *lenp, loff_t *ppos);
# 117 "../include/linux/sysctl.h"
struct ctl_table_poll {
 atomic_t event;
 wait_queue_head_t wait;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *proc_sys_poll_event(struct ctl_table_poll *poll)
{
 return (void *)(unsigned long)atomic_read(&poll->event);
}
# 135 "../include/linux/sysctl.h"
struct ctl_table {
 const char *procname;
 void *data;
 int maxlen;
 umode_t mode;
 proc_handler *proc_handler;
 struct ctl_table_poll *poll;
 void *extra1;
 void *extra2;
} ;

struct ctl_node {
 struct rb_node node;
 struct ctl_table_header *header;
};
# 162 "../include/linux/sysctl.h"
struct ctl_table_header {
 union {
  struct {
   struct ctl_table *ctl_table;
   int ctl_table_size;
   int used;
   int count;
   int nreg;
  };
  struct callback_head rcu;
 };
 struct completion *unregistering;
 const struct ctl_table *ctl_table_arg;
 struct ctl_table_root *root;
 struct ctl_table_set *set;
 struct ctl_dir *parent;
 struct ctl_node *node;
 struct hlist_head inodes;







 enum {
  SYSCTL_TABLE_TYPE_DEFAULT,
  SYSCTL_TABLE_TYPE_PERMANENTLY_EMPTY,
 } type;
};

struct ctl_dir {

 struct ctl_table_header header;
 struct rb_root root;
};

struct ctl_table_set {
 int (*is_seen)(struct ctl_table_set *);
 struct ctl_dir dir;
};

struct ctl_table_root {
 struct ctl_table_set default_set;
 struct ctl_table_set *(*lookup)(struct ctl_table_root *root);
 void (*set_ownership)(struct ctl_table_header *head,
         kuid_t *uid, kgid_t *gid);
 int (*permissions)(struct ctl_table_header *head, const struct ctl_table *table);
};






void proc_sys_poll_notify(struct ctl_table_poll *poll);

extern void setup_sysctl_set(struct ctl_table_set *p,
 struct ctl_table_root *root,
 int (*is_seen)(struct ctl_table_set *));
extern void retire_sysctl_set(struct ctl_table_set *set);

struct ctl_table_header *__register_sysctl_table(
 struct ctl_table_set *set,
 const char *path, struct ctl_table *table, size_t table_size);
struct ctl_table_header *register_sysctl_sz(const char *path, struct ctl_table *table,
         size_t table_size);
void unregister_sysctl_table(struct ctl_table_header * table);

extern int sysctl_init_bases(void);
extern void __register_sysctl_init(const char *path, struct ctl_table *table,
     const char *table_name, size_t table_size);


extern struct ctl_table_header *register_sysctl_mount_point(const char *path);

void do_sysctl_args(void);
bool sysctl_is_alias(char *param);
int do_proc_douintvec(const struct ctl_table *table, int write,
        void *buffer, size_t *lenp, loff_t *ppos,
        int (*conv)(unsigned long *lvalp,
      unsigned int *valp,
      int write, void *data),
        void *data);

extern int pwrsw_enabled;
extern int unaligned_enabled;
extern int unaligned_dump_stack;
extern int no_unaligned_warning;
# 290 "../include/linux/sysctl.h"
int sysctl_max_threads(const struct ctl_table *table, int write, void *buffer,
  size_t *lenp, loff_t *ppos);
# 10 "../include/linux/umh.h" 2

struct cred;
struct file;







struct subprocess_info {
 struct work_struct work;
 struct completion *complete;
 const char *path;
 char **argv;
 char **envp;
 int wait;
 int retval;
 int (*init)(struct subprocess_info *info, struct cred *new);
 void (*cleanup)(struct subprocess_info *info);
 void *data;
} ;

extern int
call_usermodehelper(const char *path, char **argv, char **envp, int wait);

extern struct subprocess_info *
call_usermodehelper_setup(const char *path, char **argv, char **envp,
     gfp_t gfp_mask,
     int (*init)(struct subprocess_info *info, struct cred *new),
     void (*cleanup)(struct subprocess_info *), void *data);

extern int
call_usermodehelper_exec(struct subprocess_info *info, int wait);

enum umh_disable_depth {
 UMH_ENABLED = 0,
 UMH_FREEZING,
 UMH_DISABLED,
};

extern int __usermodehelper_disable(enum umh_disable_depth depth);
extern void __usermodehelper_set_disable_depth(enum umh_disable_depth depth);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int usermodehelper_disable(void)
{
 return __usermodehelper_disable(UMH_DISABLED);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void usermodehelper_enable(void)
{
 __usermodehelper_set_disable_depth(UMH_ENABLED);
}

extern int usermodehelper_read_trylock(void);
extern long usermodehelper_read_lock_wait(long timeout);
extern void usermodehelper_read_unlock(void);
# 10 "../include/linux/kmod.h" 2
# 30 "../include/linux/kmod.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int request_module(const char *name, ...) { return -38; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int request_module_nowait(const char *name, ...) { return -38; }
# 18 "../include/linux/module.h" 2

# 1 "../include/linux/elf.h" 1





# 1 "../arch/hexagon/include/asm/elf.h" 1
# 11 "../arch/hexagon/include/asm/elf.h"
# 1 "../arch/hexagon/include/asm/ptrace.h" 1
# 11 "../arch/hexagon/include/asm/ptrace.h"
# 1 "../arch/hexagon/include/uapi/asm/ptrace.h" 1
# 12 "../arch/hexagon/include/asm/ptrace.h" 2


extern int regs_query_register_offset(const char *name);
extern const char *regs_query_register_name(unsigned int offset);
# 12 "../arch/hexagon/include/asm/elf.h" 2

# 1 "../include/uapi/linux/elf-em.h" 1
# 14 "../arch/hexagon/include/asm/elf.h" 2

struct elf32_hdr;
# 78 "../arch/hexagon/include/asm/elf.h"
typedef unsigned long elf_greg_t;

typedef struct user_regs_struct elf_gregset_t;



typedef unsigned long elf_fpregset_t;
# 215 "../arch/hexagon/include/asm/elf.h"
struct linux_binprm;
extern int arch_setup_additional_pages(struct linux_binprm *bprm,
           int uses_interp);
# 7 "../include/linux/elf.h" 2
# 1 "../include/uapi/linux/elf.h" 1








typedef __u32 Elf32_Addr;
typedef __u16 Elf32_Half;
typedef __u32 Elf32_Off;
typedef __s32 Elf32_Sword;
typedef __u32 Elf32_Word;


typedef __u64 Elf64_Addr;
typedef __u16 Elf64_Half;
typedef __s16 Elf64_SHalf;
typedef __u64 Elf64_Off;
typedef __s32 Elf64_Sword;
typedef __u32 Elf64_Word;
typedef __u64 Elf64_Xword;
typedef __s64 Elf64_Sxword;
# 143 "../include/uapi/linux/elf.h"
typedef struct {
  Elf32_Sword d_tag;
  union {
    Elf32_Sword d_val;
    Elf32_Addr d_ptr;
  } d_un;
} Elf32_Dyn;

typedef struct {
  Elf64_Sxword d_tag;
  union {
    Elf64_Xword d_val;
    Elf64_Addr d_ptr;
  } d_un;
} Elf64_Dyn;
# 166 "../include/uapi/linux/elf.h"
typedef struct elf32_rel {
  Elf32_Addr r_offset;
  Elf32_Word r_info;
} Elf32_Rel;

typedef struct elf64_rel {
  Elf64_Addr r_offset;
  Elf64_Xword r_info;
} Elf64_Rel;

typedef struct elf32_rela {
  Elf32_Addr r_offset;
  Elf32_Word r_info;
  Elf32_Sword r_addend;
} Elf32_Rela;

typedef struct elf64_rela {
  Elf64_Addr r_offset;
  Elf64_Xword r_info;
  Elf64_Sxword r_addend;
} Elf64_Rela;

typedef struct elf32_sym {
  Elf32_Word st_name;
  Elf32_Addr st_value;
  Elf32_Word st_size;
  unsigned char st_info;
  unsigned char st_other;
  Elf32_Half st_shndx;
} Elf32_Sym;

typedef struct elf64_sym {
  Elf64_Word st_name;
  unsigned char st_info;
  unsigned char st_other;
  Elf64_Half st_shndx;
  Elf64_Addr st_value;
  Elf64_Xword st_size;
} Elf64_Sym;




typedef struct elf32_hdr {
  unsigned char e_ident[16];
  Elf32_Half e_type;
  Elf32_Half e_machine;
  Elf32_Word e_version;
  Elf32_Addr e_entry;
  Elf32_Off e_phoff;
  Elf32_Off e_shoff;
  Elf32_Word e_flags;
  Elf32_Half e_ehsize;
  Elf32_Half e_phentsize;
  Elf32_Half e_phnum;
  Elf32_Half e_shentsize;
  Elf32_Half e_shnum;
  Elf32_Half e_shstrndx;
} Elf32_Ehdr;

typedef struct elf64_hdr {
  unsigned char e_ident[16];
  Elf64_Half e_type;
  Elf64_Half e_machine;
  Elf64_Word e_version;
  Elf64_Addr e_entry;
  Elf64_Off e_phoff;
  Elf64_Off e_shoff;
  Elf64_Word e_flags;
  Elf64_Half e_ehsize;
  Elf64_Half e_phentsize;
  Elf64_Half e_phnum;
  Elf64_Half e_shentsize;
  Elf64_Half e_shnum;
  Elf64_Half e_shstrndx;
} Elf64_Ehdr;







typedef struct elf32_phdr {
  Elf32_Word p_type;
  Elf32_Off p_offset;
  Elf32_Addr p_vaddr;
  Elf32_Addr p_paddr;
  Elf32_Word p_filesz;
  Elf32_Word p_memsz;
  Elf32_Word p_flags;
  Elf32_Word p_align;
} Elf32_Phdr;

typedef struct elf64_phdr {
  Elf64_Word p_type;
  Elf64_Word p_flags;
  Elf64_Off p_offset;
  Elf64_Addr p_vaddr;
  Elf64_Addr p_paddr;
  Elf64_Xword p_filesz;
  Elf64_Xword p_memsz;
  Elf64_Xword p_align;
} Elf64_Phdr;
# 308 "../include/uapi/linux/elf.h"
typedef struct elf32_shdr {
  Elf32_Word sh_name;
  Elf32_Word sh_type;
  Elf32_Word sh_flags;
  Elf32_Addr sh_addr;
  Elf32_Off sh_offset;
  Elf32_Word sh_size;
  Elf32_Word sh_link;
  Elf32_Word sh_info;
  Elf32_Word sh_addralign;
  Elf32_Word sh_entsize;
} Elf32_Shdr;

typedef struct elf64_shdr {
  Elf64_Word sh_name;
  Elf64_Word sh_type;
  Elf64_Xword sh_flags;
  Elf64_Addr sh_addr;
  Elf64_Off sh_offset;
  Elf64_Xword sh_size;
  Elf64_Word sh_link;
  Elf64_Word sh_info;
  Elf64_Xword sh_addralign;
  Elf64_Xword sh_entsize;
} Elf64_Shdr;
# 463 "../include/uapi/linux/elf.h"
typedef struct elf32_note {
  Elf32_Word n_namesz;
  Elf32_Word n_descsz;
  Elf32_Word n_type;
} Elf32_Nhdr;


typedef struct elf64_note {
  Elf64_Word n_namesz;
  Elf64_Word n_descsz;
  Elf64_Word n_type;
} Elf64_Nhdr;
# 8 "../include/linux/elf.h" 2
# 40 "../include/linux/elf.h"
extern Elf32_Dyn _DYNAMIC [];
# 65 "../include/linux/elf.h"
struct file;
struct coredump_params;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int elf_coredump_extra_notes_size(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int elf_coredump_extra_notes_write(struct coredump_params *cprm) { return 0; }
# 81 "../include/linux/elf.h"
struct gnu_property {
 u32 pr_type;
 u32 pr_datasz;
};

struct arch_elf_state;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_parse_elf_property(u32 type, const void *data,
       size_t datasz, bool compat,
       struct arch_elf_state *arch)
{
 return 0;
}
# 104 "../include/linux/elf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_elf_adjust_prot(int prot,
           const struct arch_elf_state *state,
           bool has_interp, bool is_interp)
{
 return prot;
}
# 20 "../include/linux/module.h" 2


# 1 "../include/linux/moduleparam.h" 1
# 36 "../include/linux/moduleparam.h"
struct kernel_param;






enum {
 KERNEL_PARAM_OPS_FL_NOARG = (1 << 0)
};

struct kernel_param_ops {

 unsigned int flags;

 int (*set)(const char *val, const struct kernel_param *kp);

 int (*get)(char *buffer, const struct kernel_param *kp);

 void (*free)(void *arg);
};







enum {
 KERNEL_PARAM_FL_UNSAFE = (1 << 0),
 KERNEL_PARAM_FL_HWPARAM = (1 << 1),
};

struct kernel_param {
 const char *name;
 struct module *mod;
 const struct kernel_param_ops *ops;
 const u16 perm;
 s8 level;
 u8 flags;
 union {
  void *arg;
  const struct kparam_string *str;
  const struct kparam_array *arr;
 };
};

extern const struct kernel_param __start___param[], __stop___param[];


struct kparam_string {
 unsigned int maxlen;
 char *string;
};


struct kparam_array
{
 unsigned int max;
 unsigned int elemsize;
 unsigned int *num;
 const struct kernel_param_ops *ops;
 void *elem;
};
# 308 "../include/linux/moduleparam.h"
extern void kernel_param_lock(struct module *mod);
extern void kernel_param_unlock(struct module *mod);
# 376 "../include/linux/moduleparam.h"
extern bool parameq(const char *name1, const char *name2);
# 386 "../include/linux/moduleparam.h"
extern bool parameqn(const char *name1, const char *name2, size_t n);

typedef int (*parse_unknown_fn)(char *param, char *val, const char *doing, void *arg);


extern char *parse_args(const char *name,
        char *args,
        const struct kernel_param *params,
        unsigned num,
        s16 level_min,
        s16 level_max,
        void *arg, parse_unknown_fn unknown);



extern void destroy_params(const struct kernel_param *params, unsigned num);
# 415 "../include/linux/moduleparam.h"
extern const struct kernel_param_ops param_ops_byte;
extern int param_set_byte(const char *val, const struct kernel_param *kp);
extern int param_get_byte(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_short;
extern int param_set_short(const char *val, const struct kernel_param *kp);
extern int param_get_short(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_ushort;
extern int param_set_ushort(const char *val, const struct kernel_param *kp);
extern int param_get_ushort(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_int;
extern int param_set_int(const char *val, const struct kernel_param *kp);
extern int param_get_int(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_uint;
extern int param_set_uint(const char *val, const struct kernel_param *kp);
extern int param_get_uint(char *buffer, const struct kernel_param *kp);
int param_set_uint_minmax(const char *val, const struct kernel_param *kp,
  unsigned int min, unsigned int max);


extern const struct kernel_param_ops param_ops_long;
extern int param_set_long(const char *val, const struct kernel_param *kp);
extern int param_get_long(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_ulong;
extern int param_set_ulong(const char *val, const struct kernel_param *kp);
extern int param_get_ulong(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_ullong;
extern int param_set_ullong(const char *val, const struct kernel_param *kp);
extern int param_get_ullong(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_hexint;
extern int param_set_hexint(const char *val, const struct kernel_param *kp);
extern int param_get_hexint(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_charp;
extern int param_set_charp(const char *val, const struct kernel_param *kp);
extern int param_get_charp(char *buffer, const struct kernel_param *kp);
extern void param_free_charp(void *arg);



extern const struct kernel_param_ops param_ops_bool;
extern int param_set_bool(const char *val, const struct kernel_param *kp);
extern int param_get_bool(char *buffer, const struct kernel_param *kp);


extern const struct kernel_param_ops param_ops_bool_enable_only;
extern int param_set_bool_enable_only(const char *val,
          const struct kernel_param *kp);



extern const struct kernel_param_ops param_ops_invbool;
extern int param_set_invbool(const char *val, const struct kernel_param *kp);
extern int param_get_invbool(char *buffer, const struct kernel_param *kp);



extern const struct kernel_param_ops param_ops_bint;
extern int param_set_bint(const char *val, const struct kernel_param *kp);
# 530 "../include/linux/moduleparam.h"
enum hwparam_type {
 hwparam_ioport,
 hwparam_iomem,
 hwparam_ioport_or_iomem,
 hwparam_irq,
 hwparam_dma,
 hwparam_dma_addr,
 hwparam_other,
};
# 591 "../include/linux/moduleparam.h"
extern const struct kernel_param_ops param_array_ops;

extern const struct kernel_param_ops param_ops_string;
extern int param_set_copystring(const char *val, const struct kernel_param *);
extern int param_get_string(char *buffer, const struct kernel_param *kp);



struct module;
# 608 "../include/linux/moduleparam.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int module_param_sysfs_setup(struct module *mod,
        const struct kernel_param *kparam,
        unsigned int num_params)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void module_param_sysfs_remove(struct module *mod)
{ }
# 23 "../include/linux/module.h" 2


# 1 "../include/linux/rbtree_latch.h" 1
# 40 "../include/linux/rbtree_latch.h"
struct latch_tree_node {
 struct rb_node node[2];
};

struct latch_tree_root {
 seqcount_latch_t seq;
 struct rb_root tree[2];
};
# 64 "../include/linux/rbtree_latch.h"
struct latch_tree_ops {
 bool (*less)(struct latch_tree_node *a, struct latch_tree_node *b);
 int (*comp)(void *key, struct latch_tree_node *b);
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct latch_tree_node *
__lt_from_rb(struct rb_node *node, int idx)
{
 return ({ void *__mptr = (void *)(node); _Static_assert(__builtin_types_compatible_p(typeof(*(node)), typeof(((struct latch_tree_node *)0)->node[idx])) || __builtin_types_compatible_p(typeof(*(node)), typeof(void)), "pointer type mismatch in container_of()"); ((struct latch_tree_node *)(__mptr - __builtin_offsetof(struct latch_tree_node, node[idx]))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
__lt_insert(struct latch_tree_node *ltn, struct latch_tree_root *ltr, int idx,
     bool (*less)(struct latch_tree_node *a, struct latch_tree_node *b))
{
 struct rb_root *root = &ltr->tree[idx];
 struct rb_node **link = &root->rb_node;
 struct rb_node *node = &ltn->node[idx];
 struct rb_node *parent = ((void *)0);
 struct latch_tree_node *ltp;

 while (*link) {
  parent = *link;
  ltp = __lt_from_rb(parent, idx);

  if (less(ltn, ltp))
   link = &parent->rb_left;
  else
   link = &parent->rb_right;
 }

 rb_link_node_rcu(node, parent, link);
 rb_insert_color(node, root);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
__lt_erase(struct latch_tree_node *ltn, struct latch_tree_root *ltr, int idx)
{
 rb_erase(&ltn->node[idx], &ltr->tree[idx]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct latch_tree_node *
__lt_find(void *key, struct latch_tree_root *ltr, int idx,
   int (*comp)(void *key, struct latch_tree_node *node))
{
 struct rb_node *node = ({ typeof(ltr->tree[idx].rb_node) __UNIQUE_ID_rcu152 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_153(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ltr->tree[idx].rb_node) == sizeof(char) || sizeof(ltr->tree[idx].rb_node) == sizeof(short) || sizeof(ltr->tree[idx].rb_node) == sizeof(int) || sizeof(ltr->tree[idx].rb_node) == sizeof(long)) || sizeof(ltr->tree[idx].rb_node) == sizeof(long long))) __compiletime_assert_153(); } while (0); (*(const volatile typeof( _Generic((ltr->tree[idx].rb_node), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ltr->tree[idx].rb_node))) *)&(ltr->tree[idx].rb_node)); }); ((typeof(*ltr->tree[idx].rb_node) *)(__UNIQUE_ID_rcu152)); });
 struct latch_tree_node *ltn;
 int c;

 while (node) {
  ltn = __lt_from_rb(node, idx);
  c = comp(key, ltn);

  if (c < 0)
   node = ({ typeof(node->rb_left) __UNIQUE_ID_rcu154 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_155(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(node->rb_left) == sizeof(char) || sizeof(node->rb_left) == sizeof(short) || sizeof(node->rb_left) == sizeof(int) || sizeof(node->rb_left) == sizeof(long)) || sizeof(node->rb_left) == sizeof(long long))) __compiletime_assert_155(); } while (0); (*(const volatile typeof( _Generic((node->rb_left), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (node->rb_left))) *)&(node->rb_left)); }); ((typeof(*node->rb_left) *)(__UNIQUE_ID_rcu154)); });
  else if (c > 0)
   node = ({ typeof(node->rb_right) __UNIQUE_ID_rcu156 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_157(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(node->rb_right) == sizeof(char) || sizeof(node->rb_right) == sizeof(short) || sizeof(node->rb_right) == sizeof(int) || sizeof(node->rb_right) == sizeof(long)) || sizeof(node->rb_right) == sizeof(long long))) __compiletime_assert_157(); } while (0); (*(const volatile typeof( _Generic((node->rb_right), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (node->rb_right))) *)&(node->rb_right)); }); ((typeof(*node->rb_right) *)(__UNIQUE_ID_rcu156)); });
  else
   return ltn;
 }

 return ((void *)0);
}
# 143 "../include/linux/rbtree_latch.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
latch_tree_insert(struct latch_tree_node *node,
    struct latch_tree_root *root,
    const struct latch_tree_ops *ops)
{
 raw_write_seqcount_latch(&root->seq);
 __lt_insert(node, root, 0, ops->less);
 raw_write_seqcount_latch(&root->seq);
 __lt_insert(node, root, 1, ops->less);
}
# 170 "../include/linux/rbtree_latch.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
latch_tree_erase(struct latch_tree_node *node,
   struct latch_tree_root *root,
   const struct latch_tree_ops *ops)
{
 raw_write_seqcount_latch(&root->seq);
 __lt_erase(node, root, 0);
 raw_write_seqcount_latch(&root->seq);
 __lt_erase(node, root, 1);
}
# 199 "../include/linux/rbtree_latch.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) struct latch_tree_node *
latch_tree_find(void *key, struct latch_tree_root *root,
  const struct latch_tree_ops *ops)
{
 struct latch_tree_node *node;
 unsigned int seq;

 do {
  seq = raw_read_seqcount_latch(&root->seq);
  node = __lt_find(key, root, seq & 1, ops->comp);
 } while (raw_read_seqcount_latch_retry(&root->seq, seq));

 return node;
}
# 26 "../include/linux/module.h" 2
# 1 "../include/linux/error-injection.h" 1






# 1 "../include/asm-generic/error-injection.h" 1





enum {
 EI_ETYPE_NULL,
 EI_ETYPE_ERRNO,
 EI_ETYPE_ERRNO_NULL,
 EI_ETYPE_TRUE,
};

struct error_injection_entry {
 unsigned long addr;
 int etype;
};

struct pt_regs;
# 39 "../include/asm-generic/error-injection.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void override_function_with_return(struct pt_regs *regs) { }
# 8 "../include/linux/error-injection.h" 2








static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool within_error_injection_list(unsigned long addr)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_injectable_error_type(unsigned long addr)
{
 return -95;
}
# 27 "../include/linux/module.h" 2
# 1 "../include/linux/tracepoint-defs.h" 1
# 12 "../include/linux/tracepoint-defs.h"
# 1 "../include/linux/static_key.h" 1
# 13 "../include/linux/tracepoint-defs.h" 2

struct static_call_key;

struct trace_print_flags {
 unsigned long mask;
 const char *name;
};

struct trace_print_flags_u64 {
 unsigned long long mask;
 const char *name;
};

struct tracepoint_func {
 void *func;
 void *data;
 int prio;
};

struct tracepoint {
 const char *name;
 struct static_key key;
 struct static_call_key *static_call_key;
 void *static_call_tramp;
 void *iterator;
 void *probestub;
 int (*regfunc)(void);
 void (*unregfunc)(void);
 struct tracepoint_func *funcs;
};




typedef struct tracepoint * const tracepoint_ptr_t;


struct bpf_raw_event_map {
 struct tracepoint *tp;
 void *bpf_func;
 u32 num_args;
 u32 writable_size;
} __attribute__((__aligned__(32)));
# 28 "../include/linux/module.h" 2


# 1 "../include/linux/dynamic_debug.h" 1
# 16 "../include/linux/dynamic_debug.h"
struct _ddebug {




 const char *modname;
 const char *function;
 const char *filename;
 const char *format;
 unsigned int lineno:18;

 unsigned int class_id:6;
# 52 "../include/linux/dynamic_debug.h"
 unsigned int flags:8;






} __attribute__((aligned(8)));

enum class_map_type {
 DD_CLASS_TYPE_DISJOINT_BITS,




 DD_CLASS_TYPE_LEVEL_NUM,




 DD_CLASS_TYPE_DISJOINT_NAMES,




 DD_CLASS_TYPE_LEVEL_NAMES,





};

struct ddebug_class_map {
 struct list_head link;
 struct module *mod;
 const char *mod_name;
 const char **class_names;
 const int length;
 const int base;
 enum class_map_type map_type;
};
# 117 "../include/linux/dynamic_debug.h"
struct _ddebug_info {
 struct _ddebug *descs;
 struct ddebug_class_map *classes;
 unsigned int num_descs;
 unsigned int num_classes;
};

struct ddebug_class_param {
 union {
  unsigned long *bits;
  unsigned int *lvl;
 };
 char flags[8];
 const struct ddebug_class_map *map;
};
# 331 "../include/linux/dynamic_debug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ddebug_dyndbg_module_param_cb(char *param, char *val,
      const char *modname)
{
 if (!strcmp(param, "dyndbg")) {

  ({ do {} while (0); _printk("\001" "4" "dyndbg param is supported only in " "CONFIG_DYNAMIC_DEBUG builds\n"); });

  return 0;
 }
 return -22;
}

struct kernel_param;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int param_set_dyndbg_classes(const char *instr, const struct kernel_param *kp)
{ return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int param_get_dyndbg_classes(char *buffer, const struct kernel_param *kp)
{ return 0; }




extern const struct kernel_param_ops param_ops_dyndbg_classes;
# 31 "../include/linux/module.h" 2


# 1 "./arch/hexagon/include/generated/asm/module.h" 1
# 1 "../include/asm-generic/module.h" 1
# 10 "../include/asm-generic/module.h"
struct mod_arch_specific
{
};
# 2 "./arch/hexagon/include/generated/asm/module.h" 2
# 34 "../include/linux/module.h" 2



struct modversion_info {
 unsigned long crc;
 char name[(64 - sizeof(unsigned long))];
};

struct module;
struct exception_table_entry;

struct module_kobject {
 struct kobject kobj;
 struct module *mod;
 struct kobject *drivers_dir;
 struct module_param_attrs *mp;
 struct completion *kobj_completion;
} ;

struct module_attribute {
 struct attribute attr;
 ssize_t (*show)(struct module_attribute *, struct module_kobject *,
   char *);
 ssize_t (*store)(struct module_attribute *, struct module_kobject *,
    const char *, size_t count);
 void (*setup)(struct module *, const char *);
 int (*test)(struct module *);
 void (*free)(struct module *);
};

struct module_version_attribute {
 struct module_attribute mattr;
 const char *module_name;
 const char *version;
};

extern ssize_t __modver_version_show(struct module_attribute *,
         struct module_kobject *, char *);

extern struct module_attribute module_uevent;


extern int init_module(void);
extern void cleanup_module(void);
# 301 "../include/linux/module.h"
struct notifier_block;
# 772 "../include/linux/module.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct module *__module_address(unsigned long addr)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct module *__module_text_address(unsigned long addr)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_module_address(unsigned long addr)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_module_percpu_address(unsigned long addr)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_module_text_address(unsigned long addr)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool within_module_core(unsigned long addr,
          const struct module *mod)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool within_module_init(unsigned long addr,
          const struct module *mod)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool within_module(unsigned long addr, const struct module *mod)
{
 return false;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __module_get(struct module *module)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool try_module_get(struct module *module)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void module_put(struct module *module)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int register_module_notifier(struct notifier_block *nb)
{

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unregister_module_notifier(struct notifier_block *nb)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_modules(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool module_requested_async_probing(struct module *module)
{
 return false;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_module_sig_enforced(void)
{
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void *dereference_module_function_descriptor(struct module *mod, void *ptr)
{
 return ptr;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool module_is_coming(struct module *mod)
{
 return false;
}



extern struct kset *module_kset;
extern const struct kobj_type module_ktype;
# 891 "../include/linux/module.h"
void module_bug_finalize(const Elf32_Ehdr *, const Elf32_Shdr *,
    struct module *);
void module_bug_cleanup(struct module *);
# 908 "../include/linux/module.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool retpoline_module_ok(bool has_retpoline)
{
 return true;
}
# 922 "../include/linux/module.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_module_sig_enforced(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool module_sig_ok(struct module *module)
{
 return true;
}
# 967 "../include/linux/module.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int module_kallsyms_on_each_symbol(const char *modname,
       int (*fn)(void *, const char *, unsigned long),
       void *data)
{
 return -95;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int module_address_lookup(unsigned long addr,
      unsigned long *symbolsize,
      unsigned long *offset,
      char **modname,
      const unsigned char **modbuildid,
      char *namebuf)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int lookup_module_symbol_name(unsigned long addr, char *symname)
{
 return -34;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int module_get_kallsym(unsigned int symnum, unsigned long *value,
         char *type, char *name,
         char *module_name, int *exported)
{
 return -34;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long module_kallsyms_lookup_name(const char *name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long find_kallsyms_symbol_value(struct module *mod,
             const char *name)
{
 return 0;
}
# 22 "../include/linux/device/driver.h" 2
# 45 "../include/linux/device/driver.h"
enum probe_type {
 PROBE_DEFAULT_STRATEGY,
 PROBE_PREFER_ASYNCHRONOUS,
 PROBE_FORCE_SYNCHRONOUS,
};
# 96 "../include/linux/device/driver.h"
struct device_driver {
 const char *name;
 const struct bus_type *bus;

 struct module *owner;
 const char *mod_name;

 bool suppress_bind_attrs;
 enum probe_type probe_type;

 const struct of_device_id *of_match_table;
 const struct acpi_device_id *acpi_match_table;

 int (*probe) (struct device *dev);
 void (*sync_state)(struct device *dev);
 int (*remove) (struct device *dev);
 void (*shutdown) (struct device *dev);
 int (*suspend) (struct device *dev, pm_message_t state);
 int (*resume) (struct device *dev);
 const struct attribute_group **groups;
 const struct attribute_group **dev_groups;

 const struct dev_pm_ops *pm;
 void (*coredump) (struct device *dev);

 struct driver_private *p;
};


int __attribute__((__warn_unused_result__)) driver_register(struct device_driver *drv);
void driver_unregister(struct device_driver *drv);

struct device_driver *driver_find(const char *name, const struct bus_type *bus);
bool __attribute__((__section__(".init.text"))) __attribute__((__cold__)) driver_probe_done(void);
void wait_for_device_probe(void);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) wait_for_init_devices_probe(void);



struct driver_attribute {
 struct attribute attr;
 ssize_t (*show)(struct device_driver *driver, char *buf);
 ssize_t (*store)(struct device_driver *driver, const char *buf,
    size_t count);
};
# 149 "../include/linux/device/driver.h"
int __attribute__((__warn_unused_result__)) driver_create_file(const struct device_driver *driver,
        const struct driver_attribute *attr);
void driver_remove_file(const struct device_driver *driver,
   const struct driver_attribute *attr);

int driver_set_override(struct device *dev, const char **override,
   const char *s, size_t len);
int __attribute__((__warn_unused_result__)) driver_for_each_device(struct device_driver *drv, struct device *start,
     void *data, int (*fn)(struct device *dev, void *));
struct device *driver_find_device(const struct device_driver *drv,
      struct device *start, const void *data,
      int (*match)(struct device *dev, const void *data));







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *driver_find_device_by_name(const struct device_driver *drv,
       const char *name)
{
 return driver_find_device(drv, ((void *)0), name, device_match_name);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
driver_find_device_by_of_node(const struct device_driver *drv,
         const struct device_node *np)
{
 return driver_find_device(drv, ((void *)0), np, device_match_of_node);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
driver_find_device_by_fwnode(struct device_driver *drv,
        const struct fwnode_handle *fwnode)
{
 return driver_find_device(drv, ((void *)0), fwnode, device_match_fwnode);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *driver_find_device_by_devt(const struct device_driver *drv,
       dev_t devt)
{
 return driver_find_device(drv, ((void *)0), &devt, device_match_devt);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *driver_find_next_device(const struct device_driver *drv,
           struct device *start)
{
 return driver_find_device(drv, start, ((void *)0), device_match_any);
}
# 232 "../include/linux/device/driver.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device *
driver_find_device_by_acpi_dev(const struct device_driver *drv, const void *adev)
{
 return ((void *)0);
}


void driver_deferred_probe_add(struct device *dev);
int driver_deferred_probe_check_state(struct device *dev);
void driver_init(void);
# 33 "../include/linux/device.h" 2

# 1 "./arch/hexagon/include/generated/asm/device.h" 1
# 1 "../include/asm-generic/device.h" 1







struct dev_archdata {
};

struct pdev_archdata {
};
# 2 "./arch/hexagon/include/generated/asm/device.h" 2
# 35 "../include/linux/device.h" 2

struct device;
struct device_private;
struct device_driver;
struct driver_private;
struct module;
struct class;
struct subsys_private;
struct device_node;
struct fwnode_handle;
struct iommu_group;
struct dev_pin_info;
struct dev_iommu;
struct msi_device_data;
# 63 "../include/linux/device.h"
struct subsys_interface {
 const char *name;
 const struct bus_type *subsys;
 struct list_head node;
 int (*add_dev)(struct device *dev, struct subsys_interface *sif);
 void (*remove_dev)(struct device *dev, struct subsys_interface *sif);
};

int subsys_interface_register(struct subsys_interface *sif);
void subsys_interface_unregister(struct subsys_interface *sif);

int subsys_system_register(const struct bus_type *subsys,
      const struct attribute_group **groups);
int subsys_virtual_register(const struct bus_type *subsys,
       const struct attribute_group **groups);
# 88 "../include/linux/device.h"
struct device_type {
 const char *name;
 const struct attribute_group **groups;
 int (*uevent)(const struct device *dev, struct kobj_uevent_env *env);
 char *(*devnode)(const struct device *dev, umode_t *mode,
    kuid_t *uid, kgid_t *gid);
 void (*release)(struct device *dev);

 const struct dev_pm_ops *pm;
};







struct device_attribute {
 struct attribute attr;
 ssize_t (*show)(struct device *dev, struct device_attribute *attr,
   char *buf);
 ssize_t (*store)(struct device *dev, struct device_attribute *attr,
    const char *buf, size_t count);
};






struct dev_ext_attribute {
 struct device_attribute attr;
 void *var;
};

ssize_t device_show_ulong(struct device *dev, struct device_attribute *attr,
     char *buf);
ssize_t device_store_ulong(struct device *dev, struct device_attribute *attr,
      const char *buf, size_t count);
ssize_t device_show_int(struct device *dev, struct device_attribute *attr,
   char *buf);
ssize_t device_store_int(struct device *dev, struct device_attribute *attr,
    const char *buf, size_t count);
ssize_t device_show_bool(struct device *dev, struct device_attribute *attr,
   char *buf);
ssize_t device_store_bool(struct device *dev, struct device_attribute *attr,
    const char *buf, size_t count);
ssize_t device_show_string(struct device *dev, struct device_attribute *attr,
      char *buf);
# 273 "../include/linux/device.h"
int device_create_file(struct device *device,
         const struct device_attribute *entry);
void device_remove_file(struct device *dev,
   const struct device_attribute *attr);
bool device_remove_file_self(struct device *dev,
        const struct device_attribute *attr);
int __attribute__((__warn_unused_result__)) device_create_bin_file(struct device *dev,
     const struct bin_attribute *attr);
void device_remove_bin_file(struct device *dev,
       const struct bin_attribute *attr);


typedef void (*dr_release_t)(struct device *dev, void *res);
typedef int (*dr_match_t)(struct device *dev, void *res, void *match_data);

void *__devres_alloc_node(dr_release_t release, size_t size, gfp_t gfp,
     int nid, const char *name) __attribute__((__malloc__));





void devres_for_each_res(struct device *dev, dr_release_t release,
    dr_match_t match, void *match_data,
    void (*fn)(struct device *, void *, void *),
    void *data);
void devres_free(void *res);
void devres_add(struct device *dev, void *res);
void *devres_find(struct device *dev, dr_release_t release,
    dr_match_t match, void *match_data);
void *devres_get(struct device *dev, void *new_res,
   dr_match_t match, void *match_data);
void *devres_remove(struct device *dev, dr_release_t release,
      dr_match_t match, void *match_data);
int devres_destroy(struct device *dev, dr_release_t release,
     dr_match_t match, void *match_data);
int devres_release(struct device *dev, dr_release_t release,
     dr_match_t match, void *match_data);


void * __attribute__((__warn_unused_result__)) devres_open_group(struct device *dev, void *id, gfp_t gfp);
void devres_close_group(struct device *dev, void *id);
void devres_remove_group(struct device *dev, void *id);
int devres_release_group(struct device *dev, void *id);


void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __attribute__((__alloc_size__(2))) __attribute__((__malloc__));
void *devm_krealloc(struct device *dev, void *ptr, size_t size,
      gfp_t gfp) __attribute__((__warn_unused_result__)) __attribute__((__alloc_size__(3)));
__attribute__((__format__(printf, 3, 0))) char *devm_kvasprintf(struct device *dev, gfp_t gfp,
         const char *fmt, va_list ap) __attribute__((__malloc__));
__attribute__((__format__(printf, 3, 4))) char *devm_kasprintf(struct device *dev, gfp_t gfp,
        const char *fmt, ...) __attribute__((__malloc__));
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *devm_kzalloc(struct device *dev, size_t size, gfp_t gfp)
{
 return devm_kmalloc(dev, size, gfp | (( gfp_t)((((1UL))) << (___GFP_ZERO_BIT))));
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *devm_kmalloc_array(struct device *dev,
           size_t n, size_t size, gfp_t flags)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(n, size, &bytes))), 0))
  return ((void *)0);

 return devm_kmalloc(dev, bytes, flags);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *devm_kcalloc(struct device *dev,
     size_t n, size_t size, gfp_t flags)
{
 return devm_kmalloc_array(dev, n, size, flags | (( gfp_t)((((1UL))) << (___GFP_ZERO_BIT))));
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(3, 4))) void * __attribute__((__warn_unused_result__))
devm_krealloc_array(struct device *dev, void *p, size_t new_n, size_t new_size, gfp_t flags)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(new_n, new_size, &bytes))), 0))
  return ((void *)0);

 return devm_krealloc(dev, p, bytes, flags);
}

void devm_kfree(struct device *dev, const void *p);
char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp) __attribute__((__malloc__));
const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp);
void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp)
 __attribute__((__alloc_size__(3)));

unsigned long devm_get_free_pages(struct device *dev,
      gfp_t gfp_mask, unsigned int order);
void devm_free_pages(struct device *dev, unsigned long addr);


void *devm_ioremap_resource(struct device *dev,
        const struct resource *res);
void *devm_ioremap_resource_wc(struct device *dev,
           const struct resource *res);

void *devm_of_iomap(struct device *dev,
       struct device_node *node, int index,
       resource_size_t *size);
# 402 "../include/linux/device.h"
void devm_remove_action(struct device *dev, void (*action)(void *), void *data);
void devm_release_action(struct device *dev, void (*action)(void *), void *data);

int __devm_add_action(struct device *dev, void (*action)(void *), void *data, const char *name);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __devm_add_action_or_reset(struct device *dev, void (*action)(void *),
          void *data, const char *name)
{
 int ret;

 ret = __devm_add_action(dev, action, data, name);
 if (ret)
  action(data);

 return ret;
}
# 438 "../include/linux/device.h"
void *__devm_alloc_percpu(struct device *dev, size_t size,
       size_t align);
void devm_free_percpu(struct device *dev, void *pdata);

struct device_dma_parameters {




 unsigned int max_segment_size;
 unsigned int min_align_mask;
 unsigned long segment_boundary_mask;
};
# 461 "../include/linux/device.h"
enum device_link_state {
 DL_STATE_NONE = -1,
 DL_STATE_DORMANT = 0,
 DL_STATE_AVAILABLE,
 DL_STATE_CONSUMER_PROBE,
 DL_STATE_ACTIVE,
 DL_STATE_SUPPLIER_UNBIND,
};
# 501 "../include/linux/device.h"
enum dl_dev_state {
 DL_DEV_NO_DRIVER = 0,
 DL_DEV_PROBING,
 DL_DEV_DRIVER_BOUND,
 DL_DEV_UNBINDING,
};
# 517 "../include/linux/device.h"
enum device_removable {
 DEVICE_REMOVABLE_NOT_SUPPORTED = 0,
 DEVICE_REMOVABLE_UNKNOWN,
 DEVICE_FIXED,
 DEVICE_REMOVABLE,
};
# 531 "../include/linux/device.h"
struct dev_links_info {
 struct list_head suppliers;
 struct list_head consumers;
 struct list_head defer_sync;
 enum dl_dev_state status;
};






struct dev_msi_info {




};
# 561 "../include/linux/device.h"
enum device_physical_location_panel {
 DEVICE_PANEL_TOP,
 DEVICE_PANEL_BOTTOM,
 DEVICE_PANEL_LEFT,
 DEVICE_PANEL_RIGHT,
 DEVICE_PANEL_FRONT,
 DEVICE_PANEL_BACK,
 DEVICE_PANEL_UNKNOWN,
};
# 578 "../include/linux/device.h"
enum device_physical_location_vertical_position {
 DEVICE_VERT_POS_UPPER,
 DEVICE_VERT_POS_CENTER,
 DEVICE_VERT_POS_LOWER,
};
# 591 "../include/linux/device.h"
enum device_physical_location_horizontal_position {
 DEVICE_HORI_POS_LEFT,
 DEVICE_HORI_POS_CENTER,
 DEVICE_HORI_POS_RIGHT,
};
# 611 "../include/linux/device.h"
struct device_physical_location {
 enum device_physical_location_panel panel;
 enum device_physical_location_vertical_position vertical_position;
 enum device_physical_location_horizontal_position horizontal_position;
 bool dock;
 bool lid;
};
# 719 "../include/linux/device.h"
struct device {
 struct kobject kobj;
 struct device *parent;

 struct device_private *p;

 const char *init_name;
 const struct device_type *type;

 const struct bus_type *bus;
 struct device_driver *driver;

 void *platform_data;

 void *driver_data;

 struct mutex mutex;



 struct dev_links_info links;
 struct dev_pm_info power;
 struct dev_pm_domain *pm_domain;
# 750 "../include/linux/device.h"
 struct dev_msi_info msi;



 u64 *dma_mask;
 u64 coherent_dma_mask;




 u64 bus_dma_limit;
 const struct bus_dma_region *dma_range_map;

 struct device_dma_parameters *dma_parms;

 struct list_head dma_pools;


 struct dma_coherent_mem *dma_mem;
# 784 "../include/linux/device.h"
 struct dev_archdata archdata;

 struct device_node *of_node;
 struct fwnode_handle *fwnode;




 dev_t devt;
 u32 id;

 spinlock_t devres_lock;
 struct list_head devres_head;

 const struct class *class;
 const struct attribute_group **groups;

 void (*release)(struct device *dev);
 struct iommu_group *iommu_group;
 struct dev_iommu *iommu;

 struct device_physical_location *physical_location;

 enum device_removable removable;

 bool offline_disabled:1;
 bool offline:1;
 bool of_node_reused:1;
 bool state_synced:1;
 bool can_match:1;



 bool dma_coherent:1;





 bool dma_skip_sync:1;

};
# 841 "../include/linux/device.h"
struct device_link {
 struct device *supplier;
 struct list_head s_node;
 struct device *consumer;
 struct list_head c_node;
 struct device link_dev;
 enum device_link_state status;
 u32 flags;
 refcount_t rpm_active;
 struct kref kref;
 struct work_struct rm_work;
 bool supplier_preactivated;
};
# 862 "../include/linux/device.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_iommu_mapped(struct device *dev)
{
 return (dev->iommu_group != ((void *)0));
}


# 1 "../include/linux/pm_wakeup.h" 1
# 18 "../include/linux/pm_wakeup.h"
struct wake_irq;
# 43 "../include/linux/pm_wakeup.h"
struct wakeup_source {
 const char *name;
 int id;
 struct list_head entry;
 spinlock_t lock;
 struct wake_irq *wakeirq;
 struct timer_list timer;
 unsigned long timer_expires;
 ktime_t total_time;
 ktime_t max_time;
 ktime_t last_time;
 ktime_t start_prevent_time;
 ktime_t prevent_sleep_time;
 unsigned long event_count;
 unsigned long active_count;
 unsigned long relax_count;
 unsigned long expire_count;
 unsigned long wakeup_count;
 struct device *dev;
 bool active:1;
 bool autosleep_enabled:1;
};
# 122 "../include/linux/pm_wakeup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_set_wakeup_capable(struct device *dev, bool capable)
{
 dev->power.can_wakeup = capable;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_can_wakeup(struct device *dev)
{
 return dev->power.can_wakeup;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct wakeup_source *wakeup_source_create(const char *name)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wakeup_source_destroy(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wakeup_source_add(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wakeup_source_remove(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct wakeup_source *wakeup_source_register(struct device *dev,
          const char *name)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wakeup_source_unregister(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_wakeup_enable(struct device *dev)
{
 dev->power.should_wakeup = true;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_wakeup_disable(struct device *dev)
{
 dev->power.should_wakeup = false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_set_wakeup_enable(struct device *dev, bool enable)
{
 dev->power.should_wakeup = enable;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_may_wakeup(struct device *dev)
{
 return dev->power.can_wakeup && dev->power.should_wakeup;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_wakeup_path(struct device *dev)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_set_wakeup_path(struct device *dev) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __pm_stay_awake(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_stay_awake(struct device *dev) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __pm_relax(struct wakeup_source *ws) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_relax(struct device *dev) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_wakeup_ws_event(struct wakeup_source *ws,
          unsigned int msec, bool hard) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_wakeup_dev_event(struct device *dev, unsigned int msec,
           bool hard) {}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_awake_path(struct device *dev)
{
 return device_wakeup_path(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_set_awake_path(struct device *dev)
{
 device_set_wakeup_path(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec)
{
 return pm_wakeup_ws_event(ws, msec, false);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_wakeup_event(struct device *dev, unsigned int msec)
{
 return pm_wakeup_dev_event(dev, msec, false);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pm_wakeup_hard_event(struct device *dev)
{
 return pm_wakeup_dev_event(dev, 0, true);
}
# 232 "../include/linux/pm_wakeup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_init_wakeup(struct device *dev, bool enable)
{
 if (enable) {
  device_set_wakeup_capable(dev, true);
  return device_wakeup_enable(dev);
 }
 device_wakeup_disable(dev);
 device_set_wakeup_capable(dev, false);
 return 0;
}
# 869 "../include/linux/device.h" 2






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *dev_name(const struct device *dev)
{

 if (dev->init_name)
  return dev->init_name;

 return kobject_name(&dev->kobj);
}
# 891 "../include/linux/device.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *dev_bus_name(const struct device *dev)
{
 return dev->bus ? dev->bus->name : (dev->class ? dev->class->name : "");
}

__attribute__((__format__(printf, 2, 3))) int dev_set_name(struct device *dev, const char *name, ...);
# 908 "../include/linux/device.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_to_node(struct device *dev)
{
 return (-1);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_dev_node(struct device *dev, int node)
{
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_domain *dev_get_msi_domain(const struct device *dev)
{



 return ((void *)0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_set_msi_domain(struct device *dev, struct irq_domain *d)
{



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dev_get_drvdata(const struct device *dev)
{
 return dev->driver_data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_set_drvdata(struct device *dev, void *data)
{
 dev->driver_data = data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pm_subsys_data *dev_to_psd(struct device *dev)
{
 return dev ? dev->power.subsys_data : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int dev_get_uevent_suppress(const struct device *dev)
{
 return dev->kobj.uevent_suppress;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_set_uevent_suppress(struct device *dev, int val)
{
 dev->kobj.uevent_suppress = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_is_registered(struct device *dev)
{
 return dev->kobj.state_in_sysfs;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_enable_async_suspend(struct device *dev)
{
 if (!dev->power.is_prepared)
  dev->power.async_suspend = true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_disable_async_suspend(struct device *dev)
{
 if (!dev->power.is_prepared)
  dev->power.async_suspend = false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_async_suspend_enabled(struct device *dev)
{
 return !!dev->power.async_suspend;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_pm_not_required(struct device *dev)
{
 return dev->power.no_pm;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_set_pm_not_required(struct device *dev)
{
 dev->power.no_pm = true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_pm_syscore_device(struct device *dev, bool val)
{



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_pm_set_driver_flags(struct device *dev, u32 flags)
{
 dev->power.driver_flags = flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_pm_test_driver_flags(struct device *dev, u32 flags)
{
 return !!(dev->power.driver_flags & flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_lock(struct device *dev)
{
 mutex_lock_nested(&dev->mutex, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_lock_interruptible(struct device *dev)
{
 return mutex_lock_interruptible_nested(&dev->mutex, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int device_trylock(struct device *dev)
{
 return mutex_trylock(&dev->mutex);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_unlock(struct device *dev)
{
 mutex_unlock(&dev->mutex);
}

typedef struct device * class_device_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_device_destructor(struct device * *p) { struct device * _T = *p; if (_T) { device_unlock(_T); }; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device * class_device_constructor(struct device * _T) { struct device * t = ({ device_lock(_T); _T; }); return t; }; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * class_device_lock_ptr(class_device_t *_T) { return *_T; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_lock_assert(struct device *dev)
{
 do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(&dev->mutex)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/device.h", 1031, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_has_sync_state(struct device *dev)
{
 if (!dev)
  return false;
 if (dev->driver && dev->driver->sync_state)
  return true;
 if (dev->bus && dev->bus->sync_state)
  return true;
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_set_removable(struct device *dev,
         enum device_removable removable)
{
 dev->removable = removable;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_is_removable(struct device *dev)
{
 return dev->removable == DEVICE_REMOVABLE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_removable_is_valid(struct device *dev)
{
 return dev->removable != DEVICE_REMOVABLE_NOT_SUPPORTED;
}




int __attribute__((__warn_unused_result__)) device_register(struct device *dev);
void device_unregister(struct device *dev);
void device_initialize(struct device *dev);
int __attribute__((__warn_unused_result__)) device_add(struct device *dev);
void device_del(struct device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_device_del(void *p) { struct device * _T = *(struct device * *)p; if (_T) device_del(_T); }

int device_for_each_child(struct device *dev, void *data,
     int (*fn)(struct device *dev, void *data));
int device_for_each_child_reverse(struct device *dev, void *data,
      int (*fn)(struct device *dev, void *data));
struct device *device_find_child(struct device *dev, void *data,
     int (*match)(struct device *dev, void *data));
struct device *device_find_child_by_name(struct device *parent,
      const char *name);
struct device *device_find_any_child(struct device *parent);

int device_rename(struct device *dev, const char *new_name);
int device_move(struct device *dev, struct device *new_parent,
  enum dpm_order dpm_order);
int device_change_owner(struct device *dev, kuid_t kuid, kgid_t kgid);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool device_supports_offline(struct device *dev)
{
 return dev->bus && dev->bus->offline && dev->bus->online;
}
# 1135 "../include/linux/device.h"
void lock_device_hotplug(void);
void unlock_device_hotplug(void);
int lock_device_hotplug_sysfs(void);
int device_offline(struct device *dev);
int device_online(struct device *dev);

void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode);
void set_secondary_fwnode(struct device *dev, struct fwnode_handle *fwnode);
void device_set_node(struct device *dev, struct fwnode_handle *fwnode);
void device_set_of_node_from_dev(struct device *dev, const struct device *dev2);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct device_node *dev_of_node(struct device *dev)
{
 if (!1 || !dev)
  return ((void *)0);
 return dev->of_node;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_num_vf(struct device *dev)
{
 if (dev->bus && dev->bus->num_vf)
  return dev->bus->num_vf(dev);
 return 0;
}




struct device *__root_device_register(const char *name, struct module *owner);





void root_device_unregister(struct device *root);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dev_get_platdata(const struct device *dev)
{
 return dev->platform_data;
}





int __attribute__((__warn_unused_result__)) device_driver_attach(const struct device_driver *drv,
          struct device *dev);
int __attribute__((__warn_unused_result__)) device_bind_driver(struct device *dev);
void device_release_driver(struct device *dev);
int __attribute__((__warn_unused_result__)) device_attach(struct device *dev);
int __attribute__((__warn_unused_result__)) driver_attach(const struct device_driver *drv);
void device_initial_probe(struct device *dev);
int __attribute__((__warn_unused_result__)) device_reprobe(struct device *dev);

bool device_is_bound(struct device *dev);




__attribute__((__format__(printf, 5, 6))) struct device *
device_create(const struct class *cls, struct device *parent, dev_t devt,
       void *drvdata, const char *fmt, ...);
__attribute__((__format__(printf, 6, 7))) struct device *
device_create_with_groups(const struct class *cls, struct device *parent, dev_t devt,
     void *drvdata, const struct attribute_group **groups,
     const char *fmt, ...);
void device_destroy(const struct class *cls, dev_t devt);

int __attribute__((__warn_unused_result__)) device_add_groups(struct device *dev,
       const struct attribute_group **groups);
void device_remove_groups(struct device *dev,
     const struct attribute_group **groups);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) device_add_group(struct device *dev,
     const struct attribute_group *grp)
{
 const struct attribute_group *groups[] = { grp, ((void *)0) };

 return device_add_groups(dev, groups);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void device_remove_group(struct device *dev,
           const struct attribute_group *grp)
{
 const struct attribute_group *groups[] = { grp, ((void *)0) };

 return device_remove_groups(dev, groups);
}

int __attribute__((__warn_unused_result__)) devm_device_add_group(struct device *dev,
           const struct attribute_group *grp);





struct device *get_device(struct device *dev);
void put_device(struct device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_put_device(void *p) { struct device * _T = *(struct device * *)p; if (_T) put_device(_T); }

bool kill_device(struct device *dev);


int devtmpfs_mount(void);





void device_shutdown(void);


const char *dev_driver_string(const struct device *dev);


struct device_link *device_link_add(struct device *consumer,
        struct device *supplier, u32 flags);
void device_link_del(struct device_link *link);
void device_link_remove(void *consumer, struct device *supplier);
void device_links_supplier_sync_state_pause(void);
void device_links_supplier_sync_state_resume(void);
void device_link_wait_removal(void);
# 9 "../include/linux/cdev.h" 2

struct file_operations;
struct inode;
struct module;

struct cdev {
 struct kobject kobj;
 struct module *owner;
 const struct file_operations *ops;
 struct list_head list;
 dev_t dev;
 unsigned int count;
} ;

void cdev_init(struct cdev *, const struct file_operations *);

struct cdev *cdev_alloc(void);

void cdev_put(struct cdev *p);

int cdev_add(struct cdev *, dev_t, unsigned);

void cdev_set_parent(struct cdev *p, struct kobject *kobj);
int cdev_device_add(struct cdev *cdev, struct device *dev);
void cdev_device_del(struct cdev *cdev, struct device *dev);

void cdev_del(struct cdev *);

void cd_forget(struct inode *);
# 45 "../drivers/infiniband/core/uverbs.h" 2

# 1 "../include/rdma/ib_verbs.h" 1
# 15 "../include/rdma/ib_verbs.h"
# 1 "../include/linux/ethtool.h" 1
# 17 "../include/linux/ethtool.h"
# 1 "../include/linux/compat.h" 1
# 14 "../include/linux/compat.h"
# 1 "../include/linux/sem.h" 1




# 1 "../include/uapi/linux/sem.h" 1




# 1 "../include/linux/ipc.h" 1






# 1 "../include/linux/rhashtable-types.h" 1
# 18 "../include/linux/rhashtable-types.h"
struct rhash_head {
 struct rhash_head *next;
};

struct rhlist_head {
 struct rhash_head rhead;
 struct rhlist_head *next;
};

struct bucket_table;






struct rhashtable_compare_arg {
 struct rhashtable *ht;
 const void *key;
};

typedef u32 (*rht_hashfn_t)(const void *data, u32 len, u32 seed);
typedef u32 (*rht_obj_hashfn_t)(const void *data, u32 len, u32 seed);
typedef int (*rht_obj_cmpfn_t)(struct rhashtable_compare_arg *arg,
          const void *obj);
# 57 "../include/linux/rhashtable-types.h"
struct rhashtable_params {
 u16 nelem_hint;
 u16 key_len;
 u16 key_offset;
 u16 head_offset;
 unsigned int max_size;
 u16 min_size;
 bool automatic_shrinking;
 rht_hashfn_t hashfn;
 rht_obj_hashfn_t obj_hashfn;
 rht_obj_cmpfn_t obj_cmpfn;
};
# 82 "../include/linux/rhashtable-types.h"
struct rhashtable {
 struct bucket_table *tbl;
 unsigned int key_len;
 unsigned int max_elems;
 struct rhashtable_params p;
 bool rhlist;
 struct work_struct run_work;
 struct mutex mutex;
 spinlock_t lock;
 atomic_t nelems;



};





struct rhltable {
 struct rhashtable ht;
};






struct rhashtable_walker {
 struct list_head list;
 struct bucket_table *tbl;
};
# 124 "../include/linux/rhashtable-types.h"
struct rhashtable_iter {
 struct rhashtable *ht;
 struct rhash_head *p;
 struct rhlist_head *list;
 struct rhashtable_walker walker;
 unsigned int slot;
 unsigned int skip;
 bool end_of_table;
};

int rhashtable_init_noprof(struct rhashtable *ht,
      const struct rhashtable_params *params);


int rhltable_init_noprof(struct rhltable *hlt,
    const struct rhashtable_params *params);
# 8 "../include/linux/ipc.h" 2
# 1 "../include/uapi/linux/ipc.h" 1
# 10 "../include/uapi/linux/ipc.h"
struct ipc_perm
{
 __kernel_key_t key;
 __kernel_uid_t uid;
 __kernel_gid_t gid;
 __kernel_uid_t cuid;
 __kernel_gid_t cgid;
 __kernel_mode_t mode;
 unsigned short seq;
};


# 1 "./arch/hexagon/include/generated/uapi/asm/ipcbuf.h" 1
# 1 "../include/uapi/asm-generic/ipcbuf.h" 1
# 22 "../include/uapi/asm-generic/ipcbuf.h"
struct ipc64_perm {
 __kernel_key_t key;
 __kernel_uid32_t uid;
 __kernel_gid32_t gid;
 __kernel_uid32_t cuid;
 __kernel_gid32_t cgid;
 __kernel_mode_t mode;

 unsigned char __pad1[4 - sizeof(__kernel_mode_t)];
 unsigned short seq;
 unsigned short __pad2;
 __kernel_ulong_t __unused1;
 __kernel_ulong_t __unused2;
};
# 2 "./arch/hexagon/include/generated/uapi/asm/ipcbuf.h" 2
# 23 "../include/uapi/linux/ipc.h" 2
# 58 "../include/uapi/linux/ipc.h"
struct ipc_kludge {
 struct msgbuf *msgp;
 long msgtyp;
};
# 9 "../include/linux/ipc.h" 2



struct kern_ipc_perm {
 spinlock_t lock;
 bool deleted;
 int id;
 key_t key;
 kuid_t uid;
 kgid_t gid;
 kuid_t cuid;
 kgid_t cgid;
 umode_t mode;
 unsigned long seq;
 void *security;

 struct rhash_head khtnode;

 struct callback_head rcu;
 refcount_t refcount;
} ;
# 6 "../include/uapi/linux/sem.h" 2
# 25 "../include/uapi/linux/sem.h"
struct semid_ds {
 struct ipc_perm sem_perm;
 __kernel_old_time_t sem_otime;
 __kernel_old_time_t sem_ctime;
 struct sem *sem_base;
 struct sem_queue *sem_pending;
 struct sem_queue **sem_pending_last;
 struct sem_undo *undo;
 unsigned short sem_nsems;
};


# 1 "./arch/hexagon/include/generated/uapi/asm/sembuf.h" 1
# 1 "../include/uapi/asm-generic/sembuf.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 6 "../include/uapi/asm-generic/sembuf.h" 2
# 1 "./arch/hexagon/include/generated/uapi/asm/ipcbuf.h" 1
# 7 "../include/uapi/asm-generic/sembuf.h" 2
# 29 "../include/uapi/asm-generic/sembuf.h"
struct semid64_ds {
 struct ipc64_perm sem_perm;




 unsigned long sem_otime;
 unsigned long sem_otime_high;
 unsigned long sem_ctime;
 unsigned long sem_ctime_high;

 unsigned long sem_nsems;
 unsigned long __unused3;
 unsigned long __unused4;
};
# 2 "./arch/hexagon/include/generated/uapi/asm/sembuf.h" 2
# 38 "../include/uapi/linux/sem.h" 2


struct sembuf {
 unsigned short sem_num;
 short sem_op;
 short sem_flg;
};


union semun {
 int val;
 struct semid_ds *buf;
 unsigned short *array;
 struct seminfo *__buf;
 void *__pad;
};

struct seminfo {
 int semmap;
 int semmni;
 int semmns;
 int semmnu;
 int semmsl;
 int semopm;
 int semume;
 int semusz;
 int semvmx;
 int semaem;
};
# 6 "../include/linux/sem.h" 2


struct task_struct;



extern int copy_semundo(unsigned long clone_flags, struct task_struct *tsk);
extern void exit_sem(struct task_struct *tsk);
# 15 "../include/linux/compat.h" 2
# 1 "../include/linux/socket.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/socket.h" 1
# 1 "../include/uapi/asm-generic/socket.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/sockios.h" 1
# 1 "../include/uapi/asm-generic/sockios.h" 1
# 2 "./arch/hexagon/include/generated/uapi/asm/sockios.h" 2
# 7 "../include/uapi/asm-generic/socket.h" 2
# 2 "./arch/hexagon/include/generated/uapi/asm/socket.h" 2
# 7 "../include/linux/socket.h" 2
# 1 "../include/uapi/linux/sockios.h" 1
# 22 "../include/uapi/linux/sockios.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/bitsperlong.h" 1
# 23 "../include/uapi/linux/sockios.h" 2
# 1 "./arch/hexagon/include/generated/uapi/asm/sockios.h" 1
# 24 "../include/uapi/linux/sockios.h" 2
# 8 "../include/linux/socket.h" 2
# 1 "../include/linux/uio.h" 1
# 11 "../include/linux/uio.h"
# 1 "../include/uapi/linux/uio.h" 1
# 17 "../include/uapi/linux/uio.h"
struct iovec
{
 void *iov_base;
 __kernel_size_t iov_len;
};
# 12 "../include/linux/uio.h" 2

struct page;

typedef unsigned int iov_iter_extraction_t;

struct kvec {
 void *iov_base;
 size_t iov_len;
};

enum iter_type {

 ITER_UBUF,
 ITER_IOVEC,
 ITER_BVEC,
 ITER_KVEC,
 ITER_XARRAY,
 ITER_DISCARD,
};




struct iov_iter_state {
 size_t iov_offset;
 size_t count;
 unsigned long nr_segs;
};

struct iov_iter {
 u8 iter_type;
 bool nofault;
 bool data_source;
 size_t iov_offset;
# 56 "../include/linux/uio.h"
 union {





  struct iovec __ubuf_iovec;
  struct {
   union {

    const struct iovec *__iov;
    const struct kvec *kvec;
    const struct bio_vec *bvec;
    struct xarray *xarray;
    void *ubuf;
   };
   size_t count;
  };
 };
 union {
  unsigned long nr_segs;
  loff_t xarray_start;
 };
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct iovec *iter_iov(const struct iov_iter *iter)
{
 if (iter->iter_type == ITER_UBUF)
  return (const struct iovec *) &iter->__ubuf_iovec;
 return iter->__iov;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum iter_type iov_iter_type(const struct iov_iter *i)
{
 return i->iter_type;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iov_iter_save_state(struct iov_iter *iter,
           struct iov_iter_state *state)
{
 state->iov_offset = iter->iov_offset;
 state->count = iter->count;
 state->nr_segs = iter->nr_segs;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iter_is_ubuf(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_UBUF;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iter_is_iovec(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_IOVEC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iov_iter_is_kvec(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_KVEC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iov_iter_is_bvec(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_BVEC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iov_iter_is_discard(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_DISCARD;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iov_iter_is_xarray(const struct iov_iter *i)
{
 return iov_iter_type(i) == ITER_XARRAY;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char iov_iter_rw(const struct iov_iter *i)
{
 return i->data_source ? 1 : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool user_backed_iter(const struct iov_iter *i)
{
 return iter_is_ubuf(i) || iter_is_iovec(i);
}
# 151 "../include/linux/uio.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t iov_length(const struct iovec *iov, unsigned long nr_segs)
{
 unsigned long seg;
 size_t ret = 0;

 for (seg = 0; seg < nr_segs; seg++)
  ret += iov[seg].iov_len;
 return ret;
}

size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
      size_t bytes, struct iov_iter *i);
void iov_iter_advance(struct iov_iter *i, size_t bytes);
void iov_iter_revert(struct iov_iter *i, size_t bytes);
size_t fault_in_iov_iter_readable(const struct iov_iter *i, size_t bytes);
size_t fault_in_iov_iter_writeable(const struct iov_iter *i, size_t bytes);
size_t iov_iter_single_seg_count(const struct iov_iter *i);
size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
    struct iov_iter *i);
size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
    struct iov_iter *i);

size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i);
size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t copy_folio_to_iter(struct folio *folio, size_t offset,
  size_t bytes, struct iov_iter *i)
{
 return copy_page_to_iter(&folio->page, offset, bytes, i);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t copy_folio_from_iter_atomic(struct folio *folio,
  size_t offset, size_t bytes, struct iov_iter *i)
{
 return copy_page_from_iter_atomic(&folio->page, offset, bytes, i);
}

size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
     size_t bytes, struct iov_iter *i);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
 if (check_copy_size(addr, bytes, true))
  return _copy_to_iter(addr, bytes, i);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
{
 if (check_copy_size(addr, bytes, false))
  return _copy_from_iter(addr, bytes, i);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
bool copy_to_iter_full(const void *addr, size_t bytes, struct iov_iter *i)
{
 size_t copied = copy_to_iter(addr, bytes, i);
 if (__builtin_expect(!!(copied == bytes), 1))
  return true;
 iov_iter_revert(i, copied);
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
{
 size_t copied = copy_from_iter(addr, bytes, i);
 if (__builtin_expect(!!(copied == bytes), 1))
  return true;
 iov_iter_revert(i, copied);
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
{
 if (check_copy_size(addr, bytes, false))
  return _copy_from_iter_nocache(addr, bytes, i);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__))
bool copy_from_iter_full_nocache(void *addr, size_t bytes, struct iov_iter *i)
{
 size_t copied = copy_from_iter_nocache(addr, bytes, i);
 if (__builtin_expect(!!(copied == bytes), 1))
  return true;
 iov_iter_revert(i, copied);
 return false;
}
# 264 "../include/linux/uio.h"
size_t iov_iter_zero(size_t bytes, struct iov_iter *);
bool iov_iter_is_aligned(const struct iov_iter *i, unsigned addr_mask,
   unsigned len_mask);
unsigned long iov_iter_alignment(const struct iov_iter *i);
unsigned long iov_iter_gap_alignment(const struct iov_iter *i);
void iov_iter_init(struct iov_iter *i, unsigned int direction, const struct iovec *iov,
   unsigned long nr_segs, size_t count);
void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec *kvec,
   unsigned long nr_segs, size_t count);
void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec,
   unsigned long nr_segs, size_t count);
void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray,
       loff_t start, size_t count);
ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
   size_t maxsize, unsigned maxpages, size_t *start);
ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i, struct page ***pages,
   size_t maxsize, size_t *start);
int iov_iter_npages(const struct iov_iter *i, int maxpages);
void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);

const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t iov_iter_count(const struct iov_iter *i)
{
 return i->count;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iov_iter_truncate(struct iov_iter *i, u64 count)
{






 if (i->count > count)
  i->count = count;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iov_iter_reexpand(struct iov_iter *i, size_t count)
{
 i->count = count;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
iov_iter_npages_cap(struct iov_iter *i, int maxpages, size_t max_bytes)
{
 size_t shorted = 0;
 int npages;

 if (iov_iter_count(i) > max_bytes) {
  shorted = iov_iter_count(i) - max_bytes;
  iov_iter_truncate(i, max_bytes);
 }
 npages = iov_iter_npages(i, maxpages);
 if (shorted)
  iov_iter_reexpand(i, iov_iter_count(i) + shorted);

 return npages;
}

struct iovec *iovec_from_user(const struct iovec *uvector,
  unsigned long nr_segs, unsigned long fast_segs,
  struct iovec *fast_iov, bool compat);
ssize_t import_iovec(int type, const struct iovec *uvec,
   unsigned nr_segs, unsigned fast_segs, struct iovec **iovp,
   struct iov_iter *i);
ssize_t __import_iovec(int type, const struct iovec *uvec,
   unsigned nr_segs, unsigned fast_segs, struct iovec **iovp,
   struct iov_iter *i, bool compat);
int import_ubuf(int type, void *buf, size_t len, struct iov_iter *i);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
   void *buf, size_t count)
{
 ({ int __ret_warn_on = !!(direction & ~(0 | 1)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/uio.h", 350, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 *i = (struct iov_iter) {
  .iter_type = ITER_UBUF,
  .data_source = direction,
  .ubuf = buf,
  .count = count,
  .nr_segs = 1
 };
}




ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
          size_t maxsize, unsigned int maxpages,
          iov_iter_extraction_t extraction_flags,
          size_t *offset0);
# 384 "../include/linux/uio.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool iov_iter_extract_will_pin(const struct iov_iter *iter)
{
 return user_backed_iter(iter);
}

struct sg_table;
ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t len,
      struct sg_table *sgtable, unsigned int sg_max,
      iov_iter_extraction_t extraction_flags);
# 9 "../include/linux/socket.h" 2


# 1 "../include/uapi/linux/socket.h" 1
# 10 "../include/uapi/linux/socket.h"
typedef unsigned short __kernel_sa_family_t;





struct __kernel_sockaddr_storage {
 union {
  struct {
   __kernel_sa_family_t ss_family;

   char __data[128 - sizeof(unsigned short)];


  };
  void *__align;
 };
};
# 12 "../include/linux/socket.h" 2

struct file;
struct pid;
struct cred;
struct socket;
struct sock;
struct sk_buff;
struct proto_accept_arg;





struct seq_file;
extern void socket_seq_show(struct seq_file *seq);


typedef __kernel_sa_family_t sa_family_t;





struct sockaddr {
 sa_family_t sa_family;
 union {
  char sa_data_min[14];
  struct { struct { } __empty_sa_data; char sa_data[]; };
 };
};

struct linger {
 int l_onoff;
 int l_linger;
};
# 56 "../include/linux/socket.h"
struct msghdr {
 void *msg_name;
 int msg_namelen;

 int msg_inq;

 struct iov_iter msg_iter;






 union {
  void *msg_control;
  void *msg_control_user;
 };
 bool msg_control_is_user : 1;
 bool msg_get_inq : 1;
 unsigned int msg_flags;
 __kernel_size_t msg_controllen;
 struct kiocb *msg_iocb;
 struct ubuf_info *msg_ubuf;
 int (*sg_from_iter)(struct sk_buff *skb,
       struct iov_iter *from, size_t length);
};

struct user_msghdr {
 void *msg_name;
 int msg_namelen;
 struct iovec *msg_iov;
 __kernel_size_t msg_iovlen;
 void *msg_control;
 __kernel_size_t msg_controllen;
 unsigned int msg_flags;
};


struct mmsghdr {
 struct user_msghdr msg_hdr;
 unsigned int msg_len;
};







struct cmsghdr {
 __kernel_size_t cmsg_len;
        int cmsg_level;
        int cmsg_type;
};
# 154 "../include/linux/socket.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cmsghdr * __cmsg_nxthdr(void *__ctl, __kernel_size_t __size,
            struct cmsghdr *__cmsg)
{
 struct cmsghdr * __ptr;

 __ptr = (struct cmsghdr*)(((unsigned char *) __cmsg) + ( ((__cmsg->cmsg_len)+sizeof(long)-1) & ~(sizeof(long)-1) ));
 if ((unsigned long)((char*)(__ptr+1) - (char *) __ctl) > __size)
  return (struct cmsghdr *)0;

 return __ptr;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cmsghdr * cmsg_nxthdr (struct msghdr *__msg, struct cmsghdr *__cmsg)
{
 return __cmsg_nxthdr(__msg->msg_control, __msg->msg_controllen, __cmsg);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t msg_data_left(struct msghdr *msg)
{
 return iov_iter_count(&msg->msg_iter);
}
# 183 "../include/linux/socket.h"
struct ucred {
 __u32 pid;
 __u32 uid;
 __u32 gid;
};
# 392 "../include/linux/socket.h"
extern int move_addr_to_kernel(void *uaddr, int ulen, struct __kernel_sockaddr_storage *kaddr);
extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data);

struct timespec64;
struct __kernel_timespec;
struct old_timespec32;

struct scm_timestamping_internal {
 struct timespec64 ts[3];
};

extern void put_cmsg_scm_timestamping64(struct msghdr *msg, struct scm_timestamping_internal *tss);
extern void put_cmsg_scm_timestamping(struct msghdr *msg, struct scm_timestamping_internal *tss);




extern long __sys_recvmsg(int fd, struct user_msghdr *msg,
     unsigned int flags, bool forbid_cmsg_compat);
extern long __sys_sendmsg(int fd, struct user_msghdr *msg,
     unsigned int flags, bool forbid_cmsg_compat);
extern int __sys_recvmmsg(int fd, struct mmsghdr *mmsg,
     unsigned int vlen, unsigned int flags,
     struct __kernel_timespec *timeout,
     struct old_timespec32 *timeout32);
extern int __sys_sendmmsg(int fd, struct mmsghdr *mmsg,
     unsigned int vlen, unsigned int flags,
     bool forbid_cmsg_compat);
extern long __sys_sendmsg_sock(struct socket *sock, struct msghdr *msg,
          unsigned int flags);
extern long __sys_recvmsg_sock(struct socket *sock, struct msghdr *msg,
          struct user_msghdr *umsg,
          struct sockaddr *uaddr,
          unsigned int flags);
extern int __copy_msghdr(struct msghdr *kmsg,
    struct user_msghdr *umsg,
    struct sockaddr **save_addr);


extern int __sys_recvfrom(int fd, void *ubuf, size_t size,
     unsigned int flags, struct sockaddr *addr,
     int *addr_len);
extern int __sys_sendto(int fd, void *buff, size_t len,
   unsigned int flags, struct sockaddr *addr,
   int addr_len);
extern struct file *do_accept(struct file *file, struct proto_accept_arg *arg,
         struct sockaddr *upeer_sockaddr,
         int *upeer_addrlen, int flags);
extern int __sys_accept4(int fd, struct sockaddr *upeer_sockaddr,
    int *upeer_addrlen, int flags);
extern int __sys_socket(int family, int type, int protocol);
extern struct file *__sys_socket_file(int family, int type, int protocol);
extern int __sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
extern int __sys_bind_socket(struct socket *sock, struct __kernel_sockaddr_storage *address,
        int addrlen);
extern int __sys_connect_file(struct file *file, struct __kernel_sockaddr_storage *addr,
         int addrlen, int file_flags);
extern int __sys_connect(int fd, struct sockaddr *uservaddr,
    int addrlen);
extern int __sys_listen(int fd, int backlog);
extern int __sys_listen_socket(struct socket *sock, int backlog);
extern int __sys_getsockname(int fd, struct sockaddr *usockaddr,
        int *usockaddr_len);
extern int __sys_getpeername(int fd, struct sockaddr *usockaddr,
        int *usockaddr_len);
extern int __sys_socketpair(int family, int type, int protocol,
       int *usockvec);
extern int __sys_shutdown_sock(struct socket *sock, int how);
extern int __sys_shutdown(int fd, int how);
# 16 "../include/linux/compat.h" 2
# 1 "../include/uapi/linux/if.h" 1
# 23 "../include/uapi/linux/if.h"
# 1 "../include/uapi/linux/libc-compat.h" 1
# 24 "../include/uapi/linux/if.h" 2
# 37 "../include/uapi/linux/if.h"
# 1 "../include/uapi/linux/hdlc/ioctl.h" 1
# 40 "../include/uapi/linux/hdlc/ioctl.h"
typedef struct {
 unsigned int clock_rate;
 unsigned int clock_type;
 unsigned short loopback;
} sync_serial_settings;

typedef struct {
 unsigned int clock_rate;
 unsigned int clock_type;
 unsigned short loopback;
 unsigned int slot_map;
} te1_settings;

typedef struct {
 unsigned short encoding;
 unsigned short parity;
} raw_hdlc_proto;

typedef struct {
 unsigned int t391;
 unsigned int t392;
 unsigned int n391;
 unsigned int n392;
 unsigned int n393;
 unsigned short lmi;
 unsigned short dce;
} fr_proto;

typedef struct {
 unsigned int dlci;
} fr_proto_pvc;

typedef struct {
 unsigned int dlci;
 char master[16];
} fr_proto_pvc_info;

typedef struct {
    unsigned int interval;
    unsigned int timeout;
} cisco_proto;

typedef struct {
 unsigned short dce;
 unsigned int modulo;
 unsigned int window;
 unsigned int t1;
 unsigned int t2;
 unsigned int n2;
} x25_hdlc_proto;
# 38 "../include/uapi/linux/if.h" 2
# 82 "../include/uapi/linux/if.h"
enum net_device_flags {


 IFF_UP = 1<<0,
 IFF_BROADCAST = 1<<1,
 IFF_DEBUG = 1<<2,
 IFF_LOOPBACK = 1<<3,
 IFF_POINTOPOINT = 1<<4,
 IFF_NOTRAILERS = 1<<5,
 IFF_RUNNING = 1<<6,
 IFF_NOARP = 1<<7,
 IFF_PROMISC = 1<<8,
 IFF_ALLMULTI = 1<<9,
 IFF_MASTER = 1<<10,
 IFF_SLAVE = 1<<11,
 IFF_MULTICAST = 1<<12,
 IFF_PORTSEL = 1<<13,
 IFF_AUTOMEDIA = 1<<14,
 IFF_DYNAMIC = 1<<15,


 IFF_LOWER_UP = 1<<16,
 IFF_DORMANT = 1<<17,
 IFF_ECHO = 1<<18,

};
# 167 "../include/uapi/linux/if.h"
enum {
 IF_OPER_UNKNOWN,
 IF_OPER_NOTPRESENT,
 IF_OPER_DOWN,
 IF_OPER_LOWERLAYERDOWN,
 IF_OPER_TESTING,
 IF_OPER_DORMANT,
 IF_OPER_UP,
};


enum {
 IF_LINK_MODE_DEFAULT,
 IF_LINK_MODE_DORMANT,
 IF_LINK_MODE_TESTING,
};
# 196 "../include/uapi/linux/if.h"
struct ifmap {
 unsigned long mem_start;
 unsigned long mem_end;
 unsigned short base_addr;
 unsigned char irq;
 unsigned char dma;
 unsigned char port;

};


struct if_settings {
 unsigned int type;
 unsigned int size;
 union {

  raw_hdlc_proto *raw_hdlc;
  cisco_proto *cisco;
  fr_proto *fr;
  fr_proto_pvc *fr_pvc;
  fr_proto_pvc_info *fr_pvc_info;
  x25_hdlc_proto *x25;


  sync_serial_settings *sync;
  te1_settings *te1;
 } ifs_ifsu;
};
# 234 "../include/uapi/linux/if.h"
struct ifreq {

 union
 {
  char ifrn_name[16];
 } ifr_ifrn;

 union {
  struct sockaddr ifru_addr;
  struct sockaddr ifru_dstaddr;
  struct sockaddr ifru_broadaddr;
  struct sockaddr ifru_netmask;
  struct sockaddr ifru_hwaddr;
  short ifru_flags;
  int ifru_ivalue;
  int ifru_mtu;
  struct ifmap ifru_map;
  char ifru_slave[16];
  char ifru_newname[16];
  void * ifru_data;
  struct if_settings ifru_settings;
 } ifr_ifru;
};
# 286 "../include/uapi/linux/if.h"
struct ifconf {
 int ifc_len;
 union {
  char *ifcu_buf;
  struct ifreq *ifcu_req;
 } ifc_ifcu;
};
# 17 "../include/linux/compat.h" 2
# 1 "../include/linux/fs.h" 1





# 1 "../include/linux/wait_bit.h" 1
# 10 "../include/linux/wait_bit.h"
struct wait_bit_key {
 void *flags;
 int bit_nr;
 unsigned long timeout;
};

struct wait_bit_queue_entry {
 struct wait_bit_key key;
 struct wait_queue_entry wq_entry;
};




typedef int wait_bit_action_f(struct wait_bit_key *key, int mode);

void __wake_up_bit(struct wait_queue_head *wq_head, void *word, int bit);
int __wait_on_bit(struct wait_queue_head *wq_head, struct wait_bit_queue_entry *wbq_entry, wait_bit_action_f *action, unsigned int mode);
int __wait_on_bit_lock(struct wait_queue_head *wq_head, struct wait_bit_queue_entry *wbq_entry, wait_bit_action_f *action, unsigned int mode);
void wake_up_bit(void *word, int bit);
int out_of_line_wait_on_bit(void *word, int, wait_bit_action_f *action, unsigned int mode);
int out_of_line_wait_on_bit_timeout(void *word, int, wait_bit_action_f *action, unsigned int mode, unsigned long timeout);
int out_of_line_wait_on_bit_lock(void *word, int, wait_bit_action_f *action, unsigned int mode);
struct wait_queue_head *bit_waitqueue(void *word, int bit);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) wait_bit_init(void);

int wake_bit_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
# 49 "../include/linux/wait_bit.h"
extern int bit_wait(struct wait_bit_key *key, int mode);
extern int bit_wait_io(struct wait_bit_key *key, int mode);
extern int bit_wait_timeout(struct wait_bit_key *key, int mode);
extern int bit_wait_io_timeout(struct wait_bit_key *key, int mode);
# 70 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit(unsigned long *word, int bit, unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!((__builtin_constant_p(bit) && __builtin_constant_p((uintptr_t)(word) != (uintptr_t)((void *)0)) && (uintptr_t)(word) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(word))) ? generic_test_bit_acquire(bit, word) : arch_test_bit_acquire(bit, word)))
  return 0;
 return out_of_line_wait_on_bit(word, bit,
           bit_wait,
           mode);
}
# 95 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_io(unsigned long *word, int bit, unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!((__builtin_constant_p(bit) && __builtin_constant_p((uintptr_t)(word) != (uintptr_t)((void *)0)) && (uintptr_t)(word) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(word))) ? generic_test_bit_acquire(bit, word) : arch_test_bit_acquire(bit, word)))
  return 0;
 return out_of_line_wait_on_bit(word, bit,
           bit_wait_io,
           mode);
}
# 121 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_timeout(unsigned long *word, int bit, unsigned mode,
      unsigned long timeout)
{
 do { do { } while (0); } while (0);
 if (!((__builtin_constant_p(bit) && __builtin_constant_p((uintptr_t)(word) != (uintptr_t)((void *)0)) && (uintptr_t)(word) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(word))) ? generic_test_bit_acquire(bit, word) : arch_test_bit_acquire(bit, word)))
  return 0;
 return out_of_line_wait_on_bit_timeout(word, bit,
            bit_wait_timeout,
            mode, timeout);
}
# 149 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_action(unsigned long *word, int bit, wait_bit_action_f *action,
     unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!((__builtin_constant_p(bit) && __builtin_constant_p((uintptr_t)(word) != (uintptr_t)((void *)0)) && (uintptr_t)(word) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(word))) ? generic_test_bit_acquire(bit, word) : arch_test_bit_acquire(bit, word)))
  return 0;
 return out_of_line_wait_on_bit(word, bit, action, mode);
}
# 178 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_lock(unsigned long *word, int bit, unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!test_and_set_bit(bit, word))
  return 0;
 return out_of_line_wait_on_bit_lock(word, bit, bit_wait, mode);
}
# 202 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_lock_io(unsigned long *word, int bit, unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!test_and_set_bit(bit, word))
  return 0;
 return out_of_line_wait_on_bit_lock(word, bit, bit_wait_io, mode);
}
# 228 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
wait_on_bit_lock_action(unsigned long *word, int bit, wait_bit_action_f *action,
   unsigned mode)
{
 do { do { } while (0); } while (0);
 if (!test_and_set_bit(bit, word))
  return 0;
 return out_of_line_wait_on_bit_lock(word, bit, action, mode);
}

extern void init_wait_var_entry(struct wait_bit_queue_entry *wbq_entry, void *var, int flags);
extern void wake_up_var(void *var);
extern wait_queue_head_t *__var_waitqueue(void *p);
# 330 "../include/linux/wait_bit.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_and_wake_up_bit(int bit, void *word)
{
 clear_bit_unlock(bit, word);

 __asm__ __volatile__("": : :"memory");
 wake_up_bit(word, bit);
}
# 7 "../include/linux/fs.h" 2

# 1 "../include/linux/dcache.h" 1







# 1 "../include/linux/rculist.h" 1
# 22 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void INIT_LIST_HEAD_RCU(struct list_head *list)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_158(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->next) == sizeof(char) || sizeof(list->next) == sizeof(short) || sizeof(list->next) == sizeof(int) || sizeof(list->next) == sizeof(long)) || sizeof(list->next) == sizeof(long long))) __compiletime_assert_158(); } while (0); do { *(volatile typeof(list->next) *)&(list->next) = (list); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_159(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->prev) == sizeof(char) || sizeof(list->prev) == sizeof(short) || sizeof(list->prev) == sizeof(int) || sizeof(list->prev) == sizeof(long)) || sizeof(list->prev) == sizeof(long long))) __compiletime_assert_159(); } while (0); do { *(volatile typeof(list->prev) *)&(list->prev) = (list); } while (0); } while (0);
}
# 76 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_add_rcu(struct list_head *new,
  struct list_head *prev, struct list_head *next)
{
 if (!__list_add_valid(new, prev, next))
  return;

 new->next = next;
 new->prev = prev;
 do { uintptr_t _r_a_p__v = (uintptr_t)(new); ; if (__builtin_constant_p(new) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_160(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(char) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(short) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(int) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(long)) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(long long))) __compiletime_assert_160(); } while (0); do { *(volatile typeof(((*((struct list_head **)(&(prev)->next))))) *)&(((*((struct list_head **)(&(prev)->next))))) = ((typeof((*((struct list_head **)(&(prev)->next)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_161(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(char) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(short) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(int) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(long)) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(long long))) __compiletime_assert_161(); } while (0); do { *(volatile typeof(*&(*((struct list_head **)(&(prev)->next)))) *)&(*&(*((struct list_head **)(&(prev)->next)))) = ((typeof(*((typeof((*((struct list_head **)(&(prev)->next)))))_r_a_p__v)) *)((typeof((*((struct list_head **)(&(prev)->next)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 next->prev = new;
}
# 104 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_add_rcu(struct list_head *new, struct list_head *head)
{
 __list_add_rcu(new, head, head->next);
}
# 125 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_add_tail_rcu(struct list_head *new,
     struct list_head *head)
{
 __list_add_rcu(new, head->prev, head);
}
# 155 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_del_rcu(struct list_head *entry)
{
 __list_del_entry(entry);
 entry->prev = ((void *) 0x122 + 0);
}
# 181 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_del_init_rcu(struct hlist_node *n)
{
 if (!hlist_unhashed(n)) {
  __hlist_del(n);
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_162(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_162(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (((void *)0)); } while (0); } while (0);
 }
}
# 197 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_replace_rcu(struct list_head *old,
    struct list_head *new)
{
 new->next = old->next;
 new->prev = old->prev;
 do { uintptr_t _r_a_p__v = (uintptr_t)(new); ; if (__builtin_constant_p(new) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_163(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct list_head **)(&(new->prev)->next))))) == sizeof(char) || sizeof(((*((struct list_head **)(&(new->prev)->next))))) == sizeof(short) || sizeof(((*((struct list_head **)(&(new->prev)->next))))) == sizeof(int) || sizeof(((*((struct list_head **)(&(new->prev)->next))))) == sizeof(long)) || sizeof(((*((struct list_head **)(&(new->prev)->next))))) == sizeof(long long))) __compiletime_assert_163(); } while (0); do { *(volatile typeof(((*((struct list_head **)(&(new->prev)->next))))) *)&(((*((struct list_head **)(&(new->prev)->next))))) = ((typeof((*((struct list_head **)(&(new->prev)->next)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_164(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct list_head **)(&(new->prev)->next)))) == sizeof(char) || sizeof(*&(*((struct list_head **)(&(new->prev)->next)))) == sizeof(short) || sizeof(*&(*((struct list_head **)(&(new->prev)->next)))) == sizeof(int) || sizeof(*&(*((struct list_head **)(&(new->prev)->next)))) == sizeof(long)) || sizeof(*&(*((struct list_head **)(&(new->prev)->next)))) == sizeof(long long))) __compiletime_assert_164(); } while (0); do { *(volatile typeof(*&(*((struct list_head **)(&(new->prev)->next)))) *)&(*&(*((struct list_head **)(&(new->prev)->next)))) = ((typeof(*((typeof((*((struct list_head **)(&(new->prev)->next)))))_r_a_p__v)) *)((typeof((*((struct list_head **)(&(new->prev)->next)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 new->next->prev = new;
 old->prev = ((void *) 0x122 + 0);
}
# 226 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __list_splice_init_rcu(struct list_head *list,
       struct list_head *prev,
       struct list_head *next,
       void (*sync)(void))
{
 struct list_head *first = list->next;
 struct list_head *last = list->prev;







 INIT_LIST_HEAD_RCU(list);
# 249 "../include/linux/rculist.h"
 sync();
 __kcsan_check_access(&(*first), sizeof(*first), (1 << 0) | (1 << 3));
 __kcsan_check_access(&(*last), sizeof(*last), (1 << 0) | (1 << 3));
# 261 "../include/linux/rculist.h"
 last->next = next;
 do { uintptr_t _r_a_p__v = (uintptr_t)(first); ; if (__builtin_constant_p(first) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_165(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(char) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(short) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(int) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(long)) || sizeof(((*((struct list_head **)(&(prev)->next))))) == sizeof(long long))) __compiletime_assert_165(); } while (0); do { *(volatile typeof(((*((struct list_head **)(&(prev)->next))))) *)&(((*((struct list_head **)(&(prev)->next))))) = ((typeof((*((struct list_head **)(&(prev)->next)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_166(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(char) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(short) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(int) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(long)) || sizeof(*&(*((struct list_head **)(&(prev)->next)))) == sizeof(long long))) __compiletime_assert_166(); } while (0); do { *(volatile typeof(*&(*((struct list_head **)(&(prev)->next)))) *)&(*&(*((struct list_head **)(&(prev)->next)))) = ((typeof(*((typeof((*((struct list_head **)(&(prev)->next)))))_r_a_p__v)) *)((typeof((*((struct list_head **)(&(prev)->next)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 first->prev = prev;
 next->prev = last;
}
# 274 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice_init_rcu(struct list_head *list,
     struct list_head *head,
     void (*sync)(void))
{
 if (!list_empty(list))
  __list_splice_init_rcu(list, head, head->next, sync);
}
# 289 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void list_splice_tail_init_rcu(struct list_head *list,
          struct list_head *head,
          void (*sync)(void))
{
 if (!list_empty(list))
  __list_splice_init_rcu(list, head->prev, head, sync);
}
# 511 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_del_rcu(struct hlist_node *n)
{
 __hlist_del(n);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_167(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_167(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (((void *) 0x122 + 0)); } while (0); } while (0);
}
# 524 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_replace_rcu(struct hlist_node *old,
     struct hlist_node *new)
{
 struct hlist_node *next = old->next;

 new->next = next;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_168(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(new->pprev) == sizeof(char) || sizeof(new->pprev) == sizeof(short) || sizeof(new->pprev) == sizeof(int) || sizeof(new->pprev) == sizeof(long)) || sizeof(new->pprev) == sizeof(long long))) __compiletime_assert_168(); } while (0); do { *(volatile typeof(new->pprev) *)&(new->pprev) = (old->pprev); } while (0); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(new); ; if (__builtin_constant_p(new) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_169(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((*(struct hlist_node **)new->pprev)) == sizeof(char) || sizeof((*(struct hlist_node **)new->pprev)) == sizeof(short) || sizeof((*(struct hlist_node **)new->pprev)) == sizeof(int) || sizeof((*(struct hlist_node **)new->pprev)) == sizeof(long)) || sizeof((*(struct hlist_node **)new->pprev)) == sizeof(long long))) __compiletime_assert_169(); } while (0); do { *(volatile typeof((*(struct hlist_node **)new->pprev)) *)&((*(struct hlist_node **)new->pprev)) = ((typeof(*(struct hlist_node **)new->pprev))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_170(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&*(struct hlist_node **)new->pprev) == sizeof(char) || sizeof(*&*(struct hlist_node **)new->pprev) == sizeof(short) || sizeof(*&*(struct hlist_node **)new->pprev) == sizeof(int) || sizeof(*&*(struct hlist_node **)new->pprev) == sizeof(long)) || sizeof(*&*(struct hlist_node **)new->pprev) == sizeof(long long))) __compiletime_assert_170(); } while (0); do { *(volatile typeof(*&*(struct hlist_node **)new->pprev) *)&(*&*(struct hlist_node **)new->pprev) = ((typeof(*((typeof(*(struct hlist_node **)new->pprev))_r_a_p__v)) *)((typeof(*(struct hlist_node **)new->pprev))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 if (next)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_171(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(new->next->pprev) == sizeof(char) || sizeof(new->next->pprev) == sizeof(short) || sizeof(new->next->pprev) == sizeof(int) || sizeof(new->next->pprev) == sizeof(long)) || sizeof(new->next->pprev) == sizeof(long long))) __compiletime_assert_171(); } while (0); do { *(volatile typeof(new->next->pprev) *)&(new->next->pprev) = (&new->next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_172(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(old->pprev) == sizeof(char) || sizeof(old->pprev) == sizeof(short) || sizeof(old->pprev) == sizeof(int) || sizeof(old->pprev) == sizeof(long)) || sizeof(old->pprev) == sizeof(long long))) __compiletime_assert_172(); } while (0); do { *(volatile typeof(old->pprev) *)&(old->pprev) = (((void *) 0x122 + 0)); } while (0); } while (0);
}
# 547 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlists_swap_heads_rcu(struct hlist_head *left, struct hlist_head *right)
{
 struct hlist_node *node1 = left->first;
 struct hlist_node *node2 = right->first;

 do { uintptr_t _r_a_p__v = (uintptr_t)(node2); ; if (__builtin_constant_p(node2) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_173(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((left->first)) == sizeof(char) || sizeof((left->first)) == sizeof(short) || sizeof((left->first)) == sizeof(int) || sizeof((left->first)) == sizeof(long)) || sizeof((left->first)) == sizeof(long long))) __compiletime_assert_173(); } while (0); do { *(volatile typeof((left->first)) *)&((left->first)) = ((typeof(left->first))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_174(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&left->first) == sizeof(char) || sizeof(*&left->first) == sizeof(short) || sizeof(*&left->first) == sizeof(int) || sizeof(*&left->first) == sizeof(long)) || sizeof(*&left->first) == sizeof(long long))) __compiletime_assert_174(); } while (0); do { *(volatile typeof(*&left->first) *)&(*&left->first) = ((typeof(*((typeof(left->first))_r_a_p__v)) *)((typeof(left->first))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(node1); ; if (__builtin_constant_p(node1) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_175(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((right->first)) == sizeof(char) || sizeof((right->first)) == sizeof(short) || sizeof((right->first)) == sizeof(int) || sizeof((right->first)) == sizeof(long)) || sizeof((right->first)) == sizeof(long long))) __compiletime_assert_175(); } while (0); do { *(volatile typeof((right->first)) *)&((right->first)) = ((typeof(right->first))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_176(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&right->first) == sizeof(char) || sizeof(*&right->first) == sizeof(short) || sizeof(*&right->first) == sizeof(int) || sizeof(*&right->first) == sizeof(long)) || sizeof(*&right->first) == sizeof(long long))) __compiletime_assert_176(); } while (0); do { *(volatile typeof(*&right->first) *)&(*&right->first) = ((typeof(*((typeof(right->first))_r_a_p__v)) *)((typeof(right->first))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_177(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(node2->pprev) == sizeof(char) || sizeof(node2->pprev) == sizeof(short) || sizeof(node2->pprev) == sizeof(int) || sizeof(node2->pprev) == sizeof(long)) || sizeof(node2->pprev) == sizeof(long long))) __compiletime_assert_177(); } while (0); do { *(volatile typeof(node2->pprev) *)&(node2->pprev) = (&left->first); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_178(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(node1->pprev) == sizeof(char) || sizeof(node1->pprev) == sizeof(short) || sizeof(node1->pprev) == sizeof(int) || sizeof(node1->pprev) == sizeof(long)) || sizeof(node1->pprev) == sizeof(long long))) __compiletime_assert_178(); } while (0); do { *(volatile typeof(node1->pprev) *)&(node1->pprev) = (&right->first); } while (0); } while (0);
}
# 584 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_head_rcu(struct hlist_node *n,
     struct hlist_head *h)
{
 struct hlist_node *first = h->first;

 n->next = first;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_179(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_179(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&h->first); } while (0); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_180(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_node **)(&(h)->first))))) == sizeof(char) || sizeof(((*((struct hlist_node **)(&(h)->first))))) == sizeof(short) || sizeof(((*((struct hlist_node **)(&(h)->first))))) == sizeof(int) || sizeof(((*((struct hlist_node **)(&(h)->first))))) == sizeof(long)) || sizeof(((*((struct hlist_node **)(&(h)->first))))) == sizeof(long long))) __compiletime_assert_180(); } while (0); do { *(volatile typeof(((*((struct hlist_node **)(&(h)->first))))) *)&(((*((struct hlist_node **)(&(h)->first))))) = ((typeof((*((struct hlist_node **)(&(h)->first)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_181(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_node **)(&(h)->first)))) == sizeof(char) || sizeof(*&(*((struct hlist_node **)(&(h)->first)))) == sizeof(short) || sizeof(*&(*((struct hlist_node **)(&(h)->first)))) == sizeof(int) || sizeof(*&(*((struct hlist_node **)(&(h)->first)))) == sizeof(long)) || sizeof(*&(*((struct hlist_node **)(&(h)->first)))) == sizeof(long long))) __compiletime_assert_181(); } while (0); do { *(volatile typeof(*&(*((struct hlist_node **)(&(h)->first)))) *)&(*&(*((struct hlist_node **)(&(h)->first)))) = ((typeof(*((typeof((*((struct hlist_node **)(&(h)->first)))))_r_a_p__v)) *)((typeof((*((struct hlist_node **)(&(h)->first)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 if (first)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_182(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(first->pprev) == sizeof(char) || sizeof(first->pprev) == sizeof(short) || sizeof(first->pprev) == sizeof(int) || sizeof(first->pprev) == sizeof(long)) || sizeof(first->pprev) == sizeof(long long))) __compiletime_assert_182(); } while (0); do { *(volatile typeof(first->pprev) *)&(first->pprev) = (&n->next); } while (0); } while (0);
}
# 615 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_tail_rcu(struct hlist_node *n,
          struct hlist_head *h)
{
 struct hlist_node *i, *last = ((void *)0);


 for (i = h->first; i; i = i->next)
  last = i;

 if (last) {
  n->next = last->next;
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_183(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_183(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&last->next); } while (0); } while (0);
  do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_184(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_node **)(&(last)->next))))) == sizeof(char) || sizeof(((*((struct hlist_node **)(&(last)->next))))) == sizeof(short) || sizeof(((*((struct hlist_node **)(&(last)->next))))) == sizeof(int) || sizeof(((*((struct hlist_node **)(&(last)->next))))) == sizeof(long)) || sizeof(((*((struct hlist_node **)(&(last)->next))))) == sizeof(long long))) __compiletime_assert_184(); } while (0); do { *(volatile typeof(((*((struct hlist_node **)(&(last)->next))))) *)&(((*((struct hlist_node **)(&(last)->next))))) = ((typeof((*((struct hlist_node **)(&(last)->next)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_185(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_node **)(&(last)->next)))) == sizeof(char) || sizeof(*&(*((struct hlist_node **)(&(last)->next)))) == sizeof(short) || sizeof(*&(*((struct hlist_node **)(&(last)->next)))) == sizeof(int) || sizeof(*&(*((struct hlist_node **)(&(last)->next)))) == sizeof(long)) || sizeof(*&(*((struct hlist_node **)(&(last)->next)))) == sizeof(long long))) __compiletime_assert_185(); } while (0); do { *(volatile typeof(*&(*((struct hlist_node **)(&(last)->next)))) *)&(*&(*((struct hlist_node **)(&(last)->next)))) = ((typeof(*((typeof((*((struct hlist_node **)(&(last)->next)))))_r_a_p__v)) *)((typeof((*((struct hlist_node **)(&(last)->next)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 } else {
  hlist_add_head_rcu(n, h);
 }
}
# 651 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_before_rcu(struct hlist_node *n,
     struct hlist_node *next)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_186(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_186(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (next->pprev); } while (0); } while (0);
 n->next = next;
 do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_187(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_node **)((n)->pprev))))) == sizeof(char) || sizeof(((*((struct hlist_node **)((n)->pprev))))) == sizeof(short) || sizeof(((*((struct hlist_node **)((n)->pprev))))) == sizeof(int) || sizeof(((*((struct hlist_node **)((n)->pprev))))) == sizeof(long)) || sizeof(((*((struct hlist_node **)((n)->pprev))))) == sizeof(long long))) __compiletime_assert_187(); } while (0); do { *(volatile typeof(((*((struct hlist_node **)((n)->pprev))))) *)&(((*((struct hlist_node **)((n)->pprev))))) = ((typeof((*((struct hlist_node **)((n)->pprev)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_188(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_node **)((n)->pprev)))) == sizeof(char) || sizeof(*&(*((struct hlist_node **)((n)->pprev)))) == sizeof(short) || sizeof(*&(*((struct hlist_node **)((n)->pprev)))) == sizeof(int) || sizeof(*&(*((struct hlist_node **)((n)->pprev)))) == sizeof(long)) || sizeof(*&(*((struct hlist_node **)((n)->pprev)))) == sizeof(long long))) __compiletime_assert_188(); } while (0); do { *(volatile typeof(*&(*((struct hlist_node **)((n)->pprev)))) *)&(*&(*((struct hlist_node **)((n)->pprev)))) = ((typeof(*((typeof((*((struct hlist_node **)((n)->pprev)))))_r_a_p__v)) *)((typeof((*((struct hlist_node **)((n)->pprev)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_189(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->pprev) == sizeof(char) || sizeof(next->pprev) == sizeof(short) || sizeof(next->pprev) == sizeof(int) || sizeof(next->pprev) == sizeof(long)) || sizeof(next->pprev) == sizeof(long long))) __compiletime_assert_189(); } while (0); do { *(volatile typeof(next->pprev) *)&(next->pprev) = (&n->next); } while (0); } while (0);
}
# 678 "../include/linux/rculist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_add_behind_rcu(struct hlist_node *n,
     struct hlist_node *prev)
{
 n->next = prev->next;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_190(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_190(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&prev->next); } while (0); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_191(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_node **)(&(prev)->next))))) == sizeof(char) || sizeof(((*((struct hlist_node **)(&(prev)->next))))) == sizeof(short) || sizeof(((*((struct hlist_node **)(&(prev)->next))))) == sizeof(int) || sizeof(((*((struct hlist_node **)(&(prev)->next))))) == sizeof(long)) || sizeof(((*((struct hlist_node **)(&(prev)->next))))) == sizeof(long long))) __compiletime_assert_191(); } while (0); do { *(volatile typeof(((*((struct hlist_node **)(&(prev)->next))))) *)&(((*((struct hlist_node **)(&(prev)->next))))) = ((typeof((*((struct hlist_node **)(&(prev)->next)))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_192(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_node **)(&(prev)->next)))) == sizeof(char) || sizeof(*&(*((struct hlist_node **)(&(prev)->next)))) == sizeof(short) || sizeof(*&(*((struct hlist_node **)(&(prev)->next)))) == sizeof(int) || sizeof(*&(*((struct hlist_node **)(&(prev)->next)))) == sizeof(long)) || sizeof(*&(*((struct hlist_node **)(&(prev)->next)))) == sizeof(long long))) __compiletime_assert_192(); } while (0); do { *(volatile typeof(*&(*((struct hlist_node **)(&(prev)->next)))) *)&(*&(*((struct hlist_node **)(&(prev)->next)))) = ((typeof(*((typeof((*((struct hlist_node **)(&(prev)->next)))))_r_a_p__v)) *)((typeof((*((struct hlist_node **)(&(prev)->next)))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 if (n->next)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_193(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next->pprev) == sizeof(char) || sizeof(n->next->pprev) == sizeof(short) || sizeof(n->next->pprev) == sizeof(int) || sizeof(n->next->pprev) == sizeof(long)) || sizeof(n->next->pprev) == sizeof(long long))) __compiletime_assert_193(); } while (0); do { *(volatile typeof(n->next->pprev) *)&(n->next->pprev) = (&n->next); } while (0); } while (0);
}
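[Annotation: the blocks above are the fully preprocessed expansions of the rculist.h helpers; each `__compiletime_assert_*` wrapper is just WRITE_ONCE()'s size check, and the long `_r_a_p__v` blocks are rcu_assign_pointer(). A simplified, userspace-compilable sketch of what they reduce to on this config (WRITE_ONCE modelled as a volatile store, rcu_assign_pointer as a compiler barrier plus that store — a sketch, not the kernel's exact definitions):]

```c
#include <assert.h>
#include <stddef.h>

struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

/* Reduced forms of the kernel macros: the expansions above are these
 * plus compile-time access-size assertions. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define barrier() __asm__ __volatile__("" : : : "memory")
#define rcu_assign_pointer(p, v) do { barrier(); WRITE_ONCE(p, v); } while (0)

static void hlist_add_head_rcu(struct hlist_node *n, struct hlist_head *h)
{
    struct hlist_node *first = h->first;

    n->next = first;
    WRITE_ONCE(n->pprev, &h->first);
    rcu_assign_pointer(h->first, n);   /* publish only after n is initialized */
    if (first)
        WRITE_ONCE(first->pprev, &n->next);
}

static void hlist_add_behind_rcu(struct hlist_node *n, struct hlist_node *prev)
{
    n->next = prev->next;
    WRITE_ONCE(n->pprev, &prev->next);
    rcu_assign_pointer(prev->next, n);
    if (n->next)
        WRITE_ONCE(n->next->pprev, &n->next);
}
```

The ordering is the point: the node is fully linked before the `rcu_assign_pointer()` store makes it visible to lockless readers.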
# 9 "../include/linux/dcache.h" 2
# 1 "../include/linux/rculist_bl.h" 1







# 1 "../include/linux/list_bl.h" 1





# 1 "../include/linux/bit_spinlock.h" 1
# 16 "../include/linux/bit_spinlock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bit_spin_lock(int bitnum, unsigned long *addr)
{







 __asm__ __volatile__("": : :"memory");

 while (__builtin_expect(!!(test_and_set_bit_lock(bitnum, addr)), 0)) {
  __asm__ __volatile__("": : :"memory");
  do {
   __vmyield();
  } while (((__builtin_constant_p(bitnum) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? const_test_bit(bitnum, addr) : arch_test_bit(bitnum, addr)));
  __asm__ __volatile__("": : :"memory");
 }

 (void)0;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bit_spin_trylock(int bitnum, unsigned long *addr)
{
 __asm__ __volatile__("": : :"memory");

 if (__builtin_expect(!!(test_and_set_bit_lock(bitnum, addr)), 0)) {
  __asm__ __volatile__("": : :"memory");
  return 0;
 }

 (void)0;
 return 1;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bit_spin_unlock(int bitnum, unsigned long *addr)
{

 do { if (__builtin_expect(!!(!((__builtin_constant_p(bitnum) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? const_test_bit(bitnum, addr) : arch_test_bit(bitnum, addr))), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/bit_spinlock.h", 60, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);


 clear_bit_unlock(bitnum, addr);

 __asm__ __volatile__("": : :"memory");
 (void)0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __bit_spin_unlock(int bitnum, unsigned long *addr)
{

 do { if (__builtin_expect(!!(!((__builtin_constant_p(bitnum) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? const_test_bit(bitnum, addr) : arch_test_bit(bitnum, addr))), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/bit_spinlock.h", 77, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);


 __clear_bit_unlock(bitnum, addr);

 __asm__ __volatile__("": : :"memory");
 (void)0;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bit_spin_is_locked(int bitnum, unsigned long *addr)
{

 return ((__builtin_constant_p(bitnum) && __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)((void *)0)) && (uintptr_t)(addr) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(addr))) ? const_test_bit(bitnum, addr) : arch_test_bit(bitnum, addr));





}
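[Annotation: the functions above are the expanded include/linux/bit_spinlock.h on this Hexagon config — bit_spin_lock() spins on test_and_set_bit_lock() with `__vmyield()` as the cpu_relax(). A userspace sketch of the same scheme, with the bit ops modelled by GCC __atomic builtins (an illustration of the technique, not the kernel's implementation):]

```c
#include <assert.h>

/* Bit `bitnum` of *addr is the lock.  test_and_set_bit_lock() is
 * modelled with an acquire fetch-or, clear_bit_unlock() with a
 * release fetch-and. */
static int test_and_set_bit_lock(int bitnum, unsigned long *addr)
{
    unsigned long mask = 1UL << bitnum;
    return (__atomic_fetch_or(addr, mask, __ATOMIC_ACQUIRE) & mask) != 0;
}

static void clear_bit_unlock(int bitnum, unsigned long *addr)
{
    __atomic_fetch_and(addr, ~(1UL << bitnum), __ATOMIC_RELEASE);
}

static void bit_spin_lock(int bitnum, unsigned long *addr)
{
    while (test_and_set_bit_lock(bitnum, addr))
        ;   /* the kernel version yields the CPU here */
}

static void bit_spin_unlock(int bitnum, unsigned long *addr)
{
    clear_bit_unlock(bitnum, addr);
}
```

The appeal is density: any spare bit in an existing word can serve as a lock, at the cost of no fairness and no debugging support.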
# 7 "../include/linux/list_bl.h" 2
# 34 "../include/linux/list_bl.h"
struct hlist_bl_head {
 struct hlist_bl_node *first;
};

struct hlist_bl_node {
 struct hlist_bl_node *next, **pprev;
};



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
{
 h->next = ((void *)0);
 h->pprev = ((void *)0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hlist_bl_unhashed(const struct hlist_bl_node *h)
{
 return !h->pprev;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct hlist_bl_node *hlist_bl_first(struct hlist_bl_head *h)
{
 return (struct hlist_bl_node *)
  ((unsigned long)h->first & ~1UL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_set_first(struct hlist_bl_head *h,
     struct hlist_bl_node *n)
{
                                                    ;

                        ;
 h->first = (struct hlist_bl_node *)((unsigned long)n | 1UL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hlist_bl_empty(const struct hlist_bl_head *h)
{
 return !((unsigned long)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_194(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(h->first) == sizeof(char) || sizeof(h->first) == sizeof(short) || sizeof(h->first) == sizeof(int) || sizeof(h->first) == sizeof(long)) || sizeof(h->first) == sizeof(long long))) __compiletime_assert_194(); } while (0); (*(const volatile typeof( _Generic((h->first), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (h->first))) *)&(h->first)); }) & ~1UL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_add_head(struct hlist_bl_node *n,
     struct hlist_bl_head *h)
{
 struct hlist_bl_node *first = hlist_bl_first(h);

 n->next = first;
 if (first)
  first->pprev = &n->next;
 n->pprev = &h->first;
 hlist_bl_set_first(h, n);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_add_before(struct hlist_bl_node *n,
           struct hlist_bl_node *next)
{
 struct hlist_bl_node **pprev = next->pprev;

 n->pprev = pprev;
 n->next = next;
 next->pprev = &n->next;


 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_195(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pprev) == sizeof(char) || sizeof(*pprev) == sizeof(short) || sizeof(*pprev) == sizeof(int) || sizeof(*pprev) == sizeof(long)) || sizeof(*pprev) == sizeof(long long))) __compiletime_assert_195(); } while (0); do { *(volatile typeof(*pprev) *)&(*pprev) = ((struct hlist_bl_node *) ((uintptr_t)n | ((uintptr_t)*pprev & 1UL))); } while (0); } while (0);


}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_add_behind(struct hlist_bl_node *n,
           struct hlist_bl_node *prev)
{
 n->next = prev->next;
 n->pprev = &prev->next;
 prev->next = n;

 if (n->next)
  n->next->pprev = &n->next;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __hlist_bl_del(struct hlist_bl_node *n)
{
 struct hlist_bl_node *next = n->next;
 struct hlist_bl_node **pprev = n->pprev;

                                                    ;


 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_196(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pprev) == sizeof(char) || sizeof(*pprev) == sizeof(short) || sizeof(*pprev) == sizeof(int) || sizeof(*pprev) == sizeof(long)) || sizeof(*pprev) == sizeof(long long))) __compiletime_assert_196(); } while (0); do { *(volatile typeof(*pprev) *)&(*pprev) = ((struct hlist_bl_node *) ((unsigned long)next | ((unsigned long)*pprev & 1UL))); } while (0); } while (0);



 if (next)
  next->pprev = pprev;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_del(struct hlist_bl_node *n)
{
 __hlist_bl_del(n);
 n->next = ((void *) 0x100 + 0);
 n->pprev = ((void *) 0x122 + 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_del_init(struct hlist_bl_node *n)
{
 if (!hlist_bl_unhashed(n)) {
  __hlist_bl_del(n);
  INIT_HLIST_BL_NODE(n);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_lock(struct hlist_bl_head *b)
{
 bit_spin_lock(0, (unsigned long *)b);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_unlock(struct hlist_bl_head *b)
{
 __bit_spin_unlock(0, (unsigned long *)b);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hlist_bl_is_locked(struct hlist_bl_head *b)
{
 return bit_spin_is_locked(0, (unsigned long *)b);
}
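[Annotation: the hlist_bl code above is why bit_spin_lock() matters here — bit 0 of `head->first` doubles as the per-bucket lock, hence `bit_spin_lock(0, (unsigned long *)b)` in hlist_bl_lock() and the `& ~1UL` masking in hlist_bl_first(). A minimal sketch of the pointer/lock-bit encoding (nodes are pointer-aligned, so bit 0 is always free):]

```c
#include <assert.h>
#include <stdint.h>

struct hlist_bl_node { struct hlist_bl_node *next, **pprev; };
struct hlist_bl_head { struct hlist_bl_node *first; };

/* Readers must mask the lock bit off; writers must preserve it. */
static struct hlist_bl_node *hlist_bl_first(struct hlist_bl_head *h)
{
    return (struct hlist_bl_node *)((uintptr_t)h->first & ~1UL);
}

static void hlist_bl_set_first(struct hlist_bl_head *h, struct hlist_bl_node *n)
{
    /* Called with the lock held, so bit 0 is known to be set and
     * must stay set until hlist_bl_unlock(). */
    h->first = (struct hlist_bl_node *)((uintptr_t)n | 1UL);
}
```

This buys a per-bucket lock for the dcache hash table without any storage beyond the one head pointer per bucket.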
# 9 "../include/linux/rculist_bl.h" 2


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_set_first_rcu(struct hlist_bl_head *h,
     struct hlist_bl_node *n)
{
                                                    ;

                        ;
 do { uintptr_t _r_a_p__v = (uintptr_t)((struct hlist_bl_node *)((unsigned long)n | 1UL)); ; if (__builtin_constant_p((struct hlist_bl_node *)((unsigned long)n | 1UL)) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_197(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((h->first)) == sizeof(char) || sizeof((h->first)) == sizeof(short) || sizeof((h->first)) == sizeof(int) || sizeof((h->first)) == sizeof(long)) || sizeof((h->first)) == sizeof(long long))) __compiletime_assert_197(); } while (0); do { *(volatile typeof((h->first)) *)&((h->first)) = ((typeof(h->first))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_198(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&h->first) == sizeof(char) || sizeof(*&h->first) == sizeof(short) || sizeof(*&h->first) == sizeof(int) || sizeof(*&h->first) == sizeof(long)) || sizeof(*&h->first) == sizeof(long long))) __compiletime_assert_198(); } while (0); do { *(volatile typeof(*&h->first) *)&(*&h->first) = ((typeof(*((typeof(h->first))_r_a_p__v)) *)((typeof(h->first))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct hlist_bl_node *hlist_bl_first_rcu(struct hlist_bl_head *h)
{
 return (struct hlist_bl_node *)
  ((unsigned long)({ typeof(*(h->first)) *__UNIQUE_ID_rcu199 = (typeof(*(h->first)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_200(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((h->first)) == sizeof(char) || sizeof((h->first)) == sizeof(short) || sizeof((h->first)) == sizeof(int) || sizeof((h->first)) == sizeof(long)) || sizeof((h->first)) == sizeof(long long))) __compiletime_assert_200(); } while (0); (*(const volatile typeof( _Generic(((h->first)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((h->first)))) *)&((h->first))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((hlist_bl_is_locked(h)) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rculist_bl.h", 24, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(h->first)) *)(__UNIQUE_ID_rcu199)); }) & ~1UL);
}
# 46 "../include/linux/rculist_bl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_del_rcu(struct hlist_bl_node *n)
{
 __hlist_bl_del(n);
 n->pprev = ((void *) 0x122 + 0);
}
# 71 "../include/linux/rculist_bl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_bl_add_head_rcu(struct hlist_bl_node *n,
     struct hlist_bl_head *h)
{
 struct hlist_bl_node *first;


 first = hlist_bl_first(h);

 n->next = first;
 if (first)
  first->pprev = &n->next;
 n->pprev = &h->first;


 hlist_bl_set_first_rcu(h, n);
}
# 10 "../include/linux/dcache.h" 2




# 1 "../include/linux/lockref.h" 1
# 25 "../include/linux/lockref.h"
struct lockref {
 union {



  struct {
   spinlock_t lock;
   int count;
  };
 };
};

extern void lockref_get(struct lockref *);
extern int lockref_put_return(struct lockref *);
extern int lockref_get_not_zero(struct lockref *);
extern int lockref_put_not_zero(struct lockref *);
extern int lockref_put_or_lock(struct lockref *);

extern void lockref_mark_dead(struct lockref *);
extern int lockref_get_not_dead(struct lockref *);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __lockref_is_dead(const struct lockref *l)
{
 return ((int)l->count < 0);
}
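[Annotation: struct lockref above keeps the union wrapper even though this config compiles out the cmpxchg fast path; the union exists so that, where supported, lock and count share one 64-bit word and lockref_get*() can try a single compare-exchange before falling back to the spinlock. A userspace sketch of that fast path — the packing layout (low half = lock, high half = count) is illustrative, not the kernel's:]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

struct lockref { _Atomic uint64_t lock_count; };

#define LOCK_MASK 0xffffffffull

/* Bump the count iff the lock is free and the count is nonzero,
 * all in one CAS; otherwise report failure so the caller can take
 * the locked slow path. */
static int lockref_get_not_zero(struct lockref *l)
{
    uint64_t old = atomic_load(&l->lock_count);

    while ((old & LOCK_MASK) == 0 && (old >> 32) != 0) {
        uint64_t new = old + (1ull << 32);   /* count++ */
        if (atomic_compare_exchange_weak(&l->lock_count, &old, new))
            return 1;
        /* `old` was reloaded by the failed CAS; re-test and retry */
    }
    return 0;
}
```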
# 15 "../include/linux/dcache.h" 2
# 1 "../include/linux/stringhash.h" 1






# 1 "../include/linux/hash.h" 1





# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 7 "../include/linux/hash.h" 2
# 60 "../include/linux/hash.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __hash_32_generic(u32 val)
{
 return val * 0x61C88647;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 hash_32(u32 val, unsigned int bits)
{

 return __hash_32_generic(val) >> (32 - bits);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32 hash_64_generic(u64 val, unsigned int bits)
{





 return hash_32((u32)val ^ __hash_32_generic(val >> 32), bits);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 hash_ptr(const void *ptr, unsigned int bits)
{
 return hash_32((unsigned long)ptr, bits);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 hash32_ptr(const void *ptr)
{
 unsigned long val = (unsigned long)ptr;




 return (u32)val;
}
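[Annotation: hash_32() above is Fibonacci (multiplicative) hashing — 0x61C88647 is 2^32 divided by the golden ratio, and because the high bits of the product are the best mixed, the result is taken from the top `bits` bits. Standalone copy of the generic path for reference:]

```c
#include <assert.h>
#include <stdint.h>

/* Multiplicative hashing: multiply by 2^32/phi and keep the top
 * `bits` bits (valid for 1 <= bits <= 32). */
static uint32_t hash_32(uint32_t val, unsigned int bits)
{
    return (val * 0x61C88647u) >> (32 - bits);
}
```

An odd multiplier makes the map a bijection on 32-bit values, so distinct inputs can only collide after the downshift.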
# 8 "../include/linux/stringhash.h" 2
# 42 "../include/linux/stringhash.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
partial_name_hash(unsigned long c, unsigned long prevhash)
{
 return (prevhash + (c << 4) + (c >> 4)) * 11;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int end_name_hash(unsigned long hash)
{
 return hash_32(hash, 32);
}
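[Annotation: the two stringhash.h helpers above are the byte-step and finalization of the dcache name hash; full_name_hash() itself is out of line (declared just below) and additionally takes a salt. A self-contained sketch of how they compose — `name_hash()` here is an unsalted illustration, not the kernel function:]

```c
#include <assert.h>
#include <stdint.h>

/* Per-byte mixing step, as in the header above. */
static unsigned long partial_name_hash(unsigned long c, unsigned long prevhash)
{
    return (prevhash + (c << 4) + (c >> 4)) * 11;
}

static uint32_t hash_32(uint32_t val, unsigned int bits)
{
    return (val * 0x61C88647u) >> (32 - bits);
}

/* Fold the accumulated hash down to 32 well-mixed bits. */
static unsigned int end_name_hash(unsigned long hash)
{
    return hash_32((uint32_t)hash, 32);
}

/* Illustrative, unsalted composition of the two helpers. */
static unsigned int name_hash(const char *name)
{
    unsigned long hash = 0;
    while (*name)
        hash = partial_name_hash((unsigned char)*name++, hash);
    return end_name_hash(hash);
}
```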
# 66 "../include/linux/stringhash.h"
extern unsigned int __attribute__((__pure__)) full_name_hash(const void *salt, const char *, unsigned int);
# 77 "../include/linux/stringhash.h"
extern u64 __attribute__((__pure__)) hashlen_string(const void *salt, const char *name);
# 16 "../include/linux/dcache.h" 2


struct path;
struct file;
struct vfsmount;
# 49 "../include/linux/dcache.h"
struct qstr {
 union {
  struct {
   u32 hash; u32 len;
  };
  u64 hash_len;
 };
 const unsigned char *name;
};



extern const struct qstr empty_name;
extern const struct qstr slash_name;
extern const struct qstr dotdot_name;
# 82 "../include/linux/dcache.h"
struct dentry {

 unsigned int d_flags;
 seqcount_spinlock_t d_seq;
 struct hlist_bl_node d_hash;
 struct dentry *d_parent;
 struct qstr d_name;
 struct inode *d_inode;

 unsigned char d_iname[44];



 const struct dentry_operations *d_op;
 struct super_block *d_sb;
 unsigned long d_time;
 void *d_fsdata;

 struct lockref d_lockref;




 union {
  struct list_head d_lru;
  wait_queue_head_t *d_wait;
 };
 struct hlist_node d_sib;
 struct hlist_head d_children;



 union {
  struct hlist_node d_alias;
  struct hlist_bl_node d_in_lookup_hash;
   struct callback_head d_rcu;
 } d_u;
};







enum dentry_d_lock_class
{
 DENTRY_D_LOCK_NORMAL,
 DENTRY_D_LOCK_NESTED
};

enum d_real_type {
 D_REAL_DATA,
 D_REAL_METADATA,
};

struct dentry_operations {
 int (*d_revalidate)(struct dentry *, unsigned int);
 int (*d_weak_revalidate)(struct dentry *, unsigned int);
 int (*d_hash)(const struct dentry *, struct qstr *);
 int (*d_compare)(const struct dentry *,
   unsigned int, const char *, const struct qstr *);
 int (*d_delete)(const struct dentry *);
 int (*d_init)(struct dentry *);
 void (*d_release)(struct dentry *);
 void (*d_prune)(struct dentry *);
 void (*d_iput)(struct dentry *, struct inode *);
 char *(*d_dname)(struct dentry *, char *, int);
 struct vfsmount *(*d_automount)(struct path *);
 int (*d_manage)(const struct path *, bool);
 struct dentry *(*d_real)(struct dentry *, enum d_real_type type);
} __attribute__((__aligned__((1 << (5)))));
# 223 "../include/linux/dcache.h"
extern seqlock_t rename_lock;




extern void d_instantiate(struct dentry *, struct inode *);
extern void d_instantiate_new(struct dentry *, struct inode *);
extern void __d_drop(struct dentry *dentry);
extern void d_drop(struct dentry *dentry);
extern void d_delete(struct dentry *);
extern void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op);


extern struct dentry * d_alloc(struct dentry *, const struct qstr *);
extern struct dentry * d_alloc_anon(struct super_block *);
extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
     wait_queue_head_t *);
extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
extern bool d_same_name(const struct dentry *dentry, const struct dentry *parent,
   const struct qstr *name);
extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
extern struct dentry *d_find_any_alias(struct inode *inode);
extern struct dentry * d_obtain_alias(struct inode *);
extern struct dentry * d_obtain_root(struct inode *);
extern void shrink_dcache_sb(struct super_block *);
extern void shrink_dcache_parent(struct dentry *);
extern void d_invalidate(struct dentry *);


extern struct dentry * d_make_root(struct inode *);

extern void d_mark_tmpfile(struct file *, struct inode *);
extern void d_tmpfile(struct file *, struct inode *);

extern struct dentry *d_find_alias(struct inode *);
extern void d_prune_aliases(struct inode *);

extern struct dentry *d_find_alias_rcu(struct inode *);


extern int path_has_submounts(const struct path *);




extern void d_rehash(struct dentry *);

extern void d_add(struct dentry *, struct inode *);


extern void d_move(struct dentry *, struct dentry *);
extern void d_exchange(struct dentry *, struct dentry *);
extern struct dentry *d_ancestor(struct dentry *, struct dentry *);

extern struct dentry *d_lookup(const struct dentry *, const struct qstr *);
extern struct dentry *d_hash_and_lookup(struct dentry *, struct qstr *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned d_count(const struct dentry *dentry)
{
 return dentry->d_lockref.count;
}

ino_t d_parent_ino(struct dentry *dentry);




extern __attribute__((__format__(printf, 3, 4)))
char *dynamic_dname(char *, int, const char *, ...);

extern char *__d_path(const struct path *, const struct path *, char *, int);
extern char *d_absolute_path(const struct path *, char *, int);
extern char *d_path(const struct path *, char *, int);
extern char *dentry_path_raw(const struct dentry *, char *, int);
extern char *dentry_path(const struct dentry *, char *, int);
# 312 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *dget_dlock(struct dentry *dentry)
{
 dentry->d_lockref.count++;
 return dentry;
}
# 337 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *dget(struct dentry *dentry)
{
 if (dentry)
  lockref_get(&dentry->d_lockref);
 return dentry;
}

extern struct dentry *dget_parent(struct dentry *dentry);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int d_unhashed(const struct dentry *dentry)
{
 return hlist_bl_unhashed(&dentry->d_hash);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int d_unlinked(const struct dentry *dentry)
{
 return d_unhashed(dentry) && !((dentry) == (dentry)->d_parent);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cant_mount(const struct dentry *dentry)
{
 return (dentry->d_flags & ((((1UL))) << (8)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dont_mount(struct dentry *dentry)
{
 spin_lock(&dentry->d_lockref.lock);
 dentry->d_flags |= ((((1UL))) << (8));
 spin_unlock(&dentry->d_lockref.lock);
}

extern void __d_lookup_unhash_wake(struct dentry *dentry);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int d_in_lookup(const struct dentry *dentry)
{
 return dentry->d_flags & ((((1UL))) << (28));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void d_lookup_done(struct dentry *dentry)
{
 if (__builtin_expect(!!(d_in_lookup(dentry)), 0))
  __d_lookup_unhash_wake(dentry);
}

extern void dput(struct dentry *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_managed(const struct dentry *dentry)
{
 return dentry->d_flags & (((((1UL))) << (16))|((((1UL))) << (17))|((((1UL))) << (18)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_mountpoint(const struct dentry *dentry)
{
 return dentry->d_flags & ((((1UL))) << (16));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned __d_entry_type(const struct dentry *dentry)
{
 return dentry->d_flags & (7 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_miss(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (0 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_whiteout(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (1 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_can_lookup(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (2 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_autodir(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (3 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_dir(const struct dentry *dentry)
{
 return d_can_lookup(dentry) || d_is_autodir(dentry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_symlink(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (6 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_reg(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (4 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_special(const struct dentry *dentry)
{
 return __d_entry_type(dentry) == (5 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_file(const struct dentry *dentry)
{
 return d_is_reg(dentry) || d_is_special(dentry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_negative(const struct dentry *dentry)
{

 return d_is_miss(dentry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_flags_negative(unsigned flags)
{
 return (flags & (7 << 20)) == (0 << 20);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_is_positive(const struct dentry *dentry)
{
 return !d_is_negative(dentry);
}
# 483 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_really_is_negative(const struct dentry *dentry)
{
 return dentry->d_inode == ((void *)0);
}
# 501 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool d_really_is_positive(const struct dentry *dentry)
{
 return dentry->d_inode != ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int simple_positive(const struct dentry *dentry)
{
 return d_really_is_positive(dentry) && !d_unhashed(dentry);
}

extern int sysctl_vfs_cache_pressure;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vfs_pressure_ratio(unsigned long val)
{
 return ({ typeof(val) x_ = (val); typeof(sysctl_vfs_cache_pressure) n_ = (sysctl_vfs_cache_pressure); typeof(100) d_ = (100); typeof(x_) q = x_ / d_; typeof(x_) r = x_ % d_; q * n_ + r * n_ / d_; });
}
# 525 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *d_inode(const struct dentry *dentry)
{
 return dentry->d_inode;
}
# 537 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *d_inode_rcu(const struct dentry *dentry)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_201(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(dentry->d_inode) == sizeof(char) || sizeof(dentry->d_inode) == sizeof(short) || sizeof(dentry->d_inode) == sizeof(int) || sizeof(dentry->d_inode) == sizeof(long)) || sizeof(dentry->d_inode) == sizeof(long long))) __compiletime_assert_201(); } while (0); (*(const volatile typeof( _Generic((dentry->d_inode), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (dentry->d_inode))) *)&(dentry->d_inode)); });
}
# 552 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *d_backing_inode(const struct dentry *upper)
{
 struct inode *inode = upper->d_inode;

 return inode;
}
# 569 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *d_real(struct dentry *dentry, enum d_real_type type)
{
 if (__builtin_expect(!!(dentry->d_flags & ((((1UL))) << (26))), 0))
  return dentry->d_op->d_real(dentry, type);
 else
  return dentry;
}
# 584 "../include/linux/dcache.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *d_real_inode(const struct dentry *dentry)
{

 return d_inode(d_real((struct dentry *) dentry, D_REAL_DATA));
}

struct name_snapshot {
 struct qstr name;
 unsigned char inline_name[44];
};
void take_dentry_name_snapshot(struct name_snapshot *, struct dentry *);
void release_dentry_name_snapshot(struct name_snapshot *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *d_first_child(const struct dentry *dentry)
{
 return ({ typeof(dentry->d_children.first) ____ptr = (dentry->d_children.first); ____ptr ? ({ void *__mptr = (void *)(____ptr); _Static_assert(__builtin_types_compatible_p(typeof(*(____ptr)), typeof(((struct dentry *)0)->d_sib)) || __builtin_types_compatible_p(typeof(*(____ptr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct dentry *)(__mptr - __builtin_offsetof(struct dentry, d_sib))); }) : ((void *)0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *d_next_sibling(const struct dentry *dentry)
{
 return ({ typeof(dentry->d_sib.next) ____ptr = (dentry->d_sib.next); ____ptr ? ({ void *__mptr = (void *)(____ptr); _Static_assert(__builtin_types_compatible_p(typeof(*(____ptr)), typeof(((struct dentry *)0)->d_sib)) || __builtin_types_compatible_p(typeof(*(____ptr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct dentry *)(__mptr - __builtin_offsetof(struct dentry, d_sib))); }) : ((void *)0); });
}
# 9 "../include/linux/fs.h" 2
# 1 "../include/linux/path.h" 1




struct dentry;
struct vfsmount;

struct path {
 struct vfsmount *mnt;
 struct dentry *dentry;
} ;

extern void path_get(const struct path *);
extern void path_put(const struct path *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int path_equal(const struct path *path1, const struct path *path2)
{
 return path1->mnt == path2->mnt && path1->dentry == path2->dentry;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void path_put_init(struct path *path)
{
 path_put(path);
 *path = (struct path) { };
}
# 10 "../include/linux/fs.h" 2



# 1 "../include/linux/list_lru.h" 1
# 13 "../include/linux/list_lru.h"
# 1 "../include/linux/shrinker.h" 1
# 16 "../include/linux/shrinker.h"
struct shrinker_info_unit {
 atomic_long_t nr_deferred[32];
 unsigned long map[(((32) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
};

struct shrinker_info {
 struct callback_head rcu;
 int map_nr_max;
 struct shrinker_info_unit *unit[];
};
# 34 "../include/linux/shrinker.h"
struct shrink_control {
 gfp_t gfp_mask;


 int nid;






 unsigned long nr_to_scan;






 unsigned long nr_scanned;


 struct mem_cgroup *memcg;
};
# 82 "../include/linux/shrinker.h"
struct shrinker {
 unsigned long (*count_objects)(struct shrinker *,
           struct shrink_control *sc);
 unsigned long (*scan_objects)(struct shrinker *,
          struct shrink_control *sc);

 long batch;
 int seeks;
 unsigned flags;
# 99 "../include/linux/shrinker.h"
 refcount_t refcount;
 struct completion done;
 struct callback_head rcu;

 void *private_data;


 struct list_head list;





 int debugfs_id;
 const char *name;
 struct dentry *debugfs_entry;


 atomic_long_t *nr_deferred;
};
# 134 "../include/linux/shrinker.h"
__attribute__((__format__(printf, 2, 3)))
struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...);
void shrinker_register(struct shrinker *shrinker);
void shrinker_free(struct shrinker *shrinker);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool shrinker_try_get(struct shrinker *shrinker)
{
 return refcount_inc_not_zero(&shrinker->refcount);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void shrinker_put(struct shrinker *shrinker)
{
 if (refcount_dec_and_test(&shrinker->refcount))
  complete(&shrinker->done);
}


extern int __attribute__((__format__(printf, 2, 3))) shrinker_debugfs_rename(struct shrinker *shrinker,
        const char *fmt, ...);
# 14 "../include/linux/list_lru.h" 2


struct mem_cgroup;


enum lru_status {
 LRU_REMOVED,
 LRU_REMOVED_RETRY,

 LRU_ROTATE,
 LRU_SKIP,
 LRU_RETRY,

 LRU_STOP,

};

struct list_lru_one {
 struct list_head list;

 long nr_items;
};

struct list_lru_memcg {
 struct callback_head rcu;

 struct list_lru_one node[];
};

struct list_lru_node {

 spinlock_t lock;

 struct list_lru_one lru;
 long nr_items;
} ;

struct list_lru {
 struct list_lru_node *node;






};

void list_lru_destroy(struct list_lru *lru);
int __list_lru_init(struct list_lru *lru, bool memcg_aware,
      struct lock_class_key *key, struct shrinker *shrinker);






int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
    gfp_t gfp);
void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
# 92 "../include/linux/list_lru.h"
bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
      struct mem_cgroup *memcg);
# 106 "../include/linux/list_lru.h"
bool list_lru_add_obj(struct list_lru *lru, struct list_head *item);
# 121 "../include/linux/list_lru.h"
bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
      struct mem_cgroup *memcg);
# 135 "../include/linux/list_lru.h"
bool list_lru_del_obj(struct list_lru *lru, struct list_head *item);
# 149 "../include/linux/list_lru.h"
unsigned long list_lru_count_one(struct list_lru *lru,
     int nid, struct mem_cgroup *memcg);
unsigned long list_lru_count_node(struct list_lru *lru, int nid);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long list_lru_shrink_count(struct list_lru *lru,
        struct shrink_control *sc)
{
 return list_lru_count_one(lru, sc->nid, sc->memcg);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long list_lru_count(struct list_lru *lru)
{
 long count = 0;
 int nid;

 for ( (nid) = 0; (nid) == 0; (nid) = 1)
  count += list_lru_count_node(lru, nid);

 return count;
}

void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
      struct list_head *head);

typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
  struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
# 199 "../include/linux/list_lru.h"
unsigned long list_lru_walk_one(struct list_lru *lru,
    int nid, struct mem_cgroup *memcg,
    list_lru_walk_cb isolate, void *cb_arg,
    unsigned long *nr_to_walk);
# 216 "../include/linux/list_lru.h"
unsigned long list_lru_walk_one_irq(struct list_lru *lru,
        int nid, struct mem_cgroup *memcg,
        list_lru_walk_cb isolate, void *cb_arg,
        unsigned long *nr_to_walk);
unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
     list_lru_walk_cb isolate, void *cb_arg,
     unsigned long *nr_to_walk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
       list_lru_walk_cb isolate, void *cb_arg)
{
 return list_lru_walk_one(lru, sc->nid, sc->memcg, isolate, cb_arg,
     &sc->nr_to_scan);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
list_lru_shrink_walk_irq(struct list_lru *lru, struct shrink_control *sc,
    list_lru_walk_cb isolate, void *cb_arg)
{
 return list_lru_walk_one_irq(lru, sc->nid, sc->memcg, isolate, cb_arg,
         &sc->nr_to_scan);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate,
       void *cb_arg, unsigned long nr_to_walk)
{
 long isolated = 0;
 int nid;

 for ( (nid) = 0; (nid) == 0; (nid) = 1) {
  isolated += list_lru_walk_node(lru, nid, isolate,
            cb_arg, &nr_to_walk);
  if (nr_to_walk <= 0)
   break;
 }
 return isolated;
}
# 14 "../include/linux/fs.h" 2





# 1 "../include/linux/pid.h" 1
# 50 "../include/linux/pid.h"
struct upid {
 int nr;
 struct pid_namespace *ns;
};

struct pid
{
 refcount_t count;
 unsigned int level;
 spinlock_t lock;
 struct dentry *stashed;
 u64 ino;

 struct hlist_head tasks[PIDTYPE_MAX];
 struct hlist_head inodes;

 wait_queue_head_t wait_pidfd;
 struct callback_head rcu;
 struct upid numbers[];
};

extern struct pid init_struct_pid;

struct file;

struct pid *pidfd_pid(const struct file *file);
struct pid *pidfd_get_pid(unsigned int fd, unsigned int *flags);
struct task_struct *pidfd_get_task(int pidfd, unsigned int *flags);
int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret);
void do_notify_pidfd(struct task_struct *task);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid *get_pid(struct pid *pid)
{
 if (pid)
  refcount_inc(&pid->count);
 return pid;
}

extern void put_pid(struct pid *pid);
extern struct task_struct *pid_task(struct pid *pid, enum pid_type);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pid_has_task(struct pid *pid, enum pid_type type)
{
 return !hlist_empty(&pid->tasks[type]);
}
extern struct task_struct *get_pid_task(struct pid *pid, enum pid_type);

extern struct pid *get_task_pid(struct task_struct *task, enum pid_type type);




extern void attach_pid(struct task_struct *task, enum pid_type);
extern void detach_pid(struct task_struct *task, enum pid_type);
extern void change_pid(struct task_struct *task, enum pid_type,
   struct pid *pid);
extern void exchange_tids(struct task_struct *task, struct task_struct *old);
extern void transfer_pid(struct task_struct *old, struct task_struct *new,
    enum pid_type);

extern int pid_max;
extern int pid_max_min, pid_max_max;
# 121 "../include/linux/pid.h"
extern struct pid *find_pid_ns(int nr, struct pid_namespace *ns);
extern struct pid *find_vpid(int nr);




extern struct pid *find_get_pid(int nr);
extern struct pid *find_ge_pid(int nr, struct pid_namespace *);

extern struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
        size_t set_tid_size);
extern void free_pid(struct pid *pid);
extern void disable_pid_allocation(struct pid_namespace *ns);
# 145 "../include/linux/pid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid_namespace *ns_of_pid(struct pid *pid)
{
 struct pid_namespace *ns = ((void *)0);
 if (pid)
  ns = pid->numbers[pid->level].ns;
 return ns;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_child_reaper(struct pid *pid)
{
 return pid->numbers[pid->level].nr == 1;
}
# 175 "../include/linux/pid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t pid_nr(struct pid *pid)
{
 pid_t nr = 0;
 if (pid)
  nr = pid->numbers[0].nr;
 return nr;
}

pid_t pid_nr_ns(struct pid *pid, struct pid_namespace *ns);
pid_t pid_vnr(struct pid *pid);
# 212 "../include/linux/pid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid *task_pid(struct task_struct *task)
{
 return task->thread_pid;
}
# 228 "../include/linux/pid.h"
pid_t __task_pid_nr_ns(struct task_struct *task, enum pid_type type, struct pid_namespace *ns);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pid_nr(struct task_struct *tsk)
{
 return tsk->pid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_PID, ns);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pid_vnr(struct task_struct *tsk)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_PID, ((void *)0));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_tgid_nr(struct task_struct *tsk)
{
 return tsk->tgid;
}
# 261 "../include/linux/pid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pid_alive(const struct task_struct *p)
{
 return p->thread_pid != ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pgrp_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_PGID, ns);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pgrp_vnr(struct task_struct *tsk)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_PGID, ((void *)0));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_session_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_SID, ns);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_session_vnr(struct task_struct *tsk)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_SID, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_TGID, ns);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_tgid_vnr(struct task_struct *tsk)
{
 return __task_pid_nr_ns(tsk, PIDTYPE_TGID, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns)
{
 pid_t pid = 0;

 rcu_read_lock();
 if (pid_alive(tsk))
  pid = task_tgid_nr_ns(({ typeof(*(tsk->real_parent)) *__UNIQUE_ID_rcu202 = (typeof(*(tsk->real_parent)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_203(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((tsk->real_parent)) == sizeof(char) || sizeof((tsk->real_parent)) == sizeof(short) || sizeof((tsk->real_parent)) == sizeof(int) || sizeof((tsk->real_parent)) == sizeof(long)) || sizeof((tsk->real_parent)) == sizeof(long long))) __compiletime_assert_203(); } while (0); (*(const volatile typeof( _Generic(((tsk->real_parent)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((tsk->real_parent)))) *)&((tsk->real_parent))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/pid.h", 303, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(tsk->real_parent)) *)(__UNIQUE_ID_rcu202)); }), ns);
 rcu_read_unlock();

 return pid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_ppid_nr(const struct task_struct *tsk)
{
 return task_ppid_nr_ns(tsk, &init_pid_ns);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pid_t task_pgrp_nr(struct task_struct *tsk)
{
 return task_pgrp_nr_ns(tsk, &init_pid_ns);
}
# 329 "../include/linux/pid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_global_init(struct task_struct *tsk)
{
 return task_tgid_nr(tsk) == 1;
}
# 20 "../include/linux/fs.h" 2




# 1 "../include/linux/capability.h" 1
# 16 "../include/linux/capability.h"
# 1 "../include/uapi/linux/capability.h" 1
# 39 "../include/uapi/linux/capability.h"
typedef struct __user_cap_header_struct {
 __u32 version;
 int pid;
} *cap_user_header_t;

struct __user_cap_data_struct {
        __u32 effective;
        __u32 permitted;
        __u32 inheritable;
};
typedef struct __user_cap_data_struct *cap_user_data_t;
# 73 "../include/uapi/linux/capability.h"
struct vfs_cap_data {
 __le32 magic_etc;
 struct {
  __le32 permitted;
  __le32 inheritable;
 } data[2];
};




struct vfs_ns_cap_data {
 __le32 magic_etc;
 struct {
  __le32 permitted;
  __le32 inheritable;
 } data[2];
 __le32 rootid;
};
# 17 "../include/linux/capability.h" 2





extern int file_caps_enabled;

typedef struct { u64 val; } kernel_cap_t;


struct cpu_vfs_cap_data {
 __u32 magic_etc;
 kuid_t rootid;
 kernel_cap_t permitted;
 kernel_cap_t inheritable;
};




struct file;
struct inode;
struct dentry;
struct task_struct;
struct user_namespace;
struct mnt_idmap;
# 77 "../include/linux/capability.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_combine(const kernel_cap_t a,
           const kernel_cap_t b)
{
 return (kernel_cap_t) { a.val | b.val };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_intersect(const kernel_cap_t a,
      const kernel_cap_t b)
{
 return (kernel_cap_t) { a.val & b.val };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_drop(const kernel_cap_t a,
        const kernel_cap_t drop)
{
 return (kernel_cap_t) { a.val &~ drop.val };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cap_isclear(const kernel_cap_t a)
{
 return !a.val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cap_isidentical(const kernel_cap_t a, const kernel_cap_t b)
{
 return a.val == b.val;
}
# 112 "../include/linux/capability.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cap_issubset(const kernel_cap_t a, const kernel_cap_t set)
{
 return !(a.val & ~set.val);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_drop_fs_set(const kernel_cap_t a)
{
 return cap_drop(a, ((kernel_cap_t) { (((((1ULL))) << (0)) | ((((1ULL))) << (27)) | ((((1ULL))) << (1)) | ((((1ULL))) << (2)) | ((((1ULL))) << (3)) | ((((1ULL))) << (4)) | ((((1ULL))) << (32))) | ((((1ULL))) << (9)) }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_raise_fs_set(const kernel_cap_t a,
         const kernel_cap_t permitted)
{
 return cap_combine(a, cap_intersect(permitted, ((kernel_cap_t) { (((((1ULL))) << (0)) | ((((1ULL))) << (27)) | ((((1ULL))) << (1)) | ((((1ULL))) << (2)) | ((((1ULL))) << (3)) | ((((1ULL))) << (4)) | ((((1ULL))) << (32))) | ((((1ULL))) << (9)) })));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_drop_nfsd_set(const kernel_cap_t a)
{
 return cap_drop(a, ((kernel_cap_t) { (((((1ULL))) << (0)) | ((((1ULL))) << (27)) | ((((1ULL))) << (1)) | ((((1ULL))) << (2)) | ((((1ULL))) << (3)) | ((((1ULL))) << (4)) | ((((1ULL))) << (32))) | ((((1ULL))) << (24)) }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kernel_cap_t cap_raise_nfsd_set(const kernel_cap_t a,
           const kernel_cap_t permitted)
{
 return cap_combine(a, cap_intersect(permitted, ((kernel_cap_t) { (((((1ULL))) << (0)) | ((((1ULL))) << (27)) | ((((1ULL))) << (1)) | ((((1ULL))) << (2)) | ((((1ULL))) << (3)) | ((((1ULL))) << (4)) | ((((1ULL))) << (32))) | ((((1ULL))) << (24)) })));
}


extern bool has_capability(struct task_struct *t, int cap);
extern bool has_ns_capability(struct task_struct *t,
         struct user_namespace *ns, int cap);
extern bool has_capability_noaudit(struct task_struct *t, int cap);
extern bool has_ns_capability_noaudit(struct task_struct *t,
          struct user_namespace *ns, int cap);
extern bool capable(int cap);
extern bool ns_capable(struct user_namespace *ns, int cap);
extern bool ns_capable_noaudit(struct user_namespace *ns, int cap);
extern bool ns_capable_setid(struct user_namespace *ns, int cap);
# 188 "../include/linux/capability.h"
bool privileged_wrt_inode_uidgid(struct user_namespace *ns,
     struct mnt_idmap *idmap,
     const struct inode *inode);
bool capable_wrt_inode_uidgid(struct mnt_idmap *idmap,
         const struct inode *inode, int cap);
extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);
extern bool ptracer_capable(struct task_struct *tsk, struct user_namespace *ns);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool perfmon_capable(void)
{
 return capable(38) || capable(21);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_capable(void)
{
 return capable(39) || capable(21);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool checkpoint_restore_ns_capable(struct user_namespace *ns)
{
 return ns_capable(ns, 40) ||
  ns_capable(ns, 21);
}


int get_vfs_caps_from_disk(struct mnt_idmap *idmap,
      const struct dentry *dentry,
      struct cpu_vfs_cap_data *cpu_caps);

int cap_convert_nscap(struct mnt_idmap *idmap, struct dentry *dentry,
        const void **ivalue, size_t size);
# 25 "../include/linux/fs.h" 2
# 1 "../include/linux/semaphore.h" 1
# 15 "../include/linux/semaphore.h"
struct semaphore {
 raw_spinlock_t lock;
 unsigned int count;
 struct list_head wait_list;
};
# 37 "../include/linux/semaphore.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sema_init(struct semaphore *sem, int val)
{
 static struct lock_class_key __key;
 *sem = (struct semaphore) { .lock = (raw_spinlock_t) { .raw_lock = { 1 }, .magic = 0xdead4ead, .owner_cpu = -1, .owner = ((void *)-1L), .dep_map = { .name = "(*sem).lock", .wait_type_inner = LD_WAIT_SPIN, } }, .count = val, .wait_list = { &((*sem).wait_list), &((*sem).wait_list) }, };
 lockdep_init_map(&sem->lock.dep_map, "semaphore->lock", &__key, 0);
}

extern void down(struct semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_interruptible(struct semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_killable(struct semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_trylock(struct semaphore *sem);
extern int __attribute__((__warn_unused_result__)) down_timeout(struct semaphore *sem, long jiffies);
extern void up(struct semaphore *sem);
# 26 "../include/linux/fs.h" 2
# 1 "../include/linux/fcntl.h" 1





# 1 "../include/uapi/linux/fcntl.h" 1




# 1 "./arch/hexagon/include/generated/uapi/asm/fcntl.h" 1
# 1 "../include/uapi/asm-generic/fcntl.h" 1
# 155 "../include/uapi/asm-generic/fcntl.h"
struct f_owner_ex {
 int type;
 __kernel_pid_t pid;
};
# 195 "../include/uapi/asm-generic/fcntl.h"
struct flock {
 short l_type;
 short l_whence;
 __kernel_off_t l_start;
 __kernel_off_t l_len;
 __kernel_pid_t l_pid;






};

struct flock64 {
 short l_type;
 short l_whence;
 __kernel_loff_t l_start;
 __kernel_loff_t l_len;
 __kernel_pid_t l_pid;



};
# 2 "./arch/hexagon/include/generated/uapi/asm/fcntl.h" 2
# 6 "../include/uapi/linux/fcntl.h" 2
# 1 "../include/uapi/linux/openat2.h" 1
# 19 "../include/uapi/linux/openat2.h"
struct open_how {
 __u64 flags;
 __u64 mode;
 __u64 resolve;
};
# 7 "../include/uapi/linux/fcntl.h" 2
# 7 "../include/linux/fcntl.h" 2
# 27 "../include/linux/fs.h" 2



# 1 "../include/linux/migrate_mode.h" 1
# 11 "../include/linux/migrate_mode.h"
enum migrate_mode {
 MIGRATE_ASYNC,
 MIGRATE_SYNC_LIGHT,
 MIGRATE_SYNC,
};

enum migrate_reason {
 MR_COMPACTION,
 MR_MEMORY_FAILURE,
 MR_MEMORY_HOTPLUG,
 MR_SYSCALL,
 MR_MEMPOLICY_MBIND,
 MR_NUMA_MISPLACED,
 MR_CONTIG_RANGE,
 MR_LONGTERM_PIN,
 MR_DEMOTION,
 MR_DAMON,
 MR_TYPES
};
# 31 "../include/linux/fs.h" 2


# 1 "../include/linux/percpu-rwsem.h" 1






# 1 "../include/linux/rcuwait.h" 1





# 1 "../include/linux/sched/signal.h" 1





# 1 "../include/linux/signal.h" 1
# 10 "../include/linux/signal.h"
struct task_struct;


extern int print_fatal_signals;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_siginfo(kernel_siginfo_t *to,
    const kernel_siginfo_t *from)
{
 memcpy(to, from, sizeof(*to));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_siginfo(kernel_siginfo_t *info)
{
 memset(info, 0, sizeof(*info));
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_siginfo_to_external(siginfo_t *to,
         const kernel_siginfo_t *from)
{
 memcpy(to, from, sizeof(*from));
 memset(((char *)to) + sizeof(struct kernel_siginfo), 0,
  (sizeof(struct siginfo) - sizeof(struct kernel_siginfo)));
}

int copy_siginfo_to_user(siginfo_t *to, const kernel_siginfo_t *from);
int copy_siginfo_from_user(kernel_siginfo_t *to, const siginfo_t *from);

enum siginfo_layout {
 SIL_KILL,
 SIL_TIMER,
 SIL_POLL,
 SIL_FAULT,
 SIL_FAULT_TRAPNO,
 SIL_FAULT_MCEERR,
 SIL_FAULT_BNDERR,
 SIL_FAULT_PKUERR,
 SIL_FAULT_PERF_EVENT,
 SIL_CHLD,
 SIL_RT,
 SIL_SYS,
};

enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
# 65 "../include/linux/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigaddset(sigset_t *set, int _sig)
{
 unsigned long sig = _sig - 1;
 if ((64 / (8 * 4)) == 1)
  set->sig[0] |= 1UL << sig;
 else
  set->sig[sig / (8 * 4)] |= 1UL << (sig % (8 * 4));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigdelset(sigset_t *set, int _sig)
{
 unsigned long sig = _sig - 1;
 if ((64 / (8 * 4)) == 1)
  set->sig[0] &= ~(1UL << sig);
 else
  set->sig[sig / (8 * 4)] &= ~(1UL << (sig % (8 * 4)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sigismember(sigset_t *set, int _sig)
{
 unsigned long sig = _sig - 1;
 if ((64 / (8 * 4)) == 1)
  return 1 & (set->sig[0] >> sig);
 else
  return 1 & (set->sig[sig / (8 * 4)] >> (sig % (8 * 4)));
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sigisemptyset(sigset_t *set)
{
 switch ((64 / (8 * 4))) {
 case 4:
  return (set->sig[3] | set->sig[2] |
   set->sig[1] | set->sig[0]) == 0;
 case 2:
  return (set->sig[1] | set->sig[0]) == 0;
 case 1:
  return set->sig[0] == 0;
 default:
  do { __attribute__((__noreturn__)) extern void __compiletime_assert_204(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_204(); } while (0);
  return 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sigequalsets(const sigset_t *set1, const sigset_t *set2)
{
 switch ((64 / (8 * 4))) {
 case 4:
  return (set1->sig[3] == set2->sig[3]) &&
   (set1->sig[2] == set2->sig[2]) &&
   (set1->sig[1] == set2->sig[1]) &&
   (set1->sig[0] == set2->sig[0]);
 case 2:
  return (set1->sig[1] == set2->sig[1]) &&
   (set1->sig[0] == set2->sig[0]);
 case 1:
  return set1->sig[0] == set2->sig[0];
 }
 return 0;
}
# 157 "../include/linux/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigorsets(sigset_t *r, const sigset_t *a, const sigset_t *b) { unsigned long a0, a1, a2, a3, b0, b1, b2, b3; switch ((64 / (8 * 4))) { case 4: a3 = a->sig[3]; a2 = a->sig[2]; b3 = b->sig[3]; b2 = b->sig[2]; r->sig[3] = ((a3) | (b3)); r->sig[2] = ((a2) | (b2)); __attribute__((__fallthrough__)); case 2: a1 = a->sig[1]; b1 = b->sig[1]; r->sig[1] = ((a1) | (b1)); __attribute__((__fallthrough__)); case 1: a0 = a->sig[0]; b0 = b->sig[0]; r->sig[0] = ((a0) | (b0)); break; default: do { __attribute__((__noreturn__)) extern void __compiletime_assert_205(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_205(); } while (0); } }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigandsets(sigset_t *r, const sigset_t *a, const sigset_t *b) { unsigned long a0, a1, a2, a3, b0, b1, b2, b3; switch ((64 / (8 * 4))) { case 4: a3 = a->sig[3]; a2 = a->sig[2]; b3 = b->sig[3]; b2 = b->sig[2]; r->sig[3] = ((a3) & (b3)); r->sig[2] = ((a2) & (b2)); __attribute__((__fallthrough__)); case 2: a1 = a->sig[1]; b1 = b->sig[1]; r->sig[1] = ((a1) & (b1)); __attribute__((__fallthrough__)); case 1: a0 = a->sig[0]; b0 = b->sig[0]; r->sig[0] = ((a0) & (b0)); break; default: do { __attribute__((__noreturn__)) extern void __compiletime_assert_206(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_206(); } while (0); } }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigandnsets(sigset_t *r, const sigset_t *a, const sigset_t *b) { unsigned long a0, a1, a2, a3, b0, b1, b2, b3; switch ((64 / (8 * 4))) { case 4: a3 = a->sig[3]; a2 = a->sig[2]; b3 = b->sig[3]; b2 = b->sig[2]; r->sig[3] = ((a3) & ~(b3)); r->sig[2] = ((a2) & ~(b2)); __attribute__((__fallthrough__)); case 2: a1 = a->sig[1]; b1 = b->sig[1]; r->sig[1] = ((a1) & ~(b1)); __attribute__((__fallthrough__)); case 1: a0 = a->sig[0]; b0 = b->sig[0]; r->sig[0] = ((a0) & ~(b0)); break; default: do { __attribute__((__noreturn__)) extern void __compiletime_assert_207(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_207(); } while (0); } }
# 187 "../include/linux/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void signotset(sigset_t *set) { switch ((64 / (8 * 4))) { case 4: set->sig[3] = (~(set->sig[3])); set->sig[2] = (~(set->sig[2])); __attribute__((__fallthrough__)); case 2: set->sig[1] = (~(set->sig[1])); __attribute__((__fallthrough__)); case 1: set->sig[0] = (~(set->sig[0])); break; default: do { __attribute__((__noreturn__)) extern void __compiletime_assert_208(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_208(); } while (0); } }




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigemptyset(sigset_t *set)
{
 switch ((64 / (8 * 4))) {
 default:
  memset(set, 0, sizeof(sigset_t));
  break;
 case 2: set->sig[1] = 0;
  __attribute__((__fallthrough__));
 case 1: set->sig[0] = 0;
  break;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigfillset(sigset_t *set)
{
 switch ((64 / (8 * 4))) {
 default:
  memset(set, -1, sizeof(sigset_t));
  break;
 case 2: set->sig[1] = -1;
  __attribute__((__fallthrough__));
 case 1: set->sig[0] = -1;
  break;
 }
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigaddsetmask(sigset_t *set, unsigned long mask)
{
 set->sig[0] |= mask;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sigdelsetmask(sigset_t *set, unsigned long mask)
{
 set->sig[0] &= ~mask;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sigtestsetmask(sigset_t *set, unsigned long mask)
{
 return (set->sig[0] & mask) != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void siginitset(sigset_t *set, unsigned long mask)
{
 set->sig[0] = mask;
 switch ((64 / (8 * 4))) {
 default:
  memset(&set->sig[1], 0, sizeof(long)*((64 / (8 * 4))-1));
  break;
 case 2: set->sig[1] = 0;
  break;
 case 1: ;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void siginitsetinv(sigset_t *set, unsigned long mask)
{
 set->sig[0] = ~mask;
 switch ((64 / (8 * 4))) {
 default:
  memset(&set->sig[1], -1, sizeof(long)*((64 / (8 * 4))-1));
  break;
 case 2: set->sig[1] = -1;
  break;
 case 1: ;
 }
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_sigpending(struct sigpending *sig)
{
 sigemptyset(&sig->signal);
 INIT_LIST_HEAD(&sig->list);
}

extern void flush_sigqueue(struct sigpending *queue);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int valid_signal(unsigned long sig)
{
 return sig <= 64 ? 1 : 0;
}

struct timespec;
struct pt_regs;
enum pid_type;

extern int next_signal(struct sigpending *pending, sigset_t *mask);
extern int do_send_sig_info(int sig, struct kernel_siginfo *info,
    struct task_struct *p, enum pid_type type);
extern int group_send_sig_info(int sig, struct kernel_siginfo *info,
          struct task_struct *p, enum pid_type type);
extern int send_signal_locked(int sig, struct kernel_siginfo *info,
         struct task_struct *p, enum pid_type type);
extern int sigprocmask(int, sigset_t *, sigset_t *);
extern void set_current_blocked(sigset_t *);
extern void __set_current_blocked(const sigset_t *);
extern int show_unhandled_signals;

extern bool get_signal(struct ksignal *ksig);
extern void signal_setup_done(int failed, struct ksignal *ksig, int stepping);
extern void exit_signals(struct task_struct *tsk);
extern void kernel_sigaction(int, __sighandler_t);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void allow_signal(int sig)
{





 kernel_sigaction(sig, (( __sighandler_t)2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void allow_kernel_signal(int sig)
{





 kernel_sigaction(sig, (( __sighandler_t)3));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void disallow_signal(int sig)
{
 kernel_sigaction(sig, (( __sighandler_t)1));
}

extern struct kmem_cache *sighand_cachep;

extern bool unhandled_signal(struct task_struct *tsk, int sig);
# 455 "../include/linux/signal.h"
void signals_init(void);

int restore_altstack(const stack_t *);
int __save_altstack(stack_t *, unsigned long);
# 471 "../include/linux/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sigaltstack_size_valid(size_t size) { return true; }



struct seq_file;
extern void render_sigset_t(struct seq_file *, const char *, sigset_t *);
# 485 "../include/linux/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *arch_untagged_si_addr(void *addr,
       unsigned long sig,
       unsigned long si_code)
{
 return addr;
}
# 7 "../include/linux/sched/signal.h" 2

# 1 "../include/linux/sched/jobctl.h" 1






struct task_struct;
# 43 "../include/linux/sched/jobctl.h"
extern bool task_set_jobctl_pending(struct task_struct *task, unsigned long mask);
extern void task_clear_jobctl_trapping(struct task_struct *task);
extern void task_clear_jobctl_pending(struct task_struct *task, unsigned long mask);
# 9 "../include/linux/sched/signal.h" 2
# 1 "../include/linux/sched/task.h" 1
# 13 "../include/linux/sched/task.h"
# 1 "../include/linux/uaccess.h" 1




# 1 "../include/linux/fault-inject-usercopy.h" 1
# 18 "../include/linux/fault-inject-usercopy.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool should_fail_usercopy(void) { return false; }
# 6 "../include/linux/uaccess.h" 2


# 1 "../include/linux/nospec.h" 1
# 10 "../include/linux/nospec.h"
# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 11 "../include/linux/nospec.h" 2

struct task_struct;
# 28 "../include/linux/nospec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long array_index_mask_nospec(unsigned long index,
          unsigned long size)
{





 __asm__ ("" : "=r" (index) : "0" (index));
 return ~(long)(index | (size - 1UL - index)) >> (32 - 1);
}
# 68 "../include/linux/nospec.h"
int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which);
int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
        unsigned long ctrl);

void arch_seccomp_spec_mitigate(struct task_struct *task);
# 9 "../include/linux/uaccess.h" 2



# 1 "../arch/hexagon/include/asm/uaccess.h" 1
# 13 "../arch/hexagon/include/asm/uaccess.h"
# 1 "./arch/hexagon/include/generated/asm/sections.h" 1
# 1 "../include/asm-generic/sections.h" 1
# 35 "../include/asm-generic/sections.h"
extern char _text[], _stext[], _etext[];
extern char _data[], _sdata[], _edata[];
extern char __bss_start[], __bss_stop[];
extern char __init_begin[], __init_end[];
extern char _sinittext[], _einittext[];
extern char __start_ro_after_init[], __end_ro_after_init[];
extern char _end[];
extern char __per_cpu_load[], __per_cpu_start[], __per_cpu_end[];
extern char __kprobes_text_start[], __kprobes_text_end[];
extern char __entry_text_start[], __entry_text_end[];
extern char __start_rodata[], __end_rodata[];
extern char __irqentry_text_start[], __irqentry_text_end[];
extern char __softirqentry_text_start[], __softirqentry_text_end[];
extern char __start_once[], __end_once[];


extern char __ctors_start[], __ctors_end[];


extern char __start_opd[], __end_opd[];


extern char __noinstr_text_start[], __noinstr_text_end[];

extern const void __nosave_begin, __nosave_end;
# 70 "../include/asm-generic/sections.h"
typedef struct {
 unsigned long addr;
} func_desc_t;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool have_function_descriptors(void)
{
 return 0;
}
# 91 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool memory_contains(void *begin, void *end, void *virt,
       size_t size)
{
 return virt >= begin && virt + size <= end;
}
# 108 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool memory_intersects(void *begin, void *end, void *virt,
         size_t size)
{
 void *vend = virt + size;

 if (virt < end && vend > begin)
  return true;

 return false;
}
# 128 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool init_section_contains(void *virt, size_t size)
{
 return memory_contains(__init_begin, __init_end, virt, size);
}
# 142 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool init_section_intersects(void *virt, size_t size)
{
 return memory_intersects(__init_begin, __init_end, virt, size);
}
# 157 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_kernel_core_data(unsigned long addr)
{
 if (addr >= (unsigned long)_sdata && addr < (unsigned long)_edata)
  return true;

 if (addr >= (unsigned long)__bss_start &&
     addr < (unsigned long)__bss_stop)
  return true;

 return false;
}
# 177 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_kernel_rodata(unsigned long addr)
{
 return addr >= (unsigned long)__start_rodata &&
        addr < (unsigned long)__end_rodata;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_kernel_ro_after_init(unsigned long addr)
{
 return addr >= (unsigned long)__start_ro_after_init &&
        addr < (unsigned long)__end_ro_after_init;
}
# 196 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_kernel_inittext(unsigned long addr)
{
 return addr >= (unsigned long)_sinittext &&
        addr < (unsigned long)_einittext;
}
# 211 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __is_kernel_text(unsigned long addr)
{
 return addr >= (unsigned long)_stext &&
        addr < (unsigned long)_etext;
}
# 227 "../include/asm-generic/sections.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __is_kernel(unsigned long addr)
{
 return ((addr >= (unsigned long)_stext &&
          addr < (unsigned long)_end) ||
  (addr >= (unsigned long)__init_begin &&
   addr < (unsigned long)__init_end));
}
# 2 "./arch/hexagon/include/generated/asm/sections.h" 2
# 14 "../arch/hexagon/include/asm/uaccess.h" 2
# 25 "../arch/hexagon/include/asm/uaccess.h"
unsigned long raw_copy_from_user(void *to, const void *from,
         unsigned long n);
unsigned long raw_copy_to_user(void *to, const void *from,
       unsigned long n);



__kernel_size_t __clear_user_hexagon(void *dest, unsigned long count);


# 1 "../include/asm-generic/uaccess.h" 1
# 11 "../include/asm-generic/uaccess.h"
# 1 "../include/asm-generic/access_ok.h" 1
# 31 "../include/asm-generic/access_ok.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __access_ok(const void *ptr, unsigned long size)
{
 unsigned long limit = ((0xc0000000UL));
 unsigned long addr = (unsigned long)ptr;

 if (0 ||
     !1)
  return true;

 return (size <= limit) && (addr <= (limit - size));
}
# 12 "../include/asm-generic/uaccess.h" 2
# 135 "../include/asm-generic/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __put_user_fn(size_t size, void *ptr, void *x)
{
 return __builtin_expect(!!(raw_copy_to_user(ptr, x, size)), 0) ? -14 : 0;
}





extern int __put_user_bad(void) __attribute__((noreturn));
# 196 "../include/asm-generic/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __get_user_fn(size_t size, const void *ptr, void *x)
{
 return __builtin_expect(!!(raw_copy_from_user(x, ptr, size)), 0) ? -14 : 0;
}





extern int __get_user_bad(void) __attribute__((noreturn));
# 219 "../include/asm-generic/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) unsigned long
clear_user(void *to, unsigned long n)
{
 __might_fault("include/asm-generic/uaccess.h", 222);
 if (!__builtin_expect(!!(__access_ok(to, n)), 1))
  return n;

 return __clear_user_hexagon((to), (n));
}

# 1 "./arch/hexagon/include/generated/asm/extable.h" 1
# 1 "../include/asm-generic/extable.h" 1
# 18 "../include/asm-generic/extable.h"
struct exception_table_entry
{
 unsigned long insn, fixup;
};


struct pt_regs;
extern int fixup_exception(struct pt_regs *regs);
# 2 "./arch/hexagon/include/generated/asm/extable.h" 2
# 230 "../include/asm-generic/uaccess.h" 2

__attribute__((__warn_unused_result__)) long strncpy_from_user(char *dst, const char *src,
        long count);
__attribute__((__warn_unused_result__)) long strnlen_user(const char *src, long n);
# 36 "../arch/hexagon/include/asm/uaccess.h" 2
# 13 "../include/linux/uaccess.h" 2
# 81 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) unsigned long
__copy_from_user_inatomic(void *to, const void *from, unsigned long n)
{
 unsigned long res;

 instrument_copy_from_user_before(to, from, n);
 check_object_size(to, n, false);
 res = raw_copy_from_user(to, from, n);
 instrument_copy_from_user_after(to, from, n, res);
 return res;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) unsigned long
__copy_from_user(void *to, const void *from, unsigned long n)
{
 unsigned long res;

 __might_fault("include/linux/uaccess.h", 98);
 instrument_copy_from_user_before(to, from, n);
 if (should_fail_usercopy())
  return n;
 check_object_size(to, n, false);
 res = raw_copy_from_user(to, from, n);
 instrument_copy_from_user_after(to, from, n, res);
 return res;
}
# 121 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) unsigned long
__copy_to_user_inatomic(void *to, const void *from, unsigned long n)
{
 if (should_fail_usercopy())
  return n;
 instrument_copy_to_user(to, from, n);
 check_object_size(from, n, true);
 return raw_copy_to_user(to, from, n);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) unsigned long
__copy_to_user(void *to, const void *from, unsigned long n)
{
 __might_fault("include/linux/uaccess.h", 134);
 if (should_fail_usercopy())
  return n;
 instrument_copy_to_user(to, from, n);
 check_object_size(from, n, true);
 return raw_copy_to_user(to, from, n);
}
# 150 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) unsigned long
_inline_copy_from_user(void *to, const void *from, unsigned long n)
{
 unsigned long res = n;
 __might_fault("include/linux/uaccess.h", 154);
 if (!should_fail_usercopy() && __builtin_expect(!!(__builtin_expect(!!(__access_ok(from, n)), 1)), 1)) {





  do { } while (0);
  instrument_copy_from_user_before(to, from, n);
  res = raw_copy_from_user(to, from, n);
  instrument_copy_from_user_after(to, from, n, res);
 }
 if (__builtin_expect(!!(res), 0))
  memset(to + (n - res), 0, res);
 return res;
}
extern __attribute__((__warn_unused_result__)) unsigned long
_copy_from_user(void *, const void *, unsigned long);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) unsigned long
_inline_copy_to_user(void *to, const void *from, unsigned long n)
{
 __might_fault("include/linux/uaccess.h", 176);
 if (should_fail_usercopy())
  return n;
 if (__builtin_expect(!!(__access_ok(to, n)), 1)) {
  instrument_copy_to_user(to, from, n);
  n = raw_copy_to_user(to, from, n);
 }
 return n;
}
extern __attribute__((__warn_unused_result__)) unsigned long
_copy_to_user(void *, const void *, unsigned long);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long __attribute__((__warn_unused_result__))
copy_from_user(void *to, const void *from, unsigned long n)
{
 if (!check_copy_size(to, n, false))
  return n;

 return _inline_copy_from_user(to, from, n);



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned long __attribute__((__warn_unused_result__))
copy_to_user(void *to, const void *from, unsigned long n)
{
 if (!check_copy_size(from, n, true))
  return n;


 return _inline_copy_to_user(to, from, n);



}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __attribute__((__warn_unused_result__))
copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
{
 memcpy(dst, src, cnt);
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void pagefault_disabled_inc(void)
{
 (__current_thread_info->task)->pagefault_disabled++;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void pagefault_disabled_dec(void)
{
 (__current_thread_info->task)->pagefault_disabled--;
}
# 243 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagefault_disable(void)
{
 pagefault_disabled_inc();




 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagefault_enable(void)
{




 __asm__ __volatile__("": : :"memory");
 pagefault_disabled_dec();
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pagefault_disabled(void)
{
 return (__current_thread_info->task)->pagefault_disabled != 0;
}
# 298 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t probe_subpage_writeable(char *uaddr, size_t size)
{
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) unsigned long
__copy_from_user_inatomic_nocache(void *to, const void *from,
      unsigned long n)
{
 return __copy_from_user_inatomic(to, from, n);
}



extern __attribute__((__warn_unused_result__)) int check_zeroed_user(const void *from, size_t size);
# 365 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) int
copy_struct_from_user(void *dst, size_t ksize, const void *src,
        size_t usize)
{
 size_t size = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((ksize) - (usize)) * 0l)) : (int *)8))), ((ksize) < (usize) ? (ksize) : (usize)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(ksize))(-1)) < ( typeof(ksize))1)) * 0l)) : (int *)8))), (((typeof(ksize))(-1)) < ( typeof(ksize))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(usize))(-1)) < ( typeof(usize))1)) * 0l)) : (int *)8))), (((typeof(usize))(-1)) < ( typeof(usize))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((ksize) + 0))(-1)) < ( typeof((ksize) + 0))1)) * 0l)) : (int *)8))), (((typeof((ksize) + 0))(-1)) < ( typeof((ksize) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((usize) + 0))(-1)) < ( typeof((usize) + 0))1)) * 0l)) : (int *)8))), (((typeof((usize) + 0))(-1)) < ( typeof((usize) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(ksize) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(ksize))(-1)) < ( typeof(ksize))1)) * 0l)) : (int *)8))), (((typeof(ksize))(-1)) < ( typeof(ksize))1), 0), ksize, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(usize) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(usize))(-1)) < ( typeof(usize))1)) * 0l)) : (int *)8))), (((typeof(usize))(-1)) < ( typeof(usize))1), 0), usize, -1) >= 0)), "min" "(" "ksize" ", " "usize" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_209 = (ksize); __auto_type __UNIQUE_ID_y_210 = (usize); ((__UNIQUE_ID_x_209) < (__UNIQUE_ID_y_210) ? (__UNIQUE_ID_x_209) : (__UNIQUE_ID_y_210)); }); }));
 size_t rest = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((ksize) - (usize)) * 0l)) : (int *)8))), ((ksize) > (usize) ? (ksize) : (usize)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(ksize))(-1)) < ( typeof(ksize))1)) * 0l)) : (int *)8))), (((typeof(ksize))(-1)) < ( typeof(ksize))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(usize))(-1)) < ( typeof(usize))1)) * 0l)) : (int *)8))), (((typeof(usize))(-1)) < ( typeof(usize))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((ksize) + 0))(-1)) < ( typeof((ksize) + 0))1)) * 0l)) : (int *)8))), (((typeof((ksize) + 0))(-1)) < ( typeof((ksize) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((usize) + 0))(-1)) < ( typeof((usize) + 0))1)) * 0l)) : (int *)8))), (((typeof((usize) + 0))(-1)) < ( typeof((usize) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(ksize) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(ksize))(-1)) < ( typeof(ksize))1)) * 0l)) : (int *)8))), (((typeof(ksize))(-1)) < ( typeof(ksize))1), 0), ksize, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(usize) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(usize))(-1)) < ( typeof(usize))1)) * 0l)) : (int *)8))), (((typeof(usize))(-1)) < ( typeof(usize))1), 0), usize, -1) >= 0)), "max" "(" "ksize" ", " "usize" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_211 = (ksize); __auto_type __UNIQUE_ID_y_212 = (usize); ((__UNIQUE_ID_x_211) > (__UNIQUE_ID_y_212) ? (__UNIQUE_ID_x_211) : (__UNIQUE_ID_y_212)); }); })) - size;


 if (({ bool __ret_do_once = !!(ksize > __builtin_object_size(dst, 1)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/uaccess.h", 373, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return -7;


 if (usize < ksize) {
  memset(dst + size, 0, rest);
 } else if (usize > ksize) {
  int ret = check_zeroed_user(src + size, rest);
  if (ret <= 0)
   return ret ?: -7;
 }

 if (copy_from_user(dst, src, size))
  return -14;
 return 0;
}

bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size);

long copy_from_kernel_nofault(void *dst, const void *src, size_t size);
long __attribute__((__no_instrument_function__)) copy_to_kernel_nofault(void *dst, const void *src, size_t size);

long copy_from_user_nofault(void *dst, const void *src, size_t size);
long __attribute__((__no_instrument_function__)) copy_to_user_nofault(void *dst, const void *src,
  size_t size);

long strncpy_from_kernel_nofault(char *dst, const void *unsafe_addr,
  long count);

long strncpy_from_user_nofault(char *dst, const void *unsafe_addr,
  long count);
long strnlen_user_nofault(const void *unsafe_addr, long count);
# 445 "../include/linux/uaccess.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long user_access_save(void) { return 0UL; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void user_access_restore(unsigned long flags) { }
# 458 "../include/linux/uaccess.h"
void __attribute__((__noreturn__)) usercopy_abort(const char *name, const char *detail,
          bool to_user, unsigned long offset,
          unsigned long len);
# 14 "../include/linux/sched/task.h" 2

struct task_struct;
struct rusage;
union thread_union;
struct css_set;




struct kernel_clone_args {
 u64 flags;
 int *pidfd;
 int *child_tid;
 int *parent_tid;
 const char *name;
 int exit_signal;
 u32 kthread:1;
 u32 io_thread:1;
 u32 user_worker:1;
 u32 no_files:1;
 unsigned long stack;
 unsigned long stack_size;
 unsigned long tls;
 pid_t *set_tid;

 size_t set_tid_size;
 int cgroup;
 int idle;
 int (*fn)(void *);
 void *fn_arg;
 struct cgroup *cgrp;
 struct css_set *cset;
};







extern rwlock_t tasklist_lock;
extern spinlock_t mmlist_lock;

extern union thread_union init_thread_union;
extern struct task_struct init_task;

extern int lockdep_tasklist_lock_is_held(void);

extern void schedule_tail(struct task_struct *prev);
extern void init_idle(struct task_struct *idle, int cpu);

extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
extern int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
extern void sched_cancel_fork(struct task_struct *p);
extern void sched_post_fork(struct task_struct *p);
extern void sched_dead(struct task_struct *p);

void __attribute__((__noreturn__)) do_task_dead(void);
void __attribute__((__noreturn__)) make_task_dead(int signr);

extern void mm_cache_init(void);
extern void proc_caches_init(void);

extern void fork_init(void);

extern void release_task(struct task_struct * p);

extern int copy_thread(struct task_struct *, const struct kernel_clone_args *);

extern void flush_thread(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void exit_thread(struct task_struct *tsk)
{
}

extern __attribute__((__noreturn__)) void do_group_exit(int);

extern void exit_files(struct task_struct *);
extern void exit_itimers(struct task_struct *);

extern pid_t kernel_clone(struct kernel_clone_args *kargs);
struct task_struct *copy_process(struct pid *pid, int trace, int node,
     struct kernel_clone_args *args);
struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
struct task_struct *fork_idle(int);
extern pid_t kernel_thread(int (*fn)(void *), void *arg, const char *name,
       unsigned long flags);
extern pid_t user_mode_thread(int (*fn)(void *), void *arg, unsigned long flags);
extern long kernel_wait4(pid_t, int *, int, struct rusage *);
int kernel_wait(pid_t pid, int *stat);

extern void free_task(struct task_struct *tsk);
# 117 "../include/linux/sched/task.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *get_task_struct(struct task_struct *t)
{
 refcount_inc(&t->usage);
 return t;
}

extern void __put_task_struct(struct task_struct *t);
extern void __put_task_struct_rcu_cb(struct callback_head *rhp);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_task_struct(struct task_struct *t)
{
 if (!refcount_dec_and_test(&t->usage))
  return;





 if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) {
  static DEFINE_WAIT_OVERRIDE_MAP(put_task_map, LD_WAIT_SLEEP);

  lock_map_acquire_try(&put_task_map);
  __put_task_struct(t);
  lock_map_release(&put_task_map);
  return;
 }
# 164 "../include/linux/sched/task.h"
 call_rcu(&t->rcu, __put_task_struct_rcu_cb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_put_task(void *p) { struct task_struct * _T = *(struct task_struct * *)p; if (_T) put_task_struct(_T); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_task_struct_many(struct task_struct *t, int nr)
{
 if (refcount_sub_and_test(nr, &t->usage))
  __put_task_struct(t);
}
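get_task_struct()/put_task_struct()/put_task_struct_many() above follow the standard refcount pattern: gets increment the count, and the put that drops it to zero runs the teardown exactly once. A toy userspace sketch with C11 atomics (demo_* names invented for illustration; the kernel uses refcount_t, which additionally saturates on overflow):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy object with an embedded reference count. */
struct demo_obj {
	atomic_int usage;
	bool freed; /* stands in for the real destructor's side effects */
};

static struct demo_obj *demo_get(struct demo_obj *o)
{
	atomic_fetch_add(&o->usage, 1);
	return o;
}

static void demo_put(struct demo_obj *o)
{
	/* fetch_sub returns the old value: old == 1 means this caller
	 * dropped the last reference, so only it runs the teardown. */
	if (atomic_fetch_sub(&o->usage, 1) == 1)
		o->freed = true;
}
```

put_task_struct_many() is the same idea with a subtrahend of nr instead of 1.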

void put_task_struct_rcu_user(struct task_struct *task);


void release_thread(struct task_struct *dead_task);
# 191 "../include/linux/sched/task.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_thread_struct_whitelist(unsigned long *offset,
      unsigned long *size)
{
 *offset = 0;

 *size = (sizeof(struct task_struct)) - __builtin_offsetof(struct task_struct, thread);
}
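arch_thread_struct_whitelist() above describes the hardened-usercopy whitelist as everything from offsetof(struct task_struct, thread) to the end of the struct, which assumes the arch thread state is the final member. The arithmetic can be checked on a stand-in struct (demo_* names are ours):

```c
#include <stddef.h>

/* Stand-in for task_struct with the arch thread state as the final
 * member, as the whitelist computation above assumes. */
struct demo_task {
	long state;
	char comm[16];
	struct { long regs[4]; } thread; /* must stay last */
};

static void demo_whitelist(unsigned long *offset, unsigned long *size)
{
	*offset = offsetof(struct demo_task, thread);
	*size   = sizeof(struct demo_task) - offsetof(struct demo_task, thread);
}
```

offset + size lands exactly at the end of the struct, so the whitelisted region is the thread member plus any tail padding.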
# 206 "../include/linux/sched/task.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_struct *task_stack_vm_area(const struct task_struct *t)
{
 return ((void *)0);
}
# 222 "../include/linux/sched/task.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_lock(struct task_struct *p)
{
 spin_lock(&p->alloc_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void task_unlock(struct task_struct *p)
{
 spin_unlock(&p->alloc_lock);
}

DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T))
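The task_lock scope-guard machinery above (constructor takes the lock, destructor releases it when the guarded variable leaves scope) ultimately rests on the compiler's cleanup attribute. A hedged GCC/Clang-only sketch of that mechanism, with invented demo_* names:

```c
/* Scope-based unlock via __attribute__((cleanup)), the extension the
 * kernel's guard()/DEFINE_GUARD macros build on. */
static int demo_lock_depth;

static void demo_lock(void) { demo_lock_depth++; }

static void demo_unlock_fn(int **unused)
{
	(void)unused;
	demo_lock_depth--;
}

static int demo_critical_section(void)
{
	demo_lock();
	/* Marker variable: its cleanup handler drops the lock when the
	 * function returns, on every exit path. */
	__attribute__((cleanup(demo_unlock_fn))) int *guard = (int *)0;
	(void)guard;
	return demo_lock_depth; /* lock still held at this point */
}
```

The payoff is that early returns and error paths can never leak the lock.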
# 10 "../include/linux/sched/signal.h" 2
# 1 "../include/linux/cred.h" 1
# 13 "../include/linux/cred.h"
# 1 "../include/linux/key.h" 1
# 20 "../include/linux/key.h"
# 1 "../include/linux/assoc_array.h" 1
# 22 "../include/linux/assoc_array.h"
struct assoc_array {
 struct assoc_array_ptr *root;
 unsigned long nr_leaves_on_tree;
};




struct assoc_array_ops {

 unsigned long (*get_key_chunk)(const void *index_key, int level);


 unsigned long (*get_object_key_chunk)(const void *object, int level);


 bool (*compare_object)(const void *object, const void *index_key);




 int (*diff_objects)(const void *object, const void *index_key);


 void (*free_object)(void *object);
};




struct assoc_array_edit;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void assoc_array_init(struct assoc_array *array)
{
 array->root = ((void *)0);
 array->nr_leaves_on_tree = 0;
}

extern int assoc_array_iterate(const struct assoc_array *array,
          int (*iterator)(const void *object,
            void *iterator_data),
          void *iterator_data);
extern void *assoc_array_find(const struct assoc_array *array,
         const struct assoc_array_ops *ops,
         const void *index_key);
extern void assoc_array_destroy(struct assoc_array *array,
    const struct assoc_array_ops *ops);
extern struct assoc_array_edit *assoc_array_insert(struct assoc_array *array,
         const struct assoc_array_ops *ops,
         const void *index_key,
         void *object);
extern void assoc_array_insert_set_object(struct assoc_array_edit *edit,
       void *object);
extern struct assoc_array_edit *assoc_array_delete(struct assoc_array *array,
         const struct assoc_array_ops *ops,
         const void *index_key);
extern struct assoc_array_edit *assoc_array_clear(struct assoc_array *array,
        const struct assoc_array_ops *ops);
extern void assoc_array_apply_edit(struct assoc_array_edit *edit);
extern void assoc_array_cancel_edit(struct assoc_array_edit *edit);
extern int assoc_array_gc(struct assoc_array *array,
     const struct assoc_array_ops *ops,
     bool (*iterator)(void *object, void *iterator_data),
     void *iterator_data);
# 21 "../include/linux/key.h" 2







typedef int32_t key_serial_t;


typedef uint32_t key_perm_t;

struct key;
struct net;
# 77 "../include/linux/key.h"
enum key_need_perm {
 KEY_NEED_UNSPECIFIED,
 KEY_NEED_VIEW,
 KEY_NEED_READ,
 KEY_NEED_WRITE,
 KEY_NEED_SEARCH,
 KEY_NEED_LINK,
 KEY_NEED_SETATTR,
 KEY_NEED_UNLINK,
 KEY_SYSADMIN_OVERRIDE,
 KEY_AUTHTOKEN_OVERRIDE,
 KEY_DEFER_PERM_CHECK,
};

enum key_lookup_flag {
 KEY_LOOKUP_CREATE = 0x01,
 KEY_LOOKUP_PARTIAL = 0x02,
 KEY_LOOKUP_ALL = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL),
};

struct seq_file;
struct user_struct;
struct signal_struct;
struct cred;

struct key_type;
struct key_owner;
struct key_tag;
struct keyring_list;
struct keyring_name;

struct key_tag {
 struct callback_head rcu;
 refcount_t usage;
 bool removed;
};

struct keyring_index_key {

 unsigned long hash;
 union {
  struct {

   u16 desc_len;
   char desc[sizeof(long) - 2];




  };
  unsigned long x;
 };
 struct key_type *type;
 struct key_tag *domain_tag;
 const char *description;
};

union key_payload {
 void *rcu_data0;
 void *data[4];
};
# 153 "../include/linux/key.h"
typedef struct __key_reference_with_attributes *key_ref_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) key_ref_t make_key_ref(const struct key *key,
         bool possession)
{
 return (key_ref_t) ((unsigned long) key | possession);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct key *key_ref_to_ptr(const key_ref_t key_ref)
{
 return (struct key *) ((unsigned long) key_ref & ~1UL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_key_possessed(const key_ref_t key_ref)
{
 return (unsigned long) key_ref & 1UL;
}
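make_key_ref()/key_ref_to_ptr()/is_key_possessed() above implement pointer tagging: struct key is at least 2-byte aligned, so bit 0 of the pointer is free to carry the "possessed" flag. A self-contained userspace re-creation of the scheme (demo_* names are illustrative, not kernel API):

```c
#include <stdint.h>

struct demo_key { int serial; }; /* int member => at least 4-byte aligned */

typedef uintptr_t demo_key_ref_t;

static demo_key_ref_t demo_make_ref(const struct demo_key *key, int possessed)
{
	/* Stash the flag in the low bit, which alignment keeps clear. */
	return (uintptr_t)key | (possessed ? 1u : 0u);
}

static struct demo_key *demo_ref_to_ptr(demo_key_ref_t ref)
{
	return (struct demo_key *)(ref & ~(uintptr_t)1); /* mask flag off */
}

static int demo_is_possessed(demo_key_ref_t ref)
{
	return ref & 1;
}
```

The trick lets one word carry both the key pointer and a per-reference permission bit with no extra allocation.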

typedef int (*key_restrict_link_func_t)(struct key *dest_keyring,
     const struct key_type *type,
     const union key_payload *payload,
     struct key *restriction_key);

struct key_restriction {
 key_restrict_link_func_t check;
 struct key *key;
 struct key_type *keytype;
};

enum key_state {
 KEY_IS_UNINSTANTIATED,
 KEY_IS_POSITIVE,
};
# 195 "../include/linux/key.h"
struct key {
 refcount_t usage;
 key_serial_t serial;
 union {
  struct list_head graveyard_link;
  struct rb_node serial_node;
 };



 struct rw_semaphore sem;
 struct key_user *user;
 void *security;
 union {
  time64_t expiry;
  time64_t revoked_at;
 };
 time64_t last_used_at;
 kuid_t uid;
 kgid_t gid;
 key_perm_t perm;
 unsigned short quotalen;
 unsigned short datalen;



 short state;






 unsigned long flags;
# 245 "../include/linux/key.h"
 union {
  struct keyring_index_key index_key;
  struct {
   unsigned long hash;
   unsigned long len_desc;
   struct key_type *type;
   struct key_tag *domain_tag;
   char *description;
  };
 };





 union {
  union key_payload payload;
  struct {

   struct list_head name_link;
   struct assoc_array keys;
  };
 };
# 280 "../include/linux/key.h"
 struct key_restriction *restrict_link;
};

extern struct key *key_alloc(struct key_type *type,
        const char *desc,
        kuid_t uid, kgid_t gid,
        const struct cred *cred,
        key_perm_t perm,
        unsigned long flags,
        struct key_restriction *restrict_link);
# 300 "../include/linux/key.h"
extern void key_revoke(struct key *key);
extern void key_invalidate(struct key *key);
extern void key_put(struct key *key);
extern bool key_put_tag(struct key_tag *tag);
extern void key_remove_domain(struct key_tag *domain_tag);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct key *__key_get(struct key *key)
{
 refcount_inc(&key->usage);
 return key;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct key *key_get(struct key *key)
{
 return key ? __key_get(key) : key;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void key_ref_put(key_ref_t key_ref)
{
 key_put(key_ref_to_ptr(key_ref));
}

extern struct key *request_key_tag(struct key_type *type,
       const char *description,
       struct key_tag *domain_tag,
       const char *callout_info);

extern struct key *request_key_rcu(struct key_type *type,
       const char *description,
       struct key_tag *domain_tag);

extern struct key *request_key_with_auxdata(struct key_type *type,
         const char *description,
         struct key_tag *domain_tag,
         const void *callout_info,
         size_t callout_len,
         void *aux);
# 346 "../include/linux/key.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct key *request_key(struct key_type *type,
          const char *description,
          const char *callout_info)
{
 return request_key_tag(type, description, ((void *)0), callout_info);
}
# 385 "../include/linux/key.h"
extern int wait_for_key_construction(struct key *key, bool intr);

extern int key_validate(const struct key *key);

extern key_ref_t key_create(key_ref_t keyring,
       const char *type,
       const char *description,
       const void *payload,
       size_t plen,
       key_perm_t perm,
       unsigned long flags);

extern key_ref_t key_create_or_update(key_ref_t keyring,
          const char *type,
          const char *description,
          const void *payload,
          size_t plen,
          key_perm_t perm,
          unsigned long flags);

extern int key_update(key_ref_t key,
        const void *payload,
        size_t plen);

extern int key_link(struct key *keyring,
      struct key *key);

extern int key_move(struct key *key,
      struct key *from_keyring,
      struct key *to_keyring,
      unsigned int flags);

extern int key_unlink(struct key *keyring,
        struct key *key);

extern struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid,
     const struct cred *cred,
     key_perm_t perm,
     unsigned long flags,
     struct key_restriction *restrict_link,
     struct key *dest);

extern int restrict_link_reject(struct key *keyring,
    const struct key_type *type,
    const union key_payload *payload,
    struct key *restriction_key);

extern int keyring_clear(struct key *keyring);

extern key_ref_t keyring_search(key_ref_t keyring,
    struct key_type *type,
    const char *description,
    bool recurse);

extern int keyring_add_key(struct key *keyring,
      struct key *key);

extern int keyring_restrict(key_ref_t keyring, const char *type,
       const char *restriction);

extern struct key *key_lookup(key_serial_t id);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) key_serial_t key_serial(const struct key *key)
{
 return key ? key->serial : 0;
}

extern void key_set_timeout(struct key *, unsigned);

extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags,
     enum key_need_perm need_perm);
extern void key_free_user_ns(struct user_namespace *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) short key_read_state(const struct key *key)
{

 return smp_load_acquire(&key->state);
}
# 471 "../include/linux/key.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool key_is_positive(const struct key *key)
{
 return key_read_state(key) == KEY_IS_POSITIVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool key_is_negative(const struct key *key)
{
 return key_read_state(key) < 0;
}
# 496 "../include/linux/key.h"
extern int install_thread_keyring_to_cred(struct cred *cred);
extern void key_fsuid_changed(struct cred *new_cred);
extern void key_fsgid_changed(struct cred *new_cred);
extern void key_init(void);
# 14 "../include/linux/cred.h" 2




# 1 "../include/linux/sched/user.h" 1
# 14 "../include/linux/sched/user.h"
struct user_struct {
 refcount_t __count;

 struct percpu_counter epoll_watches;

 unsigned long unix_inflight;
 atomic_long_t pipe_bufs;


 struct hlist_node uidhash_node;
 kuid_t uid;




 atomic_long_t locked_vm;






 struct ratelimit_state ratelimit;
};

extern int uids_sysfs_init(void);

extern struct user_struct *find_user(kuid_t);

extern struct user_struct root_user;




extern struct user_struct * alloc_uid(kuid_t);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_struct *get_uid(struct user_struct *u)
{
 refcount_inc(&u->__count);
 return u;
}
extern void free_uid(struct user_struct *);
# 19 "../include/linux/cred.h" 2

struct cred;
struct inode;




struct group_info {
 refcount_t usage;
 int ngroups;
 kgid_t gid[];
} ;
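struct group_info above ends in a flexible array member (kgid_t gid[]), so an allocator like groups_alloc() must size the allocation as the fixed header plus ngroups trailing elements. The pattern, sketched in userspace C with invented names:

```c
#include <stdlib.h>

/* Userspace model of a flexible-array allocation like groups_alloc(). */
struct demo_groups {
	int ngroups;
	unsigned int gid[]; /* flexible array member, sized at alloc time */
};

static struct demo_groups *demo_groups_alloc(int n)
{
	/* Header plus n trailing elements in one contiguous allocation. */
	struct demo_groups *g =
		malloc(sizeof(*g) + (size_t)n * sizeof(g->gid[0]));
	if (g)
		g->ngroups = n;
	return g;
}
```

The kernel's struct_size() helper computes the same expression with overflow checking.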
# 41 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct group_info *get_group_info(struct group_info *gi)
{
 refcount_inc(&gi->usage);
 return gi;
}
# 58 "../include/linux/cred.h"
extern struct group_info *groups_alloc(int);
extern void groups_free(struct group_info *);

extern int in_group_p(kgid_t);
extern int in_egroup_p(kgid_t);
extern int groups_search(const struct group_info *, kgid_t);

extern int set_current_groups(struct group_info *);
extern void set_groups(struct cred *, struct group_info *);
extern bool may_setgroups(void);
extern void groups_sort(struct group_info *);
# 111 "../include/linux/cred.h"
struct cred {
 atomic_long_t usage;
 kuid_t uid;
 kgid_t gid;
 kuid_t suid;
 kgid_t sgid;
 kuid_t euid;
 kgid_t egid;
 kuid_t fsuid;
 kgid_t fsgid;
 unsigned securebits;
 kernel_cap_t cap_inheritable;
 kernel_cap_t cap_permitted;
 kernel_cap_t cap_effective;
 kernel_cap_t cap_bset;
 kernel_cap_t cap_ambient;

 unsigned char jit_keyring;

 struct key *session_keyring;
 struct key *process_keyring;
 struct key *thread_keyring;
 struct key *request_key_auth;




 struct user_struct *user;
 struct user_namespace *user_ns;
 struct ucounts *ucounts;
 struct group_info *group_info;

 union {
  int non_rcu;
  struct callback_head rcu;
 };
} ;

extern void __put_cred(struct cred *);
extern void exit_creds(struct task_struct *);
extern int copy_creds(struct task_struct *, unsigned long);
extern const struct cred *get_task_cred(struct task_struct *);
extern struct cred *cred_alloc_blank(void);
extern struct cred *prepare_creds(void);
extern struct cred *prepare_exec_creds(void);
extern int commit_creds(struct cred *);
extern void abort_creds(struct cred *);
extern const struct cred *override_creds(const struct cred *);
extern void revert_creds(const struct cred *);
extern struct cred *prepare_kernel_cred(struct task_struct *);
extern int set_security_override(struct cred *, u32);
extern int set_security_override_from_ctx(struct cred *, const char *);
extern int set_create_files_as(struct cred *, struct inode *);
extern int cred_fscmp(const struct cred *, const struct cred *);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) cred_init(void);
extern int set_cred_ucounts(struct cred *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cap_ambient_invariant_ok(const struct cred *cred)
{
 return cap_issubset(cred->cap_ambient,
       cap_intersect(cred->cap_permitted,
       cred->cap_inheritable));
}
# 183 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cred *get_new_cred_many(struct cred *cred, int nr)
{
 atomic_long_add(nr, &cred->usage);
 return cred;
}
# 196 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cred *get_new_cred(struct cred *cred)
{
 return get_new_cred_many(cred, 1);
}
# 215 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cred *get_cred_many(const struct cred *cred, int nr)
{
 struct cred *nonconst_cred = (struct cred *) cred;
 if (!cred)
  return cred;
 nonconst_cred->non_rcu = 0;
 return get_new_cred_many(nonconst_cred, nr);
}
# 233 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cred *get_cred(const struct cred *cred)
{
 return get_cred_many(cred, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cred *get_cred_rcu(const struct cred *cred)
{
 struct cred *nonconst_cred = (struct cred *) cred;
 if (!cred)
  return ((void *)0);
 if (!atomic_long_inc_not_zero(&nonconst_cred->usage))
  return ((void *)0);
 nonconst_cred->non_rcu = 0;
 return cred;
}
# 261 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_cred_many(const struct cred *_cred, int nr)
{
 struct cred *cred = (struct cred *) _cred;

 if (cred) {
  if (atomic_long_sub_and_test(nr, &cred->usage))
   __put_cred(cred);
 }
}
# 278 "../include/linux/cred.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_cred(const struct cred *cred)
{
 put_cred_many(cred, 1);
}
# 384 "../include/linux/cred.h"
extern struct user_namespace init_user_ns;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_namespace *current_user_ns(void)
{
 return &init_user_ns;
}
# 11 "../include/linux/sched/signal.h" 2


# 1 "../include/linux/posix-timers.h" 1




# 1 "../include/linux/alarmtimer.h" 1








struct rtc_device;

enum alarmtimer_type {
 ALARM_REALTIME,
 ALARM_BOOTTIME,


 ALARM_NUMTYPE,


 ALARM_REALTIME_FREEZER,
 ALARM_BOOTTIME_FREEZER,
};

enum alarmtimer_restart {
 ALARMTIMER_NORESTART,
 ALARMTIMER_RESTART,
};
# 42 "../include/linux/alarmtimer.h"
struct alarm {
 struct timerqueue_node node;
 struct hrtimer timer;
 enum alarmtimer_restart (*function)(struct alarm *, ktime_t now);
 enum alarmtimer_type type;
 int state;
 void *data;
};

void alarm_init(struct alarm *alarm, enum alarmtimer_type type,
  enum alarmtimer_restart (*function)(struct alarm *, ktime_t));
void alarm_start(struct alarm *alarm, ktime_t start);
void alarm_start_relative(struct alarm *alarm, ktime_t start);
void alarm_restart(struct alarm *alarm);
int alarm_try_to_cancel(struct alarm *alarm);
int alarm_cancel(struct alarm *alarm);

u64 alarm_forward(struct alarm *alarm, ktime_t now, ktime_t interval);
u64 alarm_forward_now(struct alarm *alarm, ktime_t interval);
ktime_t alarm_expires_remaining(const struct alarm *alarm);



struct rtc_device *alarmtimer_get_rtcdev(void);
# 6 "../include/linux/posix-timers.h" 2






struct kernel_siginfo;
struct task_struct;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) clockid_t make_process_cpuclock(const unsigned int pid,
  const clockid_t clock)
{
 return ((~pid) << 3) | clock;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) clockid_t make_thread_cpuclock(const unsigned int tid,
  const clockid_t clock)
{
 return make_process_cpuclock(tid, clock | 4);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) clockid_t fd_to_clockid(const int fd)
{
 return make_process_cpuclock((unsigned int) fd, 3);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int clockid_to_fd(const clockid_t clk)
{
 return ~(clk >> 3);
}
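make_process_cpuclock()/fd_to_clockid()/clockid_to_fd() above pack either a PID or a file descriptor, bitwise-inverted, into the upper bits of a clockid_t, with the low three bits selecting the clock type (type 3 for fd-backed clocks). The round trip can be checked directly; this mirrors the helpers above and, like the kernel, relies on arithmetic right shift of negative ints:

```c
/* Mirror of the clockid packing helpers above: ~fd in the upper bits,
 * clock type 3 in the low three bits. */
static int demo_fd_to_clockid(int fd)
{
	return (int)((~(unsigned int)fd) << 3) | 3;
}

static int demo_clockid_to_fd(int clk)
{
	return ~(clk >> 3); /* assumes arithmetic right shift */
}
```

Inverting the fd keeps these synthetic clockids negative, so they can never collide with the small non-negative predefined CLOCK_* ids.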
# 47 "../include/linux/posix-timers.h"
struct cpu_timer {
 struct timerqueue_node node;
 struct timerqueue_head *head;
 struct pid *pid;
 struct list_head elist;
 int firing;
 struct task_struct *handling;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_timer_enqueue(struct timerqueue_head *head,
         struct cpu_timer *ctmr)
{
 ctmr->head = head;
 return timerqueue_add(head, &ctmr->node);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_timer_queued(struct cpu_timer *ctmr)
{
 return !!ctmr->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_timer_dequeue(struct cpu_timer *ctmr)
{
 if (cpu_timer_queued(ctmr)) {
  timerqueue_del(ctmr->head, &ctmr->node);
  ctmr->head = ((void *)0);
  return true;
 }
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 cpu_timer_getexpires(struct cpu_timer *ctmr)
{
 return ctmr->node.expires;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_timer_setexpires(struct cpu_timer *ctmr, u64 exp)
{
 ctmr->node.expires = exp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void posix_cputimers_init(struct posix_cputimers *pct)
{
 memset(pct, 0, sizeof(*pct));
 pct->bases[0].nextevt = ((u64)~0ULL);
 pct->bases[1].nextevt = ((u64)~0ULL);
 pct->bases[2].nextevt = ((u64)~0ULL);
}

void posix_cputimers_group_init(struct posix_cputimers *pct, u64 cpu_limit);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void posix_cputimers_rt_watchdog(struct posix_cputimers *pct,
            u64 runtime)
{
 pct->bases[2].nextevt = runtime;
}
# 131 "../include/linux/posix-timers.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_posix_cputimers_work(struct task_struct *p) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void posix_cputimers_init_work(void) { }
# 160 "../include/linux/posix-timers.h"
struct k_itimer {
 struct list_head list;
 struct hlist_node t_hash;
 spinlock_t it_lock;
 const struct k_clock *kclock;
 clockid_t it_clock;
 timer_t it_id;
 int it_active;
 s64 it_overrun;
 s64 it_overrun_last;
 int it_requeue_pending;
 int it_sigev_notify;
 ktime_t it_interval;
 struct signal_struct *it_signal;
 union {
  struct pid *it_pid;
  struct task_struct *it_process;
 };
 struct sigqueue *sigq;
 union {
  struct {
   struct hrtimer timer;
  } real;
  struct cpu_timer cpu;
  struct {
   struct alarm alarmtimer;
  } alarm;
 } it;
 struct callback_head rcu;
};

void run_posix_cpu_timers(void);
void posix_cpu_timers_exit(struct task_struct *task);
void posix_cpu_timers_exit_group(struct task_struct *task);
void set_process_cpu_timer(struct task_struct *task, unsigned int clock_idx,
      u64 *newval, u64 *oldval);

int update_rlimit_cpu(struct task_struct *task, unsigned long rlim_new);

void posixtimer_rearm(struct kernel_siginfo *info);
# 14 "../include/linux/sched/signal.h" 2







struct sighand_struct {
 spinlock_t siglock;
 refcount_t count;
 wait_queue_head_t signalfd_wqh;
 struct k_sigaction action[64];
};




struct pacct_struct {
 int ac_flag;
 long ac_exitcode;
 unsigned long ac_mem;
 u64 ac_utime, ac_stime;
 unsigned long ac_minflt, ac_majflt;
};

struct cpu_itimer {
 u64 expires;
 u64 incr;
};





struct task_cputime_atomic {
 atomic64_t utime;
 atomic64_t stime;
 atomic64_t sum_exec_runtime;
};
# 67 "../include/linux/sched/signal.h"
struct thread_group_cputimer {
 struct task_cputime_atomic cputime_atomic;
};

struct multiprocess_signals {
 sigset_t signal;
 struct hlist_node node;
};

struct core_thread {
 struct task_struct *task;
 struct core_thread *next;
};

struct core_state {
 atomic_t nr_threads;
 struct core_thread dumper;
 struct completion startup;
};
# 94 "../include/linux/sched/signal.h"
struct signal_struct {
 refcount_t sigcnt;
 atomic_t live;
 int nr_threads;
 int quick_threads;
 struct list_head thread_head;

 wait_queue_head_t wait_chldexit;


 struct task_struct *curr_target;


 struct sigpending shared_pending;


 struct hlist_head multiprocess;


 int group_exit_code;

 int notify_count;
 struct task_struct *group_exec_task;


 int group_stop_count;
 unsigned int flags;

 struct core_state *core_state;
# 133 "../include/linux/sched/signal.h"
 unsigned int is_child_subreaper:1;
 unsigned int has_child_subreaper:1;




 unsigned int next_posix_timer_id;
 struct list_head posix_timers;


 struct hrtimer real_timer;
 ktime_t it_real_incr;






 struct cpu_itimer it[2];





 struct thread_group_cputimer cputimer;



 struct posix_cputimers posix_cputimers;


 struct pid *pids[PIDTYPE_MAX];





 struct pid *tty_old_pgrp;


 int leader;

 struct tty_struct *tty;
# 186 "../include/linux/sched/signal.h"
 seqlock_t stats_lock;
 u64 utime, stime, cutime, cstime;
 u64 gtime;
 u64 cgtime;
 struct prev_cputime prev_cputime;
 unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
 unsigned long min_flt, maj_flt, cmin_flt, cmaj_flt;
 unsigned long inblock, oublock, cinblock, coublock;
 unsigned long maxrss, cmaxrss;
 struct task_io_accounting ioac;







 unsigned long long sum_sched_runtime;
# 214 "../include/linux/sched/signal.h"
 struct rlimit rlim[16];
# 231 "../include/linux/sched/signal.h"
 bool oom_flag_origin;
 short oom_score_adj;
 short oom_score_adj_min;

 struct mm_struct *oom_mm;


 struct mutex cred_guard_mutex;





 struct rw_semaphore exec_update_lock;




} ;
# 269 "../include/linux/sched/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void signal_set_stop_flags(struct signal_struct *sig,
      unsigned int flags)
{
 ({ int __ret_warn_on = !!(sig->flags & 0x00000004); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/sched/signal.h", 272, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 sig->flags = (sig->flags & ~((0x00000010|0x00000020) | 0x00000001 | 0x00000002)) | flags;
}

extern void flush_signals(struct task_struct *);
extern void ignore_signals(struct task_struct *);
extern void flush_signal_handlers(struct task_struct *, int force_default);
extern int dequeue_signal(struct task_struct *task, sigset_t *mask,
     kernel_siginfo_t *info, enum pid_type *type);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kernel_dequeue_signal(void)
{
 struct task_struct *task = (__current_thread_info->task);
 kernel_siginfo_t __info;
 enum pid_type __type;
 int ret;

 spin_lock_irq(&task->sighand->siglock);
 ret = dequeue_signal(task, &task->blocked, &__info, &__type);
 spin_unlock_irq(&task->sighand->siglock);

 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kernel_signal_stop(void)
{
 spin_lock_irq(&(__current_thread_info->task)->sighand->siglock);
 if ((__current_thread_info->task)->jobctl & (1UL << 16)) {
  (__current_thread_info->task)->jobctl |= (1UL << 26);
  do { unsigned long flags; do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = _raw_spin_lock_irqsave(&(__current_thread_info->task)->pi_lock); } while (0); do { } while (0); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_214(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((__current_thread_info->task)->__state) == sizeof(char) || sizeof((__current_thread_info->task)->__state) == sizeof(short) || sizeof((__current_thread_info->task)->__state) == sizeof(int) || sizeof((__current_thread_info->task)->__state) == sizeof(long)) || sizeof((__current_thread_info->task)->__state) == sizeof(long long))) __compiletime_assert_214(); } while (0); do { *(volatile typeof((__current_thread_info->task)->__state) *)&((__current_thread_info->task)->__state) = (((0x00000100 | 0x00000004))); } while (0); } while (0); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); _raw_spin_unlock_irqrestore(&(__current_thread_info->task)->pi_lock, flags); } while (0); } while (0);
 }
 spin_unlock_irq(&(__current_thread_info->task)->sighand->siglock);

 schedule();
}

int force_sig_fault_to_task(int sig, int code, void *addr,
       struct task_struct *t);
int force_sig_fault(int sig, int code, void *addr);
int send_sig_fault(int sig, int code, void *addr, struct task_struct *t);

int force_sig_mceerr(int code, void *, short);
int send_sig_mceerr(int code, void *, short, struct task_struct *);

int force_sig_bnderr(void *addr, void *lower, void *upper);
int force_sig_pkuerr(void *addr, u32 pkey);
int send_sig_perf(void *addr, u32 type, u64 sig_data);

int force_sig_ptrace_errno_trap(int errno, void *addr);
int force_sig_fault_trapno(int sig, int code, void *addr, int trapno);
int send_sig_fault_trapno(int sig, int code, void *addr, int trapno,
   struct task_struct *t);
int force_sig_seccomp(int syscall, int reason, bool force_coredump);

extern int send_sig_info(int, struct kernel_siginfo *, struct task_struct *);
extern void force_sigsegv(int sig);
extern int force_sig_info(struct kernel_siginfo *);
extern int __kill_pgrp_info(int sig, struct kernel_siginfo *info, struct pid *pgrp);
extern int kill_pid_info(int sig, struct kernel_siginfo *info, struct pid *pid);
extern int kill_pid_usb_asyncio(int sig, int errno, sigval_t addr, struct pid *,
    const struct cred *);
extern int kill_pgrp(struct pid *pid, int sig, int priv);
extern int kill_pid(struct pid *pid, int sig, int priv);
extern __attribute__((__warn_unused_result__)) bool do_notify_parent(struct task_struct *, int);
extern void __wake_up_parent(struct task_struct *p, struct task_struct *parent);
extern void force_sig(int);
extern void force_fatal_sig(int);
extern void force_exit_sig(int);
extern int send_sig(int, struct task_struct *, int);
extern int zap_other_threads(struct task_struct *p);
extern struct sigqueue *sigqueue_alloc(void);
extern void sigqueue_free(struct sigqueue *);
extern int send_sigqueue(struct sigqueue *, struct pid *, enum pid_type);
extern int do_sigaction(int, struct k_sigaction *, struct k_sigaction *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_notify_signal(void)
{
 clear_ti_thread_flag(__current_thread_info, 7);
 __asm__ __volatile__("": : :"memory");
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __set_notify_signal(struct task_struct *task)
{
 return !test_and_set_tsk_thread_flag(task, 7) &&
        !wake_up_state(task, 0x00000001);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_notify_signal(struct task_struct *task)
{
 if (__set_notify_signal(task))
  kick_process(task);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int restart_syscall(void)
{
 set_tsk_thread_flag((__current_thread_info->task), 2);
 return -513;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_sigpending(struct task_struct *p)
{
 return __builtin_expect(!!(test_tsk_thread_flag(p,2)), 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int signal_pending(struct task_struct *p)
{





 if (__builtin_expect(!!(test_tsk_thread_flag(p, 7)), 0))
  return 1;
 return task_sigpending(p);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __fatal_signal_pending(struct task_struct *p)
{
 return __builtin_expect(!!(sigismember(&p->pending.signal, 9)), 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fatal_signal_pending(struct task_struct *p)
{
 return task_sigpending(p) && __fatal_signal_pending(p);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int signal_pending_state(unsigned int state, struct task_struct *p)
{
 if (!(state & (0x00000001 | 0x00000100)))
  return 0;
 if (!signal_pending(p))
  return 0;

 return (state & 0x00000001) || __fatal_signal_pending(p);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fault_signal_pending(vm_fault_t fault_flags,
     struct pt_regs *regs)
{
 return __builtin_expect(!!((fault_flags & VM_FAULT_RETRY) && (fatal_signal_pending((__current_thread_info->task)) || ((((regs)->hvmer.vmest & (1 << 31)) != 0) && signal_pending((__current_thread_info->task))))), 0);


}







extern void recalc_sigpending(void);
extern void calculate_sigpending(void);

extern void signal_wake_up_state(struct task_struct *t, unsigned int state);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void signal_wake_up(struct task_struct *t, bool fatal)
{
 unsigned int state = 0;
 if (fatal && !(t->jobctl & (1UL << 24))) {
  t->jobctl &= ~((1UL << 26) | (1UL << 27));
  state = 0x00000100 | 0x00000008;
 }
 signal_wake_up_state(t, state);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptrace_signal_wake_up(struct task_struct *t, bool resume)
{
 unsigned int state = 0;
 if (resume) {
  t->jobctl &= ~(1UL << 27);
  state = 0x00000008;
 }
 signal_wake_up_state(t, state);
}

void task_join_group_stop(struct task_struct *task);
# 479 "../include/linux/sched/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_restore_sigmask(void)
{
 set_ti_thread_flag(__current_thread_info, 6);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_tsk_restore_sigmask(struct task_struct *task)
{
 clear_tsk_thread_flag(task, 6);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_restore_sigmask(void)
{
 clear_ti_thread_flag(__current_thread_info, 6);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool test_tsk_restore_sigmask(struct task_struct *task)
{
 return test_tsk_thread_flag(task, 6);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool test_restore_sigmask(void)
{
 return test_ti_thread_flag(__current_thread_info, 6);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool test_and_clear_restore_sigmask(void)
{
 return test_and_clear_ti_thread_flag(__current_thread_info, 6);
}
# 538 "../include/linux/sched/signal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void restore_saved_sigmask(void)
{
 if (test_and_clear_restore_sigmask())
  __set_current_blocked(&(__current_thread_info->task)->saved_sigmask);
}

extern int set_user_sigmask(const sigset_t *umask, size_t sigsetsize);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void restore_saved_sigmask_unless(bool interrupted)
{
 if (interrupted)
  ({ int __ret_warn_on = !!(!signal_pending((__current_thread_info->task))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/sched/signal.h", 549, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 else
  restore_saved_sigmask();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) sigset_t *sigmask_to_save(void)
{
 sigset_t *res = &(__current_thread_info->task)->blocked;
 if (__builtin_expect(!!(test_restore_sigmask()), 0))
  res = &(__current_thread_info->task)->saved_sigmask;
 return res;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kill_cad_pid(int sig, int priv)
{
 return kill_pid(cad_pid, sig, priv);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __on_sig_stack(unsigned long sp)
{




 return sp > (__current_thread_info->task)->sas_ss_sp &&
  sp - (__current_thread_info->task)->sas_ss_sp <= (__current_thread_info->task)->sas_ss_size;

}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int on_sig_stack(unsigned long sp)
{
# 596 "../include/linux/sched/signal.h"
 if ((__current_thread_info->task)->sas_ss_flags & (1U << 31))
  return 0;

 return __on_sig_stack(sp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sas_ss_flags(unsigned long sp)
{
 if (!(__current_thread_info->task)->sas_ss_size)
  return 2;

 return on_sig_stack(sp) ? 1 : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sas_ss_reset(struct task_struct *p)
{
 p->sas_ss_sp = 0;
 p->sas_ss_size = 0;
 p->sas_ss_flags = 2;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long sigsp(unsigned long sp, struct ksignal *ksig)
{
 if (__builtin_expect(!!((ksig->ka.sa.sa_flags & 0x08000000)), 0) && ! sas_ss_flags(sp))



  return (__current_thread_info->task)->sas_ss_sp + (__current_thread_info->task)->sas_ss_size;

 return sp;
}

extern void __cleanup_sighand(struct sighand_struct *);
extern void flush_itimer_signals(void);
# 640 "../include/linux/sched/signal.h"
extern bool current_is_single_threaded(void);
# 663 "../include/linux/sched/signal.h"
typedef int (*proc_visitor)(struct task_struct *p, void *data);
void walk_process_tree(struct task_struct *top, proc_visitor, void *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct pid *task_pid_type(struct task_struct *task, enum pid_type type)
{
 struct pid *pid;
 if (type == PIDTYPE_PID)
  pid = task_pid(task);
 else
  pid = task->signal->pids[type];
 return pid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid *task_tgid(struct task_struct *task)
{
 return task->signal->pids[PIDTYPE_TGID];
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid *task_pgrp(struct task_struct *task)
{
 return task->signal->pids[PIDTYPE_PGID];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pid *task_session(struct task_struct *task)
{
 return task->signal->pids[PIDTYPE_SID];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_nr_threads(struct task_struct *task)
{
 return task->signal->nr_threads;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool thread_group_leader(struct task_struct *p)
{
 return p->exit_signal >= 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
{
 return p1->signal == p2->signal;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *__next_thread(struct task_struct *p)
{
 return ({ struct list_head *__head = (&p->signal->thread_head); struct list_head *__ptr = (&p->thread_node); struct list_head *__next = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_215(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(__ptr->next) == sizeof(char) || sizeof(__ptr->next) == sizeof(short) || sizeof(__ptr->next) == sizeof(int) || sizeof(__ptr->next) == sizeof(long)) || sizeof(__ptr->next) == sizeof(long long))) __compiletime_assert_215(); } while (0); (*(const volatile typeof( _Generic((__ptr->next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (__ptr->next))) *)&(__ptr->next)); }); __builtin_expect(!!(__next != __head), 1) ? 
({ void *__mptr = (void *)(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_216(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(__next) == sizeof(char) || sizeof(__next) == sizeof(short) || sizeof(__next) == sizeof(int) || sizeof(__next) == sizeof(long)) || sizeof(__next) == sizeof(long long))) __compiletime_assert_216(); } while (0); (*(const volatile typeof( _Generic((__next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (__next))) *)&(__next)); })); _Static_assert(__builtin_types_compatible_p(typeof(*(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_216(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(__next) == sizeof(char) || sizeof(__next) == sizeof(short) || sizeof(__next) == sizeof(int) || sizeof(__next) == sizeof(long)) || sizeof(__next) == sizeof(long long))) __compiletime_assert_216(); } while (0); (*(const volatile typeof( _Generic((__next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (__next))) *)&(__next)); }))), typeof(((struct task_struct *)0)->thread_node)) || __builtin_types_compatible_p(typeof(*(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_216(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(__next) == sizeof(char) || 
sizeof(__next) == sizeof(short) || sizeof(__next) == sizeof(int) || sizeof(__next) == sizeof(long)) || sizeof(__next) == sizeof(long long))) __compiletime_assert_216(); } while (0); (*(const volatile typeof( _Generic((__next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (__next))) *)&(__next)); }))), typeof(void)), "pointer type mismatch in container_of()"); ((struct task_struct *)(__mptr - __builtin_offsetof(struct task_struct, thread_node))); }) : ((void *)0); });



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *next_thread(struct task_struct *p)
{
 return __next_thread(p) ?: p->group_leader;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int thread_group_empty(struct task_struct *p)
{
 return thread_group_leader(p) &&
        list_is_last(&p->thread_node, &p->signal->thread_head);
}




extern struct sighand_struct *__lock_task_sighand(struct task_struct *task,
       unsigned long *flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sighand_struct *lock_task_sighand(struct task_struct *task,
             unsigned long *flags)
{
 struct sighand_struct *ret;

 ret = __lock_task_sighand(task, flags);
 (void)(ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unlock_task_sighand(struct task_struct *task,
      unsigned long *flags)
{
 spin_unlock_irqrestore(&task->sighand->siglock, *flags);
}


extern void lockdep_assert_task_sighand_held(struct task_struct *task);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long task_rlimit(const struct task_struct *task,
  unsigned int limit)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_217(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(task->signal->rlim[limit].rlim_cur) == sizeof(char) || sizeof(task->signal->rlim[limit].rlim_cur) == sizeof(short) || sizeof(task->signal->rlim[limit].rlim_cur) == sizeof(int) || sizeof(task->signal->rlim[limit].rlim_cur) == sizeof(long)) || sizeof(task->signal->rlim[limit].rlim_cur) == sizeof(long long))) __compiletime_assert_217(); } while (0); (*(const volatile typeof( _Generic((task->signal->rlim[limit].rlim_cur), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (task->signal->rlim[limit].rlim_cur))) *)&(task->signal->rlim[limit].rlim_cur)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long task_rlimit_max(const struct task_struct *task,
  unsigned int limit)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_218(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(task->signal->rlim[limit].rlim_max) == sizeof(char) || sizeof(task->signal->rlim[limit].rlim_max) == sizeof(short) || sizeof(task->signal->rlim[limit].rlim_max) == sizeof(int) || sizeof(task->signal->rlim[limit].rlim_max) == sizeof(long)) || sizeof(task->signal->rlim[limit].rlim_max) == sizeof(long long))) __compiletime_assert_218(); } while (0); (*(const volatile typeof( _Generic((task->signal->rlim[limit].rlim_max), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (task->signal->rlim[limit].rlim_max))) *)&(task->signal->rlim[limit].rlim_max)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long rlimit(unsigned int limit)
{
 return task_rlimit((__current_thread_info->task), limit);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long rlimit_max(unsigned int limit)
{
 return task_rlimit_max((__current_thread_info->task), limit);
}
# 7 "../include/linux/rcuwait.h" 2
# 16 "../include/linux/rcuwait.h"
struct rcuwait {
 struct task_struct *task;
};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcuwait_init(struct rcuwait *w)
{
 w->task = ((void *)0);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcuwait_active(struct rcuwait *w)
{
 return !!({ typeof(*(w->task)) *__UNIQUE_ID_rcu219 = (typeof(*(w->task)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_220(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((w->task)) == sizeof(char) || sizeof((w->task)) == sizeof(short) || sizeof((w->task)) == sizeof(int) || sizeof((w->task)) == sizeof(long)) || sizeof((w->task)) == sizeof(long long))) __compiletime_assert_220(); } while (0); (*(const volatile typeof( _Generic(((w->task)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((w->task)))) *)&((w->task))); }); ; ((typeof(*(w->task)) *)(__UNIQUE_ID_rcu219)); });
}

extern int rcuwait_wake_up(struct rcuwait *w);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void prepare_to_rcuwait(struct rcuwait *w)
{
 do { uintptr_t _r_a_p__v = (uintptr_t)((__current_thread_info->task)); ; if (__builtin_constant_p((__current_thread_info->task)) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_221(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((w->task)) == sizeof(char) || sizeof((w->task)) == sizeof(short) || sizeof((w->task)) == sizeof(int) || sizeof((w->task)) == sizeof(long)) || sizeof((w->task)) == sizeof(long long))) __compiletime_assert_221(); } while (0); do { *(volatile typeof((w->task)) *)&((w->task)) = ((typeof(w->task))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_222(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&w->task) == sizeof(char) || sizeof(*&w->task) == sizeof(short) || sizeof(*&w->task) == sizeof(int) || sizeof(*&w->task) == sizeof(long)) || sizeof(*&w->task) == sizeof(long long))) __compiletime_assert_222(); } while (0); do { *(volatile typeof(*&w->task) *)&(*&w->task) = ((typeof(*((typeof(w->task))_r_a_p__v)) *)((typeof(w->task))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
}

extern void finish_rcuwait(struct rcuwait *w);
# 8 "../include/linux/percpu-rwsem.h" 2

# 1 "../include/linux/rcu_sync.h" 1
# 17 "../include/linux/rcu_sync.h"
struct rcu_sync {
 int gp_state;
 int gp_count;
 wait_queue_head_t gp_wait;

 struct callback_head cb_head;
};
# 32 "../include/linux/rcu_sync.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rcu_sync_is_idle(struct rcu_sync *rsp)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_read_lock_any_held()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcu_sync.h", 35, "suspicious rcu_sync_is_idle() usage"); } } while (0);

 return !({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_223(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(rsp->gp_state) == sizeof(char) || sizeof(rsp->gp_state) == sizeof(short) || sizeof(rsp->gp_state) == sizeof(int) || sizeof(rsp->gp_state) == sizeof(long)) || sizeof(rsp->gp_state) == sizeof(long long))) __compiletime_assert_223(); } while (0); (*(const volatile typeof( _Generic((rsp->gp_state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (rsp->gp_state))) *)&(rsp->gp_state)); });
}

extern void rcu_sync_init(struct rcu_sync *);
extern void rcu_sync_enter(struct rcu_sync *);
extern void rcu_sync_exit(struct rcu_sync *);
extern void rcu_sync_dtor(struct rcu_sync *);
# 10 "../include/linux/percpu-rwsem.h" 2


struct percpu_rw_semaphore {
 struct rcu_sync rss;
 unsigned int *read_count;
 struct rcuwait writer;
 wait_queue_head_t waiters;
 atomic_t block;

 struct lockdep_map dep_map;

};
# 45 "../include/linux/percpu-rwsem.h"
extern bool __percpu_down_read(struct percpu_rw_semaphore *, bool);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_down_read(struct percpu_rw_semaphore *sem)
{
 do { do { } while (0); } while (0);

 lock_acquire(&sem->dep_map, 0, 0, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0));

 __asm__ __volatile__("": : :"memory");
# 62 "../include/linux/percpu-rwsem.h"
 if (__builtin_expect(!!(rcu_sync_is_idle(&sem->rss)), 1))
  do { do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(*sem->read_count)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if 
(__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
 else
  __percpu_down_read(sem, false);




 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
{
 bool ret = true;

 __asm__ __volatile__("": : :"memory");



 if (__builtin_expect(!!(rcu_sync_is_idle(&sem->rss)), 1))
  do { do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(*sem->read_count)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if 
(__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
 else
  ret = __percpu_down_read(sem, true);
 __asm__ __volatile__("": : :"memory");





 if (ret)
  lock_acquire(&sem->dep_map, 0, 1, 1, 1, ((void *)0), (unsigned long)__builtin_return_address(0));

 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_up_read(struct percpu_rw_semaphore *sem)
{
 lock_release(&sem->dep_map, (unsigned long)__builtin_return_address(0));

 __asm__ __volatile__("": : :"memory");



 if (__builtin_expect(!!(rcu_sync_is_idle(&sem->rss)), 1)) {
  do { do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(*sem->read_count)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; 
(void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
 } else {




  __asm__ __volatile__("": : :"memory");





  do { do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(*sem->read_count)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; 
(void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sem->read_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sem->read_count))) *)(&(*sem->read_count)); }); }) += -(typeof(*sem->read_count))(1); } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
  rcuwait_wake_up(&sem->writer);
 }
 __asm__ __volatile__("": : :"memory");
}

extern bool percpu_is_read_locked(struct percpu_rw_semaphore *);
extern void percpu_down_write(struct percpu_rw_semaphore *);
extern void percpu_up_write(struct percpu_rw_semaphore *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_is_write_locked(struct percpu_rw_semaphore *sem)
{
 return atomic_read(&sem->block);
}

extern int __percpu_init_rwsem(struct percpu_rw_semaphore *,
    const char *, struct lock_class_key *);

extern void percpu_free_rwsem(struct percpu_rw_semaphore *);
# 147 "../include/linux/percpu-rwsem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_rwsem_release(struct percpu_rw_semaphore *sem,
     bool read, unsigned long ip)
{
 lock_release(&sem->dep_map, ip);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_rwsem_acquire(struct percpu_rw_semaphore *sem,
     bool read, unsigned long ip)
{
 lock_acquire(&sem->dep_map, 0, 1, read, 1, ((void *)0), ip);
}
# 34 "../include/linux/fs.h" 2

# 1 "../include/linux/delayed_call.h" 1
# 10 "../include/linux/delayed_call.h"
struct delayed_call {
 void (*fn)(void *);
 void *arg;
};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_delayed_call(struct delayed_call *call,
  void (*fn)(void *), void *arg)
{
 call->fn = fn;
 call->arg = arg;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_delayed_call(struct delayed_call *call)
{
 if (call->fn)
  call->fn(call->arg);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_delayed_call(struct delayed_call *call)
{
 call->fn = ((void *)0);
}
# 36 "../include/linux/fs.h" 2
# 1 "../include/linux/uuid.h" 1
# 15 "../include/linux/uuid.h"
typedef struct {
 __u8 b[16];
} guid_t;

typedef struct {
 __u8 b[16];
} uuid_t;
# 43 "../include/linux/uuid.h"
extern const guid_t guid_null;
extern const uuid_t uuid_null;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool guid_equal(const guid_t *u1, const guid_t *u2)
{
 return memcmp(u1, u2, sizeof(guid_t)) == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void guid_copy(guid_t *dst, const guid_t *src)
{
 memcpy(dst, src, sizeof(guid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void import_guid(guid_t *dst, const __u8 *src)
{
 memcpy(dst, src, sizeof(guid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void export_guid(__u8 *dst, const guid_t *src)
{
 memcpy(dst, src, sizeof(guid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool guid_is_null(const guid_t *guid)
{
 return guid_equal(guid, &guid_null);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uuid_equal(const uuid_t *u1, const uuid_t *u2)
{
 return memcmp(u1, u2, sizeof(uuid_t)) == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uuid_copy(uuid_t *dst, const uuid_t *src)
{
 memcpy(dst, src, sizeof(uuid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void import_uuid(uuid_t *dst, const __u8 *src)
{
 memcpy(dst, src, sizeof(uuid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void export_uuid(__u8 *dst, const uuid_t *src)
{
 memcpy(dst, src, sizeof(uuid_t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uuid_is_null(const uuid_t *uuid)
{
 return uuid_equal(uuid, &uuid_null);
}

void generate_random_uuid(unsigned char uuid[16]);
void generate_random_guid(unsigned char guid[16]);

extern void guid_gen(guid_t *u);
extern void uuid_gen(uuid_t *u);

bool __attribute__((__warn_unused_result__)) uuid_is_valid(const char *uuid);

extern const u8 guid_index[16];
extern const u8 uuid_index[16];

int guid_parse(const char *uuid, guid_t *u);
int uuid_parse(const char *uuid, uuid_t *u);
# 37 "../include/linux/fs.h" 2
# 1 "../include/linux/errseq.h" 1







typedef u32 errseq_t;

errseq_t errseq_set(errseq_t *eseq, int err);
errseq_t errseq_sample(errseq_t *eseq);
int errseq_check(errseq_t *eseq, errseq_t since);
int errseq_check_and_advance(errseq_t *eseq, errseq_t *since);
# 38 "../include/linux/fs.h" 2
# 1 "../include/linux/ioprio.h" 1





# 1 "../include/linux/sched/rt.h" 1






struct task_struct;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rt_prio(int prio)
{
 if (__builtin_expect(!!(prio < 100), 0))
  return 1;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rt_task(struct task_struct *p)
{
 return rt_prio(p->prio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_is_realtime(struct task_struct *tsk)
{
 int policy = tsk->policy;

 if (policy == 1 || policy == 2)
  return true;
 if (policy == 6)
  return true;
 return false;
}


extern void rt_mutex_pre_schedule(void);
extern void rt_mutex_schedule(void);
extern void rt_mutex_post_schedule(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *rt_mutex_get_top_task(struct task_struct *p)
{
 return p->pi_top_task;
}
extern void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task);
extern void rt_mutex_adjust_pi(struct task_struct *p);
# 54 "../include/linux/sched/rt.h"
extern void normalize_rt_tasks(void);
# 7 "../include/linux/ioprio.h" 2
# 1 "../include/linux/iocontext.h" 1








enum {
 ICQ_EXITED = 1 << 2,
 ICQ_DESTROYED = 1 << 3,
};
# 73 "../include/linux/iocontext.h"
struct io_cq {
 struct request_queue *q;
 struct io_context *ioc;







 union {
  struct list_head q_node;
  struct kmem_cache *__rcu_icq_cache;
 };
 union {
  struct hlist_node ioc_node;
  struct callback_head __rcu_head;
 };

 unsigned int flags;
};





struct io_context {
 atomic_long_t refcount;
 atomic_t active_ref;

 unsigned short ioprio;
# 115 "../include/linux/iocontext.h"
};

struct task_struct;
# 129 "../include/linux/iocontext.h"
struct io_context;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_io_context(struct io_context *ioc) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void exit_io_context(struct task_struct *task) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_io(unsigned long clone_flags, struct task_struct *tsk)
{
 return 0;
}
# 8 "../include/linux/ioprio.h" 2

# 1 "../include/uapi/linux/ioprio.h" 1
# 28 "../include/uapi/linux/ioprio.h"
enum {
 IOPRIO_CLASS_NONE = 0,
 IOPRIO_CLASS_RT = 1,
 IOPRIO_CLASS_BE = 2,
 IOPRIO_CLASS_IDLE = 3,


 IOPRIO_CLASS_INVALID = 7,
};
# 53 "../include/uapi/linux/ioprio.h"
enum {
 IOPRIO_WHO_PROCESS = 1,
 IOPRIO_WHO_PGRP,
 IOPRIO_WHO_USER,
};
# 83 "../include/uapi/linux/ioprio.h"
enum {

 IOPRIO_HINT_NONE = 0,
# 96 "../include/uapi/linux/ioprio.h"
 IOPRIO_HINT_DEV_DURATION_LIMIT_1 = 1,
 IOPRIO_HINT_DEV_DURATION_LIMIT_2 = 2,
 IOPRIO_HINT_DEV_DURATION_LIMIT_3 = 3,
 IOPRIO_HINT_DEV_DURATION_LIMIT_4 = 4,
 IOPRIO_HINT_DEV_DURATION_LIMIT_5 = 5,
 IOPRIO_HINT_DEV_DURATION_LIMIT_6 = 6,
 IOPRIO_HINT_DEV_DURATION_LIMIT_7 = 7,
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __u16 ioprio_value(int prioclass, int priolevel,
       int priohint)
{
 if (((prioclass) < 0 || (prioclass) >= (8)) ||
     ((priolevel) < 0 || (priolevel) >= ((1 << 3))) ||
     ((priohint) < 0 || (priohint) >= ((1 << 10))))
  return IOPRIO_CLASS_INVALID << 13;

 return (prioclass << 13) |
  (priohint << 3) | priolevel;
}
# 10 "../include/linux/ioprio.h" 2
# 19 "../include/linux/ioprio.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ioprio_valid(unsigned short ioprio)
{
 unsigned short class = (((ioprio) >> 13) & (8 - 1));

 return class > IOPRIO_CLASS_NONE && class <= IOPRIO_CLASS_IDLE;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_nice_ioprio(struct task_struct *task)
{
 return (task_nice(task) + 20) / 5;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_nice_ioclass(struct task_struct *task)
{
 if (task->policy == 5)
  return IOPRIO_CLASS_IDLE;
 else if (task_is_realtime(task))
  return IOPRIO_CLASS_RT;
 else
  return IOPRIO_CLASS_BE;
}
# 75 "../include/linux/ioprio.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __get_task_ioprio(struct task_struct *p)
{
 return ioprio_value(IOPRIO_CLASS_NONE, 0, IOPRIO_HINT_NONE);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int get_current_ioprio(void)
{
 return __get_task_ioprio((__current_thread_info->task));
}

extern int set_task_ioprio(struct task_struct *task, int ioprio);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ioprio_check_cap(int ioprio)
{
 return -15;
}
# 39 "../include/linux/fs.h" 2
# 1 "../include/linux/fs_types.h" 1
# 71 "../include/linux/fs_types.h"
extern unsigned char fs_ftype_to_dtype(unsigned int filetype);
extern unsigned char fs_umode_to_ftype(umode_t mode);
extern unsigned char fs_umode_to_dtype(umode_t mode);
# 40 "../include/linux/fs.h" 2


# 1 "../include/linux/mount.h" 1
# 14 "../include/linux/mount.h"
# 1 "./arch/hexagon/include/generated/asm/barrier.h" 1
# 15 "../include/linux/mount.h" 2

struct super_block;
struct dentry;
struct user_namespace;
struct mnt_idmap;
struct file_system_type;
struct fs_context;
struct file;
struct path;
# 69 "../include/linux/mount.h"
struct vfsmount {
 struct dentry *mnt_root;
 struct super_block *mnt_sb;
 int mnt_flags;
 struct mnt_idmap *mnt_idmap;
} ;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mnt_idmap *mnt_idmap(const struct vfsmount *mnt)
{

 return ({ typeof( _Generic((*&mnt->mnt_idmap), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&mnt->mnt_idmap))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_224(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&mnt->mnt_idmap) == sizeof(char) || sizeof(*&mnt->mnt_idmap) == sizeof(short) || sizeof(*&mnt->mnt_idmap) == sizeof(int) || sizeof(*&mnt->mnt_idmap) == sizeof(long)) || sizeof(*&mnt->mnt_idmap) == sizeof(long long))) __compiletime_assert_224(); } while (0); (*(const volatile typeof( _Generic((*&mnt->mnt_idmap), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&mnt->mnt_idmap))) *)&(*&mnt->mnt_idmap)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&mnt->mnt_idmap))___p1; });
}

extern int mnt_want_write(struct vfsmount *mnt);
extern int mnt_want_write_file(struct file *file);
extern void mnt_drop_write(struct vfsmount *mnt);
extern void mnt_drop_write_file(struct file *file);
extern void mntput(struct vfsmount *mnt);
extern struct vfsmount *mntget(struct vfsmount *mnt);
extern void mnt_make_shortterm(struct vfsmount *mnt);
extern struct vfsmount *mnt_clone_internal(const struct path *path);
extern bool __mnt_is_readonly(struct vfsmount *mnt);
extern bool mnt_may_suid(struct vfsmount *mnt);

extern struct vfsmount *clone_private_mount(const struct path *path);
int mnt_get_write_access(struct vfsmount *mnt);
void mnt_put_write_access(struct vfsmount *mnt);

extern struct vfsmount *fc_mount(struct fs_context *fc);
extern struct vfsmount *vfs_create_mount(struct fs_context *fc);
extern struct vfsmount *vfs_kern_mount(struct file_system_type *type,
          int flags, const char *name,
          void *data);
extern struct vfsmount *vfs_submount(const struct dentry *mountpoint,
         struct file_system_type *type,
         const char *name, void *data);

extern void mnt_set_expiry(struct vfsmount *mnt, struct list_head *expiry_list);
extern void mark_mounts_for_expiry(struct list_head *mounts);

extern bool path_is_mountpoint(const struct path *path);

extern bool our_mnt(struct vfsmount *mnt);

extern struct vfsmount *kern_mount(struct file_system_type *);
extern void kern_unmount(struct vfsmount *mnt);
extern int may_umount_tree(struct vfsmount *);
extern int may_umount(struct vfsmount *);
extern long do_mount(const char *, const char *,
       const char *, unsigned long, void *);
extern struct vfsmount *collect_mounts(const struct path *);
extern void drop_collected_mounts(struct vfsmount *);
extern int iterate_mounts(int (*)(struct vfsmount *, void *), void *,
     struct vfsmount *);
extern void kern_unmount_array(struct vfsmount *mnt[], unsigned int num);

extern int cifs_root_data(char **dev, char **opts);
# 43 "../include/linux/fs.h" 2

# 1 "../include/linux/mnt_idmapping.h" 1







struct mnt_idmap;
struct user_namespace;

extern struct mnt_idmap nop_mnt_idmap;
extern struct user_namespace init_user_ns;

typedef struct {
 uid_t val;
} vfsuid_t;

typedef struct {
 gid_t val;
} vfsgid_t;

_Static_assert(sizeof(vfsuid_t) == sizeof(kuid_t), "sizeof(vfsuid_t) == sizeof(kuid_t)");
_Static_assert(sizeof(vfsgid_t) == sizeof(kgid_t), "sizeof(vfsgid_t) == sizeof(kgid_t)");
_Static_assert(__builtin_offsetof(vfsuid_t, val) == __builtin_offsetof(kuid_t, val), "offsetof(vfsuid_t, val) == offsetof(kuid_t, val)");
_Static_assert(__builtin_offsetof(vfsgid_t, val) == __builtin_offsetof(kgid_t, val), "offsetof(vfsgid_t, val) == offsetof(kgid_t, val)");


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uid_t __vfsuid_val(vfsuid_t uid)
{
 return uid.val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gid_t __vfsgid_val(vfsgid_t gid)
{
 return gid.val;
}
# 49 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsuid_valid(vfsuid_t uid)
{
 return __vfsuid_val(uid) != (uid_t)-1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsgid_valid(vfsgid_t gid)
{
 return __vfsgid_val(gid) != (gid_t)-1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsuid_eq(vfsuid_t left, vfsuid_t right)
{
 return vfsuid_valid(left) && __vfsuid_val(left) == __vfsuid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsgid_eq(vfsgid_t left, vfsgid_t right)
{
 return vfsgid_valid(left) && __vfsgid_val(left) == __vfsgid_val(right);
}
# 79 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsuid_eq_kuid(vfsuid_t vfsuid, kuid_t kuid)
{
 return vfsuid_valid(vfsuid) && __vfsuid_val(vfsuid) == __kuid_val(kuid);
}
# 94 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsgid_eq_kgid(vfsgid_t vfsgid, kgid_t kgid)
{
 return vfsgid_valid(vfsgid) && __vfsgid_val(vfsgid) == __kgid_val(kgid);
}
# 116 "../include/linux/mnt_idmapping.h"
int vfsgid_in_group_p(vfsgid_t vfsgid);

struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap);
void mnt_idmap_put(struct mnt_idmap *idmap);

vfsuid_t make_vfsuid(struct mnt_idmap *idmap,
       struct user_namespace *fs_userns, kuid_t kuid);

vfsgid_t make_vfsgid(struct mnt_idmap *idmap,
       struct user_namespace *fs_userns, kgid_t kgid);

kuid_t from_vfsuid(struct mnt_idmap *idmap,
     struct user_namespace *fs_userns, vfsuid_t vfsuid);

kgid_t from_vfsgid(struct mnt_idmap *idmap,
     struct user_namespace *fs_userns, vfsgid_t vfsgid);
# 145 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsuid_has_fsmapping(struct mnt_idmap *idmap,
     struct user_namespace *fs_userns,
     vfsuid_t vfsuid)
{
 return uid_valid(from_vfsuid(idmap, fs_userns, vfsuid));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsuid_has_mapping(struct user_namespace *userns,
          vfsuid_t vfsuid)
{
 return from_kuid(userns, (kuid_t){ __vfsuid_val(vfsuid) }) != (uid_t)-1;
}
# 166 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kuid_t vfsuid_into_kuid(vfsuid_t vfsuid)
{
 return (kuid_t){ __vfsuid_val(vfsuid) };
}
# 183 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsgid_has_fsmapping(struct mnt_idmap *idmap,
     struct user_namespace *fs_userns,
     vfsgid_t vfsgid)
{
 return gid_valid(from_vfsgid(idmap, fs_userns, vfsgid));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfsgid_has_mapping(struct user_namespace *userns,
          vfsgid_t vfsgid)
{
 return from_kgid(userns, (kgid_t){ __vfsgid_val(vfsgid) }) != (gid_t)-1;
}
# 204 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kgid_t vfsgid_into_kgid(vfsgid_t vfsgid)
{
 return (kgid_t){ __vfsgid_val(vfsgid) };
}
# 222 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kuid_t mapped_fsuid(struct mnt_idmap *idmap,
      struct user_namespace *fs_userns)
{
 return from_vfsuid(idmap, fs_userns, (vfsuid_t){ __kuid_val((({ ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((1))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/mnt_idmapping.h", 225, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*((__current_thread_info->task)->cred)) *)(((__current_thread_info->task)->cred))); })->fsuid; }))) });
}
# 241 "../include/linux/mnt_idmapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kgid_t mapped_fsgid(struct mnt_idmap *idmap,
      struct user_namespace *fs_userns)
{
 return from_vfsgid(idmap, fs_userns, (vfsgid_t){ __kgid_val((({ ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((1))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/mnt_idmapping.h", 244, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*((__current_thread_info->task)->cred)) *)(((__current_thread_info->task)->cred))); })->fsgid; }))) });
}
# 45 "../include/linux/fs.h" 2
# 1 "../include/linux/slab.h" 1
# 20 "../include/linux/slab.h"
# 1 "../include/linux/percpu-refcount.h" 1
# 59 "../include/linux/percpu-refcount.h"
struct percpu_ref;
typedef void (percpu_ref_func_t)(struct percpu_ref *);


enum {
 __PERCPU_REF_ATOMIC = 1LU << 0,
 __PERCPU_REF_DEAD = 1LU << 1,
 __PERCPU_REF_ATOMIC_DEAD = __PERCPU_REF_ATOMIC | __PERCPU_REF_DEAD,

 __PERCPU_REF_FLAG_BITS = 2,
};


enum {







 PERCPU_REF_INIT_ATOMIC = 1 << 0,






 PERCPU_REF_INIT_DEAD = 1 << 1,




 PERCPU_REF_ALLOW_REINIT = 1 << 2,
};

struct percpu_ref_data {
 atomic_long_t count;
 percpu_ref_func_t *release;
 percpu_ref_func_t *confirm_switch;
 bool force_atomic:1;
 bool allow_reinit:1;
 struct callback_head rcu;
 struct percpu_ref *ref;
};

struct percpu_ref {




 unsigned long percpu_count_ptr;







 struct percpu_ref_data *data;
};

int __attribute__((__warn_unused_result__)) percpu_ref_init(struct percpu_ref *ref,
     percpu_ref_func_t *release, unsigned int flags,
     gfp_t gfp);
void percpu_ref_exit(struct percpu_ref *ref);
void percpu_ref_switch_to_atomic(struct percpu_ref *ref,
     percpu_ref_func_t *confirm_switch);
void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref);
void percpu_ref_switch_to_percpu(struct percpu_ref *ref);
void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
     percpu_ref_func_t *confirm_kill);
void percpu_ref_resurrect(struct percpu_ref *ref);
void percpu_ref_reinit(struct percpu_ref *ref);
bool percpu_ref_is_zero(struct percpu_ref *ref);
# 147 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_ref_kill(struct percpu_ref *ref)
{
 percpu_ref_kill_and_confirm(ref, ((void *)0));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __ref_is_percpu(struct percpu_ref *ref,
       unsigned long **percpu_countp)
{
 unsigned long percpu_ptr;
# 174 "../include/linux/percpu-refcount.h"
 percpu_ptr = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_225(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ref->percpu_count_ptr) == sizeof(char) || sizeof(ref->percpu_count_ptr) == sizeof(short) || sizeof(ref->percpu_count_ptr) == sizeof(int) || sizeof(ref->percpu_count_ptr) == sizeof(long)) || sizeof(ref->percpu_count_ptr) == sizeof(long long))) __compiletime_assert_225(); } while (0); (*(const volatile typeof( _Generic((ref->percpu_count_ptr), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ref->percpu_count_ptr))) *)&(ref->percpu_count_ptr)); });







 if (__builtin_expect(!!(percpu_ptr & __PERCPU_REF_ATOMIC_DEAD), 0))
  return false;

 *percpu_countp = (unsigned long *)percpu_ptr;
 return true;
}
# 198 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
{
 unsigned long *percpu_count;

 rcu_read_lock();

 if (__ref_is_percpu(ref, &percpu_count))
  this_cpu_add(*percpu_count, nr); /* expanded this_cpu_add(): per-size irq-save add, collapsed for readability */
 else
  atomic_long_add(nr, &ref->data->count);

 rcu_read_unlock();
}
# 220 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_ref_get(struct percpu_ref *ref)
{
 percpu_ref_get_many(ref, 1);
}
# 235 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_ref_tryget_many(struct percpu_ref *ref,
       unsigned long nr)
{
 unsigned long *percpu_count;
 bool ret;

 rcu_read_lock();

 if (__ref_is_percpu(ref, &percpu_count)) {
   this_cpu_add(*percpu_count, nr); /* expanded this_cpu_add(): per-size irq-save add, collapsed for readability */
  ret = true;
 } else {
  ret = atomic_long_add_unless(&ref->data->count, nr, 0);
 }

 rcu_read_unlock();

 return ret;
}
# 264 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_ref_tryget(struct percpu_ref *ref)
{
 return percpu_ref_tryget_many(ref, 1);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
 unsigned long *percpu_count;
 bool ret = false;

 WARN_ON_ONCE(!rcu_read_lock_held()); /* expanded once-only warn at percpu-refcount.h:280, collapsed for readability */

 if (__builtin_expect(!!(__ref_is_percpu(ref, &percpu_count)), 1)) {
   this_cpu_inc(*percpu_count); /* expanded this_cpu_inc(): per-size irq-save increment, collapsed for readability */
  ret = true;
 } else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
  ret = atomic_long_inc_not_zero(&ref->data->count);
 }
 return ret;
}
# 306 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
 bool ret = false;

 rcu_read_lock();
 ret = percpu_ref_tryget_live_rcu(ref);
 rcu_read_unlock();
 return ret;
}
# 326 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
{
 unsigned long *percpu_count;

 rcu_read_lock();

 if (__ref_is_percpu(ref, &percpu_count))
  this_cpu_sub(*percpu_count, nr); /* expanded this_cpu_sub(): per-size irq-save subtract, collapsed for readability */
 else if (__builtin_expect(!!(atomic_long_sub_and_test(nr, &ref->data->count)), 0))
  ref->data->release(ref);

 rcu_read_unlock();
}
# 349 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void percpu_ref_put(struct percpu_ref *ref)
{
 percpu_ref_put_many(ref, 1);
}
# 363 "../include/linux/percpu-refcount.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool percpu_ref_is_dying(struct percpu_ref *ref)
{
 return ref->percpu_count_ptr & __PERCPU_REF_DEAD;
}
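The percpu_ref inlines above all share one shape: under RCU, the fast path bumps a per-CPU counter, and when the ref has been switched to atomic mode the slow path falls back to a shared count whose final put fires a release callback. A minimal userspace sketch of just that atomic slow path (hypothetical `mini_ref` names, C11 atomics, no per-CPU fast path or RCU):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for the atomic slow path of struct percpu_ref. */
struct mini_ref {
	atomic_long count;
	bool released;
};

static void mini_ref_release(struct mini_ref *ref)
{
	ref->released = true;	/* stands in for ref->data->release(ref) */
}

static void mini_ref_get(struct mini_ref *ref)
{
	atomic_fetch_add(&ref->count, 1);
}

/* Like percpu_ref_tryget(): succeeds only while the count is still live. */
static bool mini_ref_tryget(struct mini_ref *ref)
{
	long old = atomic_load(&ref->count);

	while (old != 0)
		if (atomic_compare_exchange_weak(&ref->count, &old, old + 1))
			return true;
	return false;		/* count hit zero: the ref is dead */
}

/* Like percpu_ref_put(): the final put fires the release callback. */
static void mini_ref_put(struct mini_ref *ref)
{
	if (atomic_fetch_sub(&ref->count, 1) == 1)
		mini_ref_release(ref);
}

static bool mini_ref_demo(void)
{
	struct mini_ref ref = { .count = 1, .released = false };

	mini_ref_get(&ref);		/* count: 2 */
	mini_ref_put(&ref);		/* count: 1, release not yet fired */
	if (ref.released)
		return false;
	mini_ref_put(&ref);		/* final put -> release fires */
	if (!ref.released)
		return false;
	return !mini_ref_tryget(&ref);	/* dead ref: tryget must fail */
}
```

The kernel versions add the per-CPU fast path precisely so that get/put in the common case touch only a local counter and never this shared cache line.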
# 21 "../include/linux/slab.h" 2



enum _slab_flag_bits {
 _SLAB_CONSISTENCY_CHECKS,
 _SLAB_RED_ZONE,
 _SLAB_POISON,
 _SLAB_KMALLOC,
 _SLAB_HWCACHE_ALIGN,
 _SLAB_CACHE_DMA,
 _SLAB_CACHE_DMA32,
 _SLAB_STORE_USER,
 _SLAB_PANIC,
 _SLAB_TYPESAFE_BY_RCU,
 _SLAB_TRACE,

 _SLAB_DEBUG_OBJECTS,

 _SLAB_NOLEAKTRACE,
 _SLAB_NO_MERGE,
# 50 "../include/linux/slab.h"
 _SLAB_NO_USER_FLAGS,






 _SLAB_OBJECT_POISON,
 _SLAB_CMPXCHG_DOUBLE,



 _SLAB_FLAGS_LAST_BIT
};
# 228 "../include/linux/slab.h"
# 1 "../include/linux/kasan.h" 1





# 1 "../include/linux/kasan-enabled.h" 1




# 1 "../include/linux/static_key.h" 1
# 6 "../include/linux/kasan-enabled.h" 2
# 23 "../include/linux/kasan-enabled.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_enabled(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_hw_tags_enabled(void)
{
 return false;
}
# 7 "../include/linux/kasan.h" 2
# 1 "../include/linux/kasan-tags.h" 1
# 8 "../include/linux/kasan.h" 2

# 1 "../include/linux/static_key.h" 1
# 10 "../include/linux/kasan.h" 2


struct kmem_cache;
struct page;
struct slab;
struct vm_struct;
struct task_struct;
# 25 "../include/linux/kasan.h"
typedef unsigned int kasan_vmalloc_flags_t;
# 77 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kasan_add_zero_shadow(void *start, unsigned long size)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_remove_zero_shadow(void *start,
     unsigned long size)
{}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_enable_current(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_disable_current(void) {}
# 96 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_has_integrated_init(void)
{
 return kasan_hw_tags_enabled();
}
# 356 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_unpoison_range(const void *address, size_t size) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_poison_pages(struct page *page, unsigned int order,
          bool init) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_unpoison_pages(struct page *page, unsigned int order,
     bool init)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_poison_slab(struct slab *slab) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_unpoison_new_object(struct kmem_cache *cache,
     void *object) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_poison_new_object(struct kmem_cache *cache,
     void *object) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_init_slab_obj(struct kmem_cache *cache,
    const void *object)
{
 return (void *)object;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_kfree_large(void *ptr) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_slab_alloc(struct kmem_cache *s, void *object,
       gfp_t flags, bool init)
{
 return object;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_kmalloc(struct kmem_cache *s, const void *object,
    size_t size, gfp_t flags)
{
 return (void *)object;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
{
 return (void *)ptr;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_krealloc(const void *object, size_t new_size,
     gfp_t flags)
{
 return (void *)object;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_mempool_poison_pages(struct page *page, unsigned int order)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_mempool_unpoison_pages(struct page *page, unsigned int order) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_mempool_poison_object(void *ptr)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_mempool_unpoison_object(void *ptr, size_t size) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kasan_check_byte(const void *address)
{
 return true;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_unpoison_task_stack(struct task_struct *task) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_unpoison_task_stack_below(const void *watermark) {}
# 443 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t kasan_metadata_size(struct kmem_cache *cache,
      bool in_object)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_cache_create(struct kmem_cache *cache,
          unsigned int *size,
          slab_flags_t *flags) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_cache_shrink(struct kmem_cache *cache) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_cache_shutdown(struct kmem_cache *cache) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_record_aux_stack(void *ptr) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_record_aux_stack_noalloc(void *ptr) {}
# 479 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_reset_tag(const void *addr)
{
 return (void *)addr;
}
# 495 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_init_sw_tags(void) { }






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_init_hw_tags_cpu(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_init_hw_tags(void) { }
# 554 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_populate_early_vm_area_shadow(void *start,
             unsigned long size) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kasan_populate_vmalloc(unsigned long start,
     unsigned long size)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_release_vmalloc(unsigned long start,
      unsigned long end,
      unsigned long free_region_start,
      unsigned long free_region_end) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kasan_unpoison_vmalloc(const void *start,
        unsigned long size,
        kasan_vmalloc_flags_t flags)
{
 return (void *)start;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }
# 590 "../include/linux/kasan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_free_module_shadow(const struct vm_struct *vm) {}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kasan_non_canonical_hook(unsigned long addr) { }
# 229 "../include/linux/slab.h" 2

struct list_lru;
struct mem_cgroup;



bool slab_is_available(void);

struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
   unsigned int align, slab_flags_t flags,
   void (*ctor)(void *));
struct kmem_cache *kmem_cache_create_usercopy(const char *name,
   unsigned int size, unsigned int align,
   slab_flags_t flags,
   unsigned int useroffset, unsigned int usersize,
   void (*ctor)(void *));
void kmem_cache_destroy(struct kmem_cache *s);
int kmem_cache_shrink(struct kmem_cache *s);
# 274 "../include/linux/slab.h"
void * __attribute__((__warn_unused_result__)) krealloc_noprof(const void *objp, size_t new_size,
        gfp_t flags) __attribute__((__alloc_size__(2)));


void kfree(const void *objp);
void kfree_sensitive(const void *objp);
size_t __ksize(const void *objp);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_kfree(void *p) { void * _T = *(void * *)p; if (!IS_ERR_OR_NULL(_T)) kfree(_T); }
# 296 "../include/linux/slab.h"
size_t ksize(const void *objp);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kmem_dump_obj(void *object) { return false; }
# 337 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int arch_slab_minalign(void)
{
 return __alignof__(unsigned long long);
}
# 405 "../include/linux/slab.h"
enum kmalloc_cache_type {
 KMALLOC_NORMAL = 0,

 KMALLOC_DMA = KMALLOC_NORMAL,


 KMALLOC_CGROUP = KMALLOC_NORMAL,

 KMALLOC_RANDOM_START = KMALLOC_NORMAL,
 KMALLOC_RANDOM_END = KMALLOC_RANDOM_START + 0,

 KMALLOC_RECLAIM = KMALLOC_NORMAL,
# 426 "../include/linux/slab.h"
 NR_KMALLOC_TYPES
};

typedef struct kmem_cache * kmem_buckets[(14 + 1) + 1];

extern kmem_buckets kmalloc_caches[NR_KMALLOC_TYPES];
# 441 "../include/linux/slab.h"
extern unsigned long random_kmalloc_seed;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) enum kmalloc_cache_type kmalloc_type(gfp_t flags, unsigned long caller)
{




 if (__builtin_expect(!!((flags & ((( gfp_t)((((1UL))) << (___GFP_RECLAIMABLE_BIT))) | (0 ? (( gfp_t)((((1UL))) << (___GFP_DMA_BIT))) : 0) | (0 ? (( gfp_t)((((1UL))) << (___GFP_ACCOUNT_BIT))) : 0))) == 0), 1))





  return KMALLOC_NORMAL;
# 465 "../include/linux/slab.h"
 if (0 && (flags & (( gfp_t)((((1UL))) << (___GFP_DMA_BIT)))))
  return KMALLOC_DMA;
 if (!0 || (flags & (( gfp_t)((((1UL))) << (___GFP_RECLAIMABLE_BIT)))))
  return KMALLOC_RECLAIM;
 else
  return KMALLOC_CGROUP;
}
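In this configuration (no DMA, cgroup, or randomized kmalloc caches compiled in, so all the enum aliases equal KMALLOC_NORMAL) the expanded kmalloc_type() above reduces to a two-way decision on the GFP bits. A hedged userspace restatement of that decision table — the flag values here are illustrative, not the kernel's real GFP bit layout:

```c
#include <assert.h>

/* Illustrative GFP bits; the real values live in include/linux/gfp_types.h. */
#define GFP_RECLAIMABLE	0x1u
#define GFP_DMA		0x2u
#define GFP_ACCOUNT	0x4u

enum cache_type { CACHE_NORMAL, CACHE_RECLAIM };

/* Mirrors the expansion above: plain allocations (no special bits) take the
 * likely() fast path to KMALLOC_NORMAL; with the DMA and CGROUP caches
 * compiled out, everything else lands on the reclaim alias (the `!0 ||` in
 * the expanded source makes that branch unconditional). */
static enum cache_type pick_cache(unsigned int flags)
{
	if ((flags & (GFP_RECLAIMABLE | GFP_DMA | GFP_ACCOUNT)) == 0)
		return CACHE_NORMAL;
	return CACHE_RECLAIM;
}
```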
# 486 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned int __kmalloc_index(size_t size,
          bool size_is_constant)
{
 if (!size)
  return 0;

 if (size <= (1 << (5)))
  return ( __builtin_constant_p((1 << (5))) ? (((1 << (5))) < 2 ? 0 : 63 - __builtin_clzll((1 << (5)))) : (sizeof((1 << (5))) <= 4) ? __ilog2_u32((1 << (5))) : __ilog2_u64((1 << (5))) );

 if ((1 << (5)) <= 32 && size > 64 && size <= 96)
  return 1;
 if ((1 << (5)) <= 64 && size > 128 && size <= 192)
  return 2;
 if (size <= 8) return 3;
 if (size <= 16) return 4;
 if (size <= 32) return 5;
 if (size <= 64) return 6;
 if (size <= 128) return 7;
 if (size <= 256) return 8;
 if (size <= 512) return 9;
 if (size <= 1024) return 10;
 if (size <= 2 * 1024) return 11;
 if (size <= 4 * 1024) return 12;
 if (size <= 8 * 1024) return 13;
 if (size <= 16 * 1024) return 14;
 if (size <= 32 * 1024) return 15;
 if (size <= 64 * 1024) return 16;
 if (size <= 128 * 1024) return 17;
 if (size <= 256 * 1024) return 18;
 if (size <= 512 * 1024) return 19;
 if (size <= 1024 * 1024) return 20;
 if (size <= 2 * 1024 * 1024) return 21;

 if (!0 && size_is_constant)
  BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()"); /* expanded compile-time assert, collapsed for readability */
 else
  BUG(); /* expanded BUG() at slab.h:522, collapsed for readability */


 return -1;
}
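__kmalloc_index() above maps a request size to a cache slot: everything up to the 32-byte minimum cache shares index 5, the historical odd-sized 96- and 192-byte caches sit at indices 1 and 2, and the rest are powers of two up to 2 MiB. A standalone sketch of the same mapping (userspace, with the sizes hard-coded as in the expansion, i.e. KMALLOC_MIN_SIZE of 32):

```c
#include <stddef.h>

/* Same table as the expanded __kmalloc_index(): 0 means "no allocation",
 * 1 and 2 are the odd-sized 96/192-byte caches, 5..21 are powers of two
 * (the dead `size <= 8 ... size <= 32` arms above are shadowed by the
 * up-front minimum-size check). */
static int kmalloc_index_sketch(size_t size)
{
	if (!size)
		return 0;
	if (size <= 32)			/* min cache is 2^5 in this config */
		return 5;
	if (size > 64 && size <= 96)
		return 1;
	if (size > 128 && size <= 192)
		return 2;
	for (int i = 6; i <= 21; i++)	/* 64 bytes .. 2 MiB */
		if (size <= ((size_t)1 << i))
			return i;
	return -1;			/* caller must use the large-alloc path */
}
```

kmalloc_noprof() below never reaches the failure arm for constant sizes: it diverts anything above the page-allocator cutoff to __kmalloc_large_noprof() first.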
_Static_assert(14 <= 20, "PAGE_SHIFT <= 20");
# 542 "../include/linux/slab.h"
void *kmem_cache_alloc_noprof(struct kmem_cache *cachep,
         gfp_t flags) __attribute__((__assume_aligned__(__alignof__(unsigned long long)))) __attribute__((__malloc__));


void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
       gfp_t gfpflags) __attribute__((__assume_aligned__(__alignof__(unsigned long long)))) __attribute__((__malloc__));


void kmem_cache_free(struct kmem_cache *s, void *objp);

kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
      unsigned int useroffset, unsigned int usersize,
      void (*ctor)(void *));
# 563 "../include/linux/slab.h"
void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);

int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, void **p);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void kfree_bulk(size_t size, void **p)
{
 kmem_cache_free_bulk(((void *)0), size, p);
}

void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
       int node) __attribute__((__assume_aligned__(__alignof__(unsigned long long)))) __attribute__((__malloc__));
# 598 "../include/linux/slab.h"
void *__kmalloc_noprof(size_t size, gfp_t flags)
    __attribute__((__assume_aligned__((1 << (5))))) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));

void *__kmalloc_node_noprof(size_t (size), gfp_t flags, int node)
    __attribute__((__assume_aligned__((1 << (5))))) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));

void *__kmalloc_cache_noprof(struct kmem_cache *s, gfp_t flags, size_t size)
    __attribute__((__assume_aligned__((1 << (5))))) __attribute__((__alloc_size__(3))) __attribute__((__malloc__));

void *__kmalloc_cache_node_noprof(struct kmem_cache *s, gfp_t gfpflags,
      int node, size_t size)
    __attribute__((__assume_aligned__((1 << (5))))) __attribute__((__alloc_size__(4))) __attribute__((__malloc__));

void *__kmalloc_large_noprof(size_t size, gfp_t flags)
    __attribute__((__assume_aligned__((1UL << 14)))) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));

void *__kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
    __attribute__((__assume_aligned__((1UL << 14)))) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));
# 672 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__alloc_size__(1))) __attribute__((__malloc__)) void *kmalloc_noprof(size_t size, gfp_t flags)
{
 if (__builtin_constant_p(size) && size) {
  unsigned int index;

  if (size > (1UL << (14 + 1)))
   return __kmalloc_large_noprof(size, flags);

  index = __kmalloc_index(size, true);
  return __kmalloc_cache_noprof(
    kmalloc_caches[kmalloc_type(flags, (unsigned long)__builtin_return_address(0))][index],
    flags, size);
 }
 return __kmalloc_noprof(size, flags);
}
# 695 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__alloc_size__(1))) __attribute__((__malloc__)) void *kmalloc_node_noprof(size_t size, gfp_t flags, int node)
{
 if (__builtin_constant_p(size) && size) {
  unsigned int index;

  if (size > (1UL << (14 + 1)))
   return __kmalloc_large_node_noprof(size, flags, node);

  index = __kmalloc_index(size, true);
  return __kmalloc_cache_node_noprof(
    kmalloc_caches[kmalloc_type(flags, (unsigned long)__builtin_return_address(0))][index],
    flags, node, size);
 }
 return __kmalloc_node_noprof((size), flags, node);
}
# 718 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(1, 2))) __attribute__((__malloc__)) void *kmalloc_array_noprof(size_t n, size_t size, gfp_t flags)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(n, size, &bytes))), 0))
  return ((void *)0);
 if (__builtin_constant_p(n) && __builtin_constant_p(size))
  return kmalloc_noprof(bytes, flags);
 return kmalloc_noprof(bytes, flags);
}
# 737 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(2, 3))) void * __attribute__((__warn_unused_result__)) krealloc_array_noprof(void *p,
               size_t new_n,
               size_t new_size,
               gfp_t flags)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(new_n, new_size, &bytes))), 0))
  return ((void *)0);

 return krealloc_noprof(p, bytes, flags);
}
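Both array helpers above refuse the allocation outright when `n * size` overflows, rather than allocating a silently truncated buffer. The same guard in plain userspace C, using the compiler builtin that the kernel's check_mul_overflow() wraps (the `calloc_like` name is hypothetical):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical userspace analogue of kmalloc_array(): return NULL when the
 * element count times element size would wrap around size_t, instead of
 * handing back a too-small buffer. */
static void *calloc_like(size_t n, size_t size)
{
	size_t bytes;

	if (__builtin_mul_overflow(n, size, &bytes))
		return NULL;		/* n * size does not fit in size_t */
	return malloc(bytes);
}
```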
# 759 "../include/linux/slab.h"
void *__kmalloc_node_track_caller_noprof(size_t (size), gfp_t flags, int node,
      unsigned long caller) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));
# 779 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(1, 2))) __attribute__((__malloc__)) void *kmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags,
         int node)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(n, size, &bytes))), 0))
  return ((void *)0);
 if (__builtin_constant_p(n) && __builtin_constant_p(size))
  return kmalloc_node_noprof(bytes, flags, node);
 return __kmalloc_node_noprof((bytes), flags, node);
}
# 805 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(1))) __attribute__((__malloc__)) void *kzalloc_noprof(size_t size, gfp_t flags)
{
 return kmalloc_noprof(size, flags | (( gfp_t)((((1UL))) << (___GFP_ZERO_BIT))));
}



void *__kvmalloc_node_noprof(size_t (size), gfp_t flags, int node) __attribute__((__alloc_size__(1))) __attribute__((__malloc__));
# 825 "../include/linux/slab.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__alloc_size__(1, 2))) __attribute__((__malloc__)) void *
kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
{
 size_t bytes;

 if (__builtin_expect(!!(__must_check_overflow(__builtin_mul_overflow(n, size, &bytes))), 0))
  return ((void *)0);

 return __kvmalloc_node_noprof((bytes), flags, node);
}
# 844 "../include/linux/slab.h"
extern void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
        __attribute__((__alloc_size__(3)));


extern void kvfree(const void *addr);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_kvfree(void *p) { void * _T = *(void * *)p; if (!IS_ERR_OR_NULL(_T)) kvfree(_T); }

extern void kvfree_sensitive(const void *addr, size_t len);

unsigned int kmem_cache_size(struct kmem_cache *s);
# 869 "../include/linux/slab.h"
size_t kmalloc_size_roundup(size_t size);

void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) kmem_cache_init_late(void);
# 46 "../include/linux/fs.h" 2

# 1 "../include/linux/rw_hint.h" 1
# 10 "../include/linux/rw_hint.h"
enum rw_hint {
 WRITE_LIFE_NOT_SET = 0,
 WRITE_LIFE_NONE = 1,
 WRITE_LIFE_SHORT = 2,
 WRITE_LIFE_MEDIUM = 3,
 WRITE_LIFE_LONG = 4,
 WRITE_LIFE_EXTREME = 5,
} __attribute__((__packed__));



_Static_assert(sizeof(enum rw_hint) == 1, "sizeof(enum rw_hint) == 1");
# 48 "../include/linux/fs.h" 2


# 1 "../include/uapi/linux/fs.h" 1
# 54 "../include/uapi/linux/fs.h"
struct file_clone_range {
 __s64 src_fd;
 __u64 src_offset;
 __u64 src_length;
 __u64 dest_offset;
};

struct fstrim_range {
 __u64 start;
 __u64 len;
 __u64 minlen;
};
# 75 "../include/uapi/linux/fs.h"
struct fsuuid2 {
 __u8 len;
 __u8 uuid[16];
};

struct fs_sysfs_path {
 __u8 len;
 __u8 name[128];
};






struct file_dedupe_range_info {
 __s64 dest_fd;
 __u64 dest_offset;
 __u64 bytes_deduped;






 __s32 status;
 __u32 reserved;
};


struct file_dedupe_range {
 __u64 src_offset;
 __u64 src_length;
 __u16 dest_count;
 __u16 reserved1;
 __u32 reserved2;
 struct file_dedupe_range_info info[];
};


struct files_stat_struct {
 unsigned long nr_files;
 unsigned long nr_free_files;
 unsigned long max_files;
};

struct inodes_stat_t {
 long nr_inodes;
 long nr_unused;
 long dummy[5];
};







struct fsxattr {
 __u32 fsx_xflags;
 __u32 fsx_extsize;
 __u32 fsx_nextents;
 __u32 fsx_projid;
 __u32 fsx_cowextsize;
 unsigned char fsx_pad[8];
};
# 312 "../include/uapi/linux/fs.h"
typedef int __kernel_rwf_t;
# 360 "../include/uapi/linux/fs.h"
struct page_region {
 __u64 start;
 __u64 end;
 __u64 categories;
};
# 386 "../include/uapi/linux/fs.h"
struct pm_scan_arg {
 __u64 size;
 __u64 flags;
 __u64 start;
 __u64 end;
 __u64 walk_end;
 __u64 vec;
 __u64 vec_len;
 __u64 max_pages;
 __u64 category_inverted;
 __u64 category_mask;
 __u64 category_anyof_mask;
 __u64 return_mask;
};




enum procmap_query_flags {
# 418 "../include/uapi/linux/fs.h"
 PROCMAP_QUERY_VMA_READABLE = 0x01,
 PROCMAP_QUERY_VMA_WRITABLE = 0x02,
 PROCMAP_QUERY_VMA_EXECUTABLE = 0x04,
 PROCMAP_QUERY_VMA_SHARED = 0x08,
# 434 "../include/uapi/linux/fs.h"
 PROCMAP_QUERY_COVERING_OR_NEXT_VMA = 0x10,
 PROCMAP_QUERY_FILE_BACKED_VMA = 0x20,
};
# 461 "../include/uapi/linux/fs.h"
struct procmap_query {

 __u64 size;






 __u64 query_flags;







 __u64 query_addr;

 __u64 vma_start;
 __u64 vma_end;

 __u64 vma_flags;

 __u64 vma_page_size;





 __u64 vma_offset;

 __u64 inode;

 __u32 dev_major;
 __u32 dev_minor;
# 516 "../include/uapi/linux/fs.h"
 __u32 vma_name_size;
# 536 "../include/uapi/linux/fs.h"
 __u32 build_id_size;







 __u64 vma_name_addr;







 __u64 build_id_addr;
};
# 51 "../include/linux/fs.h" 2

struct backing_dev_info;
struct bdi_writeback;
struct bio;
struct io_comp_batch;
struct export_operations;
struct fiemap_extent_info;
struct hd_geometry;
struct iovec;
struct kiocb;
struct kobject;
struct pipe_inode_info;
struct poll_table_struct;
struct kstatfs;
struct vm_area_struct;
struct vfsmount;
struct cred;
struct swap_info_struct;
struct seq_file;
struct workqueue_struct;
struct iov_iter;
struct fscrypt_inode_info;
struct fscrypt_operations;
struct fsverity_info;
struct fsverity_operations;
struct fsnotify_mark_connector;
struct fsnotify_sb_info;
struct fs_context;
struct fs_parameter_spec;
struct fileattr;
struct iomap_ops;

extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) inode_init(void);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) inode_init_early(void);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) files_init(void);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) files_maxfiles_init(void);

extern unsigned long get_max_files(void);
extern unsigned int sysctl_nr_open;

typedef __kernel_rwf_t rwf_t;

struct buffer_head;
typedef int (get_block_t)(struct inode *inode, sector_t iblock,
   struct buffer_head *bh_result, int create);
typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
   ssize_t bytes, void *private);
# 230 "../include/linux/fs.h"
struct iattr {
 unsigned int ia_valid;
 umode_t ia_mode;
# 245 "../include/linux/fs.h"
 union {
  kuid_t ia_uid;
  vfsuid_t ia_vfsuid;
 };
 union {
  kgid_t ia_gid;
  vfsgid_t ia_vfsgid;
 };
 loff_t ia_size;
 struct timespec64 ia_atime;
 struct timespec64 ia_mtime;
 struct timespec64 ia_ctime;






 struct file *ia_file;
};




# 1 "../include/linux/quota.h" 1
# 42 "../include/linux/quota.h"
# 1 "../include/uapi/linux/dqblk_xfs.h" 1
# 53 "../include/uapi/linux/dqblk_xfs.h"
typedef struct fs_disk_quota {
 __s8 d_version;
 __s8 d_flags;
 __u16 d_fieldmask;
 __u32 d_id;
 __u64 d_blk_hardlimit;
 __u64 d_blk_softlimit;
 __u64 d_ino_hardlimit;
 __u64 d_ino_softlimit;
 __u64 d_bcount;
 __u64 d_icount;
 __s32 d_itimer;


 __s32 d_btimer;
 __u16 d_iwarns;
 __u16 d_bwarns;
 __s8 d_itimer_hi;
 __s8 d_btimer_hi;
 __s8 d_rtbtimer_hi;
 __s8 d_padding2;
 __u64 d_rtb_hardlimit;
 __u64 d_rtb_softlimit;
 __u64 d_rtbcount;
 __s32 d_rtbtimer;
 __u16 d_rtbwarns;
 __s16 d_padding3;
 char d_padding4[8];
} fs_disk_quota_t;
# 159 "../include/uapi/linux/dqblk_xfs.h"
typedef struct fs_qfilestat {
 __u64 qfs_ino;
 __u64 qfs_nblks;
 __u32 qfs_nextents;
} fs_qfilestat_t;

typedef struct fs_quota_stat {
 __s8 qs_version;
 __u16 qs_flags;
 __s8 qs_pad;
 fs_qfilestat_t qs_uquota;
 fs_qfilestat_t qs_gquota;
 __u32 qs_incoredqs;
 __s32 qs_btimelimit;
 __s32 qs_itimelimit;
 __s32 qs_rtbtimelimit;
 __u16 qs_bwarnlimit;
 __u16 qs_iwarnlimit;
} fs_quota_stat_t;
# 202 "../include/uapi/linux/dqblk_xfs.h"
struct fs_qfilestatv {
 __u64 qfs_ino;
 __u64 qfs_nblks;
 __u32 qfs_nextents;
 __u32 qfs_pad;
};

struct fs_quota_statv {
 __s8 qs_version;
 __u8 qs_pad1;
 __u16 qs_flags;
 __u32 qs_incoredqs;
 struct fs_qfilestatv qs_uquota;
 struct fs_qfilestatv qs_gquota;
 struct fs_qfilestatv qs_pquota;
 __s32 qs_btimelimit;
 __s32 qs_itimelimit;
 __s32 qs_rtbtimelimit;
 __u16 qs_bwarnlimit;
 __u16 qs_iwarnlimit;
 __u16 qs_rtbwarnlimit;
 __u16 qs_pad3;
 __u32 qs_pad4;
 __u64 qs_pad2[7];
};
# 43 "../include/linux/quota.h" 2
# 1 "../include/linux/dqblk_v1.h" 1
# 44 "../include/linux/quota.h" 2
# 1 "../include/linux/dqblk_v2.h" 1








# 1 "../include/linux/dqblk_qtree.h" 1
# 18 "../include/linux/dqblk_qtree.h"
struct dquot;
struct kqid;


struct qtree_fmt_operations {
 void (*mem2disk_dqblk)(void *disk, struct dquot *dquot);
 void (*disk2mem_dqblk)(struct dquot *dquot, void *disk);
 int (*is_id)(void *disk, struct dquot *dquot);
};


struct qtree_mem_dqinfo {
 struct super_block *dqi_sb;
 int dqi_type;
 unsigned int dqi_blocks;
 unsigned int dqi_free_blk;
 unsigned int dqi_free_entry;
 unsigned int dqi_blocksize_bits;
 unsigned int dqi_entry_size;
 unsigned int dqi_usable_bs;
 unsigned int dqi_qtree_depth;
 const struct qtree_fmt_operations *dqi_ops;
};

int qtree_write_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot);
int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot);
int qtree_delete_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot);
int qtree_release_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot);
int qtree_entry_unused(struct qtree_mem_dqinfo *info, char *disk);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int qtree_depth(struct qtree_mem_dqinfo *info)
{
 unsigned int epb = info->dqi_usable_bs >> 2;
 unsigned long long entries = epb;
 int i;

 for (i = 1; entries < (1ULL << 32); i++)
  entries *= epb;
 return i;
}
int qtree_get_next_id(struct qtree_mem_dqinfo *info, struct kqid *qid);
# 10 "../include/linux/dqblk_v2.h" 2
# 45 "../include/linux/quota.h" 2



# 1 "../include/linux/projid.h" 1
# 17 "../include/linux/projid.h"
struct user_namespace;
extern struct user_namespace init_user_ns;

typedef __kernel_uid32_t projid_t;

typedef struct {
 projid_t val;
} kprojid_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) projid_t __kprojid_val(kprojid_t projid)
{
 return projid.val;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool projid_eq(kprojid_t left, kprojid_t right)
{
 return __kprojid_val(left) == __kprojid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool projid_lt(kprojid_t left, kprojid_t right)
{
 return __kprojid_val(left) < __kprojid_val(right);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool projid_valid(kprojid_t projid)
{
 return !projid_eq(projid, (kprojid_t){ -1 });
}
# 65 "../include/linux/projid.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) kprojid_t make_kprojid(struct user_namespace *from, projid_t projid)
{
 return (kprojid_t){ projid };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) projid_t from_kprojid(struct user_namespace *to, kprojid_t kprojid)
{
 return __kprojid_val(kprojid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) projid_t from_kprojid_munged(struct user_namespace *to, kprojid_t kprojid)
{
 projid_t projid = from_kprojid(to, kprojid);
 if (projid == (projid_t)-1)
  projid = 65534;
 return projid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kprojid_has_mapping(struct user_namespace *ns, kprojid_t projid)
{
 return true;
}
# 49 "../include/linux/quota.h" 2
# 1 "../include/uapi/linux/quota.h" 1
# 91 "../include/uapi/linux/quota.h"
enum {
 QIF_BLIMITS_B = 0,
 QIF_SPACE_B,
 QIF_ILIMITS_B,
 QIF_INODES_B,
 QIF_BTIME_B,
 QIF_ITIME_B,
};
# 111 "../include/uapi/linux/quota.h"
struct if_dqblk {
 __u64 dqb_bhardlimit;
 __u64 dqb_bsoftlimit;
 __u64 dqb_curspace;
 __u64 dqb_ihardlimit;
 __u64 dqb_isoftlimit;
 __u64 dqb_curinodes;
 __u64 dqb_btime;
 __u64 dqb_itime;
 __u32 dqb_valid;
};

struct if_nextdqblk {
 __u64 dqb_bhardlimit;
 __u64 dqb_bsoftlimit;
 __u64 dqb_curspace;
 __u64 dqb_ihardlimit;
 __u64 dqb_isoftlimit;
 __u64 dqb_curinodes;
 __u64 dqb_btime;
 __u64 dqb_itime;
 __u32 dqb_valid;
 __u32 dqb_id;
};
# 145 "../include/uapi/linux/quota.h"
enum {
 DQF_ROOT_SQUASH_B = 0,
 DQF_SYS_FILE_B = 16,

 DQF_PRIVATE
};






struct if_dqinfo {
 __u64 dqi_bgrace;
 __u64 dqi_igrace;
 __u32 dqi_flags;
 __u32 dqi_valid;
};
# 179 "../include/uapi/linux/quota.h"
enum {
 QUOTA_NL_C_UNSPEC,
 QUOTA_NL_C_WARNING,
 __QUOTA_NL_C_MAX,
};


enum {
 QUOTA_NL_A_UNSPEC,
 QUOTA_NL_A_QTYPE,
 QUOTA_NL_A_EXCESS_ID,
 QUOTA_NL_A_WARNING,
 QUOTA_NL_A_DEV_MAJOR,
 QUOTA_NL_A_DEV_MINOR,
 QUOTA_NL_A_CAUSED_ID,
 QUOTA_NL_A_PAD,
 __QUOTA_NL_A_MAX,
};
# 50 "../include/linux/quota.h" 2




enum quota_type {
 USRQUOTA = 0,
 GRPQUOTA = 1,
 PRJQUOTA = 2,
};






typedef __kernel_uid32_t qid_t;
typedef long long qsize_t;

struct kqid {
 union {
  kuid_t uid;
  kgid_t gid;
  kprojid_t projid;
 };
 enum quota_type type;
};

extern bool qid_eq(struct kqid left, struct kqid right);
extern bool qid_lt(struct kqid left, struct kqid right);
extern qid_t from_kqid(struct user_namespace *to, struct kqid qid);
extern qid_t from_kqid_munged(struct user_namespace *to, struct kqid qid);
extern bool qid_valid(struct kqid qid);
# 97 "../include/linux/quota.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kqid make_kqid(struct user_namespace *from,
        enum quota_type type, qid_t qid)
{
 struct kqid kqid;

 kqid.type = type;
 switch (type) {
 case USRQUOTA:
  kqid.uid = make_kuid(from, qid);
  break;
 case GRPQUOTA:
  kqid.gid = make_kgid(from, qid);
  break;
 case PRJQUOTA:
  kqid.projid = make_kprojid(from, qid);
  break;
 default:
  do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/quota.h", 114, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 }
 return kqid;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kqid make_kqid_invalid(enum quota_type type)
{
 struct kqid kqid;

 kqid.type = type;
 switch (type) {
 case USRQUOTA:
  kqid.uid = (kuid_t){ -1 };
  break;
 case GRPQUOTA:
  kqid.gid = (kgid_t){ -1 };
  break;
 case PRJQUOTA:
  kqid.projid = (kprojid_t){ -1 };
  break;
 default:
  do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/quota.h", 141, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 }
 return kqid;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kqid make_kqid_uid(kuid_t uid)
{
 struct kqid kqid;
 kqid.type = USRQUOTA;
 kqid.uid = uid;
 return kqid;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kqid make_kqid_gid(kgid_t gid)
{
 struct kqid kqid;
 kqid.type = GRPQUOTA;
 kqid.gid = gid;
 return kqid;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct kqid make_kqid_projid(kprojid_t projid)
{
 struct kqid kqid;
 kqid.type = PRJQUOTA;
 kqid.projid = projid;
 return kqid;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool qid_has_mapping(struct user_namespace *ns, struct kqid qid)
{
 return from_kqid(ns, qid) != (qid_t) -1;
}


extern spinlock_t dq_data_lock;
# 205 "../include/linux/quota.h"
struct mem_dqblk {
 qsize_t dqb_bhardlimit;
 qsize_t dqb_bsoftlimit;
 qsize_t dqb_curspace;
 qsize_t dqb_rsvspace;
 qsize_t dqb_ihardlimit;
 qsize_t dqb_isoftlimit;
 qsize_t dqb_curinodes;
 time64_t dqb_btime;
 time64_t dqb_itime;
};




struct quota_format_type;

struct mem_dqinfo {
 struct quota_format_type *dqi_format;
 int dqi_fmt_id;

 struct list_head dqi_dirty_list;
 unsigned long dqi_flags;
 unsigned int dqi_bgrace;
 unsigned int dqi_igrace;
 qsize_t dqi_max_spc_limit;
 qsize_t dqi_max_ino_limit;
 void *dqi_priv;
};

struct super_block;






enum {
 DQF_INFO_DIRTY_B = DQF_PRIVATE,
};


extern void mark_info_dirty(struct super_block *sb, int type);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int info_dirty(struct mem_dqinfo *info)
{
 return ((__builtin_constant_p(DQF_INFO_DIRTY_B) && __builtin_constant_p((uintptr_t)(&info->dqi_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&info->dqi_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&info->dqi_flags))) ? const_test_bit(DQF_INFO_DIRTY_B, &info->dqi_flags) : arch_test_bit(DQF_INFO_DIRTY_B, &info->dqi_flags));
}

enum {
 DQST_LOOKUPS,
 DQST_DROPS,
 DQST_READS,
 DQST_WRITES,
 DQST_CACHE_HITS,
 DQST_ALLOC_DQUOTS,
 DQST_FREE_DQUOTS,
 DQST_SYNCS,
 _DQST_DQSTAT_LAST
};

struct dqstats {
 unsigned long stat[_DQST_DQSTAT_LAST];
 struct percpu_counter counter[_DQST_DQSTAT_LAST];
};

extern struct dqstats dqstats;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dqstats_inc(unsigned int type)
{
 percpu_counter_inc(&dqstats.counter[type]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dqstats_dec(unsigned int type)
{
 percpu_counter_dec(&dqstats.counter[type]);
}
# 296 "../include/linux/quota.h"
struct dquot {
 struct hlist_node dq_hash;
 struct list_head dq_inuse;
 struct list_head dq_free;
 struct list_head dq_dirty;
 struct mutex dq_lock;
 spinlock_t dq_dqb_lock;
 atomic_t dq_count;
 struct super_block *dq_sb;
 struct kqid dq_id;
 loff_t dq_off;
 unsigned long dq_flags;
 struct mem_dqblk dq_dqb;
};


struct quota_format_ops {
 int (*check_quota_file)(struct super_block *sb, int type);
 int (*read_file_info)(struct super_block *sb, int type);
 int (*write_file_info)(struct super_block *sb, int type);
 int (*free_file_info)(struct super_block *sb, int type);
 int (*read_dqblk)(struct dquot *dquot);
 int (*commit_dqblk)(struct dquot *dquot);
 int (*release_dqblk)(struct dquot *dquot);
 int (*get_next_id)(struct super_block *sb, struct kqid *qid);
};


struct dquot_operations {
 int (*write_dquot) (struct dquot *);
 struct dquot *(*alloc_dquot)(struct super_block *, int);
 void (*destroy_dquot)(struct dquot *);
 int (*acquire_dquot) (struct dquot *);
 int (*release_dquot) (struct dquot *);
 int (*mark_dirty) (struct dquot *);
 int (*write_info) (struct super_block *, int);


 qsize_t *(*get_reserved_space) (struct inode *);
 int (*get_projid) (struct inode *, kprojid_t *);

 int (*get_inode_usage) (struct inode *, qsize_t *);

 int (*get_next_id) (struct super_block *sb, struct kqid *qid);
};

struct path;


struct qc_dqblk {
 int d_fieldmask;
 u64 d_spc_hardlimit;
 u64 d_spc_softlimit;
 u64 d_ino_hardlimit;
 u64 d_ino_softlimit;
 u64 d_space;
 u64 d_ino_count;
 s64 d_ino_timer;

 s64 d_spc_timer;
 int d_ino_warns;
 int d_spc_warns;
 u64 d_rt_spc_hardlimit;
 u64 d_rt_spc_softlimit;
 u64 d_rt_space;
 s64 d_rt_spc_timer;
 int d_rt_spc_warns;
};
# 397 "../include/linux/quota.h"
struct qc_type_state {
 unsigned int flags;
 unsigned int spc_timelimit;

 unsigned int ino_timelimit;
 unsigned int rt_spc_timelimit;
 unsigned int spc_warnlimit;
 unsigned int ino_warnlimit;
 unsigned int rt_spc_warnlimit;
 unsigned long long ino;
 blkcnt_t blocks;
 blkcnt_t nextents;
};

struct qc_state {
 unsigned int s_incoredqs;
 struct qc_type_state s_state[3];
};


struct qc_info {
 int i_fieldmask;
 unsigned int i_flags;
 unsigned int i_spc_timelimit;

 unsigned int i_ino_timelimit;
 unsigned int i_rt_spc_timelimit;
 unsigned int i_spc_warnlimit;
 unsigned int i_ino_warnlimit;
 unsigned int i_rt_spc_warnlimit;
};


struct quotactl_ops {
 int (*quota_on)(struct super_block *, int, int, const struct path *);
 int (*quota_off)(struct super_block *, int);
 int (*quota_enable)(struct super_block *, unsigned int);
 int (*quota_disable)(struct super_block *, unsigned int);
 int (*quota_sync)(struct super_block *, int);
 int (*set_info)(struct super_block *, int, struct qc_info *);
 int (*get_dqblk)(struct super_block *, struct kqid, struct qc_dqblk *);
 int (*get_nextdqblk)(struct super_block *, struct kqid *,
        struct qc_dqblk *);
 int (*set_dqblk)(struct super_block *, struct kqid, struct qc_dqblk *);
 int (*get_state)(struct super_block *, struct qc_state *);
 int (*rm_xquota)(struct super_block *, unsigned int);
};

struct quota_format_type {
 int qf_fmt_id;
 const struct quota_format_ops *qf_ops;
 struct module *qf_owner;
 struct quota_format_type *qf_next;
};
# 466 "../include/linux/quota.h"
enum {
 _DQUOT_USAGE_ENABLED = 0,
 _DQUOT_LIMITS_ENABLED,
 _DQUOT_SUSPENDED,


 _DQUOT_STATE_FLAGS
};
# 493 "../include/linux/quota.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int dquot_state_flag(unsigned int flags, int type)
{
 return flags << type;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int dquot_generic_flag(unsigned int flags, int type)
{
 return (flags >> type) & ((1 << _DQUOT_USAGE_ENABLED * 3) | (1 << _DQUOT_LIMITS_ENABLED * 3) | (1 << _DQUOT_SUSPENDED * 3));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned dquot_state_types(unsigned flags, unsigned flag)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_227(void) __attribute__((__error__("BUILD_BUG_ON failed: " "(flag) == 0 || (((flag) & ((flag) - 1)) != 0)"))); if (!(!((flag) == 0 || (((flag) & ((flag) - 1)) != 0)))) __compiletime_assert_227(); } while (0);
 return (flags / flag) & ((1 << 3) - 1);
}


extern void quota_send_warning(struct kqid qid, dev_t dev,
          const char warntype);
# 521 "../include/linux/quota.h"
struct quota_info {
 unsigned int flags;
 struct rw_semaphore dqio_sem;
 struct inode *files[3];
 struct mem_dqinfo info[3];
 const struct quota_format_ops *ops[3];
};

int register_quota_format(struct quota_format_type *fmt);
void unregister_quota_format(struct quota_format_type *fmt);

struct quota_module_name {
 int qm_fmt_id;
 char *qm_mod_name;
};
# 270 "../include/linux/fs.h" 2
# 303 "../include/linux/fs.h"
enum positive_aop_returns {
 AOP_WRITEPAGE_ACTIVATE = 0x80000,
 AOP_TRUNCATED_PAGE = 0x80001,
};




struct page;
struct address_space;
struct writeback_control;
struct readahead_control;
# 366 "../include/linux/fs.h"
struct kiocb {
 struct file *ki_filp;
 loff_t ki_pos;
 void (*ki_complete)(struct kiocb *iocb, long ret);
 void *private;
 int ki_flags;
 u16 ki_ioprio;
 union {





  struct wait_page_queue *ki_waitq;
# 388 "../include/linux/fs.h"
  ssize_t (*dio_complete)(void *data);
 };
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_sync_kiocb(struct kiocb *kiocb)
{
 return kiocb->ki_complete == ((void *)0);
}

struct address_space_operations {
 int (*writepage)(struct page *page, struct writeback_control *wbc);
 int (*read_folio)(struct file *, struct folio *);


 int (*writepages)(struct address_space *, struct writeback_control *);


 bool (*dirty_folio)(struct address_space *, struct folio *);

 void (*readahead)(struct readahead_control *);

 int (*write_begin)(struct file *, struct address_space *mapping,
    loff_t pos, unsigned len,
    struct page **pagep, void **fsdata);
 int (*write_end)(struct file *, struct address_space *mapping,
    loff_t pos, unsigned len, unsigned copied,
    struct page *page, void *fsdata);


 sector_t (*bmap)(struct address_space *, sector_t);
 void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
 bool (*release_folio)(struct folio *, gfp_t);
 void (*free_folio)(struct folio *folio);
 ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);




 int (*migrate_folio)(struct address_space *, struct folio *dst,
   struct folio *src, enum migrate_mode);
 int (*launder_folio)(struct folio *);
 bool (*is_partially_uptodate) (struct folio *, size_t from,
   size_t count);
 void (*is_dirty_writeback) (struct folio *, bool *dirty, bool *wb);
 int (*error_remove_folio)(struct address_space *, struct folio *);


 int (*swap_activate)(struct swap_info_struct *sis, struct file *file,
    sector_t *span);
 void (*swap_deactivate)(struct file *file);
 int (*swap_rw)(struct kiocb *iocb, struct iov_iter *iter);
};

extern const struct address_space_operations empty_aops;
# 465 "../include/linux/fs.h"
struct address_space {
 struct inode *host;
 struct xarray i_pages;
 struct rw_semaphore invalidate_lock;
 gfp_t gfp_mask;
 atomic_t i_mmap_writable;




 struct rb_root_cached i_mmap;
 unsigned long nrpages;
 unsigned long writeback_index;
 const struct address_space_operations *a_ops;
 unsigned long flags;
 errseq_t wb_err;
 spinlock_t i_private_lock;
 struct list_head i_private_list;
 struct rw_semaphore i_mmap_rwsem;
 void * i_private_data;
} __attribute__((aligned(sizeof(long)))) ;
# 500 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mapping_tagged(struct address_space *mapping, xa_mark_t tag)
{
 return xa_marked(&mapping->i_pages, tag);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_lock_write(struct address_space *mapping)
{
 down_write(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int i_mmap_trylock_write(struct address_space *mapping)
{
 return down_write_trylock(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_unlock_write(struct address_space *mapping)
{
 up_write(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int i_mmap_trylock_read(struct address_space *mapping)
{
 return down_read_trylock(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_lock_read(struct address_space *mapping)
{
 down_read(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_unlock_read(struct address_space *mapping)
{
 up_read(&mapping->i_mmap_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_assert_locked(struct address_space *mapping)
{
 do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held(&(&mapping->i_mmap_rwsem)->dep_map) != 0)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/fs.h", 537, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_mmap_assert_write_locked(struct address_space *mapping)
{
 do { ({ int __ret_warn_on = !!(debug_locks && !(lock_is_held_type(&(&mapping->i_mmap_rwsem)->dep_map, (0)))); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/fs.h", 542, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } while (0);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mapping_mapped(struct address_space *mapping)
{
 return !(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_228(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((&mapping->i_mmap.rb_root)->rb_node) == sizeof(char) || sizeof((&mapping->i_mmap.rb_root)->rb_node) == sizeof(short) || sizeof((&mapping->i_mmap.rb_root)->rb_node) == sizeof(int) || sizeof((&mapping->i_mmap.rb_root)->rb_node) == sizeof(long)) || sizeof((&mapping->i_mmap.rb_root)->rb_node) == sizeof(long long))) __compiletime_assert_228(); } while (0); (*(const volatile typeof( _Generic(((&mapping->i_mmap.rb_root)->rb_node), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((&mapping->i_mmap.rb_root)->rb_node))) *)&((&mapping->i_mmap.rb_root)->rb_node)); }) == ((void *)0));
}
# 562 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mapping_writably_mapped(struct address_space *mapping)
{
 return atomic_read(&mapping->i_mmap_writable) > 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mapping_map_writable(struct address_space *mapping)
{
 return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
  0 : -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mapping_unmap_writable(struct address_space *mapping)
{
 atomic_dec(&mapping->i_mmap_writable);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mapping_deny_writable(struct address_space *mapping)
{
 return atomic_dec_unless_positive(&mapping->i_mmap_writable) ?
  0 : -16;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mapping_allow_writable(struct address_space *mapping)
{
 atomic_inc(&mapping->i_mmap_writable);
}
# 600 "../include/linux/fs.h"
struct posix_acl;
# 609 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct posix_acl *
uncached_acl_sentinel(struct task_struct *task)
{
 return (void *)task + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
is_uncached_acl(struct posix_acl *acl)
{
 return (long)acl & 1;
}
# 632 "../include/linux/fs.h"
struct inode {
 umode_t i_mode;
 unsigned short i_opflags;
 kuid_t i_uid;
 kgid_t i_gid;
 unsigned int i_flags;


 struct posix_acl *i_acl;
 struct posix_acl *i_default_acl;


 const struct inode_operations *i_op;
 struct super_block *i_sb;
 struct address_space *i_mapping;






 unsigned long i_ino;







 union {
  const unsigned int i_nlink;
  unsigned int __i_nlink;
 };
 dev_t i_rdev;
 loff_t i_size;
 time64_t i_atime_sec;
 time64_t i_mtime_sec;
 time64_t i_ctime_sec;
 u32 i_atime_nsec;
 u32 i_mtime_nsec;
 u32 i_ctime_nsec;
 u32 i_generation;
 spinlock_t i_lock;
 unsigned short i_bytes;
 u8 i_blkbits;
 enum rw_hint i_write_hint;
 blkcnt_t i_blocks;






 unsigned long i_state;
 struct rw_semaphore i_rwsem;

 unsigned long dirtied_when;
 unsigned long dirtied_time_when;

 struct hlist_node i_hash;
 struct list_head i_io_list;
# 701 "../include/linux/fs.h"
 struct list_head i_lru;
 struct list_head i_sb_list;
 struct list_head i_wb_list;
 union {
  struct hlist_head i_dentry;
  struct callback_head i_rcu;
 };
 atomic64_t i_version;
 atomic64_t i_sequence;
 atomic_t i_count;
 atomic_t i_dio_count;
 atomic_t i_writecount;

 atomic_t i_readcount;

 union {
  const struct file_operations *i_fop;
  void (*free_inode)(struct inode *);
 };
 struct file_lock_context *i_flctx;
 struct address_space i_data;
 struct list_head i_devices;
 union {
  struct pipe_inode_info *i_pipe;
  struct cdev *i_cdev;
  char *i_link;
  unsigned i_dir_seq;
 };



 __u32 i_fsnotify_mask;

 struct fsnotify_mark_connector *i_fsnotify_marks;



 struct fscrypt_inode_info *i_crypt_info;






 void *i_private;
} ;

struct timespec64 timestamp_truncate(struct timespec64 t, struct inode *inode);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int i_blocksize(const struct inode *node)
{
 return (1 << node->i_blkbits);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inode_unhashed(struct inode *inode)
{
 return hlist_unhashed(&inode->i_hash);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_fake_hash(struct inode *inode)
{
 hlist_add_fake(&inode->i_hash);
}
# 787 "../include/linux/fs.h"
enum inode_i_mutex_lock_class
{
 I_MUTEX_NORMAL,
 I_MUTEX_PARENT,
 I_MUTEX_CHILD,
 I_MUTEX_XATTR,
 I_MUTEX_NONDIR2,
 I_MUTEX_PARENT2,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_lock(struct inode *inode)
{
 down_write(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_unlock(struct inode *inode)
{
 up_write(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_lock_shared(struct inode *inode)
{
 down_read(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_unlock_shared(struct inode *inode)
{
 up_read(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inode_trylock(struct inode *inode)
{
 return down_write_trylock(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inode_trylock_shared(struct inode *inode)
{
 return down_read_trylock(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inode_is_locked(struct inode *inode)
{
 return rwsem_is_locked(&inode->i_rwsem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_lock_nested(struct inode *inode, unsigned subclass)
{
 down_write_nested(&inode->i_rwsem, subclass);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_lock_shared_nested(struct inode *inode, unsigned subclass)
{
 down_read_nested(&inode->i_rwsem, subclass);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void filemap_invalidate_lock(struct address_space *mapping)
{
 down_write(&mapping->invalidate_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void filemap_invalidate_unlock(struct address_space *mapping)
{
 up_write(&mapping->invalidate_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void filemap_invalidate_lock_shared(struct address_space *mapping)
{
 down_read(&mapping->invalidate_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int filemap_invalidate_trylock_shared(
     struct address_space *mapping)
{
 return down_read_trylock(&mapping->invalidate_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void filemap_invalidate_unlock_shared(
     struct address_space *mapping)
{
 up_read(&mapping->invalidate_lock);
}

void lock_two_nondirectories(struct inode *, struct inode*);
void unlock_two_nondirectories(struct inode *, struct inode*);

void filemap_invalidate_lock_two(struct address_space *mapping1,
     struct address_space *mapping2);
void filemap_invalidate_unlock_two(struct address_space *mapping1,
       struct address_space *mapping2);
# 888 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) loff_t i_size_read(const struct inode *inode)
{
# 908 "../include/linux/fs.h"
 return ({ typeof( _Generic((*&inode->i_size), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&inode->i_size))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_229(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&inode->i_size) == sizeof(char) || sizeof(*&inode->i_size) == sizeof(short) || sizeof(*&inode->i_size) == sizeof(int) || sizeof(*&inode->i_size) == sizeof(long)) || sizeof(*&inode->i_size) == sizeof(long long))) __compiletime_assert_229(); } while (0); (*(const volatile typeof( _Generic((*&inode->i_size), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&inode->i_size))) *)&(*&inode->i_size)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&inode->i_size))___p1; });

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_size_write(struct inode *inode, loff_t i_size)
{
# 935 "../include/linux/fs.h"
 do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_230(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&inode->i_size) == sizeof(char) || sizeof(*&inode->i_size) == sizeof(short) || sizeof(*&inode->i_size) == sizeof(int) || sizeof(*&inode->i_size) == sizeof(long)) || sizeof(*&inode->i_size) == sizeof(long long))) __compiletime_assert_230(); } while (0); do { *(volatile typeof(*&inode->i_size) *)&(*&inode->i_size) = (i_size); } while (0); } while (0); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned iminor(const struct inode *inode)
{
 return ((unsigned int) ((inode->i_rdev) & ((1U << 20) - 1)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned imajor(const struct inode *inode)
{
 return ((unsigned int) ((inode->i_rdev) >> 20));
}

struct fown_struct {
 rwlock_t lock;
 struct pid *pid;
 enum pid_type pid_type;
 kuid_t uid, euid;
 int signum;
};
# 971 "../include/linux/fs.h"
struct file_ra_state {
 unsigned long start;
 unsigned int size;
 unsigned int async_size;
 unsigned int ra_pages;
 unsigned int mmap_miss;
 loff_t prev_pos;
};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ra_has_index(struct file_ra_state *ra, unsigned long index)
{
 return (index >= ra->start &&
  index < ra->start + ra->size);
}







struct file {
 union {

  struct callback_head f_task_work;

  struct llist_node f_llist;
  unsigned int f_iocb_flags;
 };





 spinlock_t f_lock;
 fmode_t f_mode;
 atomic_long_t f_count;
 struct mutex f_pos_lock;
 loff_t f_pos;
 unsigned int f_flags;
 struct fown_struct f_owner;
 const struct cred *f_cred;
 struct file_ra_state f_ra;
 struct path f_path;
 struct inode *f_inode;
 const struct file_operations * f_op;

 u64 f_version;




 void *private_data;



 struct hlist_head *f_ep;

 struct address_space *f_mapping;
 errseq_t f_wb_err;
 errseq_t f_sb_err;
}
  __attribute__((aligned(4)));

struct file_handle {
 __u32 handle_bytes;
 int handle_type;

 unsigned char f_handle[] __attribute__((__counted_by__(handle_bytes)));
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct file *get_file(struct file *f)
{
 long prior = atomic_long_fetch_inc_relaxed(&f->f_count);
 ({ bool __ret_do_once = !!(!prior); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/fs.h", 1048, 9, "struct file::f_count incremented from zero; use-after-free condition present!\n"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return f;
}

struct file *get_file_rcu(struct file **f);
struct file *get_file_active(struct file **f);
# 1068 "../include/linux/fs.h"
typedef void *fl_owner_t;

struct file_lock;
struct file_lease;







extern void send_sigio(struct fown_struct *fown, int fd, int band);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *file_inode(const struct file *f)
{
 return f->f_inode;
}
# 1094 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dentry *file_dentry(const struct file *file)
{
 struct dentry *dentry = file->f_path.dentry;

 ({ bool __ret_do_once = !!(d_inode(dentry) != file_inode(file)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/fs.h", 1098, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return dentry;
}

struct fasync_struct {
 rwlock_t fa_lock;
 int magic;
 int fa_fd;
 struct fasync_struct *fa_next;
 struct file *fa_file;
 struct callback_head fa_rcu;
};




extern int fasync_helper(int, struct file *, int, struct fasync_struct **);
extern struct fasync_struct *fasync_insert_entry(int, struct file *, struct fasync_struct **, struct fasync_struct *);
extern int fasync_remove_entry(struct file *, struct fasync_struct **);
extern struct fasync_struct *fasync_alloc(void);
extern void fasync_free(struct fasync_struct *);


extern void kill_fasync(struct fasync_struct **, int, int);

extern void __f_setown(struct file *filp, struct pid *, enum pid_type, int force);
extern int f_setown(struct file *filp, int who, int force);
extern void f_delown(struct file *filp);
extern pid_t f_getown(struct file *filp);
extern int send_sigurg(struct fown_struct *fown);
# 1194 "../include/linux/fs.h"
enum {
 SB_UNFROZEN = 0,
 SB_FREEZE_WRITE = 1,
 SB_FREEZE_PAGEFAULT = 2,
 SB_FREEZE_FS = 3,

 SB_FREEZE_COMPLETE = 4,
};



struct sb_writers {
 unsigned short frozen;
 int freeze_kcount;
 int freeze_ucount;
 struct percpu_rw_semaphore rw_sem[(SB_FREEZE_COMPLETE - 1)];
};

struct super_block {
 struct list_head s_list;
 dev_t s_dev;
 unsigned char s_blocksize_bits;
 unsigned long s_blocksize;
 loff_t s_maxbytes;
 struct file_system_type *s_type;
 const struct super_operations *s_op;
 const struct dquot_operations *dq_op;
 const struct quotactl_ops *s_qcop;
 const struct export_operations *s_export_op;
 unsigned long s_flags;
 unsigned long s_iflags;
 unsigned long s_magic;
 struct dentry *s_root;
 struct rw_semaphore s_umount;
 int s_count;
 atomic_t s_active;



 const struct xattr_handler * const *s_xattr;

 const struct fscrypt_operations *s_cop;
 struct fscrypt_keyring *s_master_keys;





 struct unicode_map *s_encoding;
 __u16 s_encoding_flags;

 struct hlist_bl_head s_roots;
 struct list_head s_mounts;
 struct block_device *s_bdev;
 struct file *s_bdev_file;
 struct backing_dev_info *s_bdi;
 struct mtd_info *s_mtd;
 struct hlist_node s_instances;
 unsigned int s_quota_types;
 struct quota_info s_dquot;

 struct sb_writers s_writers;






 void *s_fs_info;


 u32 s_time_gran;

 time64_t s_time_min;
 time64_t s_time_max;

 __u32 s_fsnotify_mask;
 struct fsnotify_sb_info *s_fsnotify_info;
# 1284 "../include/linux/fs.h"
 char s_id[32];
 uuid_t s_uuid;
 u8 s_uuid_len;


 char s_sysfs_name[36 + 1];

 unsigned int s_max_links;





 struct mutex s_vfs_rename_mutex;





 const char *s_subtype;

 const struct dentry_operations *s_d_op;

 struct shrinker *s_shrink;


 atomic_long_t s_remove_count;


 int s_readonly_remount;


 errseq_t s_wb_err;


 struct workqueue_struct *s_dio_done_wq;
 struct hlist_head s_pins;






 struct user_namespace *s_user_ns;






 struct list_lru s_dentry_lru;
 struct list_lru s_inode_lru;
 struct callback_head rcu;
 struct work_struct destroy_work;

 struct mutex s_sync_lock;




 int s_stack_depth;


 spinlock_t s_inode_list_lock ;
 struct list_head s_inodes;

 spinlock_t s_inode_wblist_lock;
 struct list_head s_inodes_wb;
} ;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_namespace *i_user_ns(const struct inode *inode)
{
 return inode->i_sb->s_user_ns;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) uid_t i_uid_read(const struct inode *inode)
{
 return from_kuid(i_user_ns(inode), inode->i_uid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gid_t i_gid_read(const struct inode *inode)
{
 return from_kgid(i_user_ns(inode), inode->i_gid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_uid_write(struct inode *inode, uid_t uid)
{
 inode->i_uid = make_kuid(i_user_ns(inode), uid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_gid_write(struct inode *inode, gid_t gid)
{
 inode->i_gid = make_kgid(i_user_ns(inode), gid);
}
# 1392 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vfsuid_t i_uid_into_vfsuid(struct mnt_idmap *idmap,
      const struct inode *inode)
{
 return make_vfsuid(idmap, i_user_ns(inode), inode->i_uid);
}
# 1409 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool i_uid_needs_update(struct mnt_idmap *idmap,
          const struct iattr *attr,
          const struct inode *inode)
{
 return ((attr->ia_valid & (1 << 1)) &&
  !vfsuid_eq(attr->ia_vfsuid,
      i_uid_into_vfsuid(idmap, inode)));
}
# 1427 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_uid_update(struct mnt_idmap *idmap,
    const struct iattr *attr,
    struct inode *inode)
{
 if (attr->ia_valid & (1 << 1))
  inode->i_uid = from_vfsuid(idmap, i_user_ns(inode),
        attr->ia_vfsuid);
}
# 1444 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vfsgid_t i_gid_into_vfsgid(struct mnt_idmap *idmap,
      const struct inode *inode)
{
 return make_vfsgid(idmap, i_user_ns(inode), inode->i_gid);
}
# 1461 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool i_gid_needs_update(struct mnt_idmap *idmap,
          const struct iattr *attr,
          const struct inode *inode)
{
 return ((attr->ia_valid & (1 << 2)) &&
  !vfsgid_eq(attr->ia_vfsgid,
      i_gid_into_vfsgid(idmap, inode)));
}
# 1479 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void i_gid_update(struct mnt_idmap *idmap,
    const struct iattr *attr,
    struct inode *inode)
{
 if (attr->ia_valid & (1 << 2))
  inode->i_gid = from_vfsgid(idmap, i_user_ns(inode),
        attr->ia_vfsgid);
}
# 1496 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_fsuid_set(struct inode *inode,
       struct mnt_idmap *idmap)
{
 inode->i_uid = mapped_fsuid(idmap, i_user_ns(inode));
}
# 1510 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_fsgid_set(struct inode *inode,
       struct mnt_idmap *idmap)
{
 inode->i_gid = mapped_fsgid(idmap, i_user_ns(inode));
}
# 1527 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fsuidgid_has_mapping(struct super_block *sb,
     struct mnt_idmap *idmap)
{
 struct user_namespace *fs_userns = sb->s_user_ns;
 kuid_t kuid;
 kgid_t kgid;

 kuid = mapped_fsuid(idmap, fs_userns);
 if (!uid_valid(kuid))
  return false;
 kgid = mapped_fsgid(idmap, fs_userns);
 if (!gid_valid(kgid))
  return false;
 return kuid_has_mapping(fs_userns, kuid) &&
        kgid_has_mapping(fs_userns, kgid);
}

struct timespec64 current_time(struct inode *inode);
struct timespec64 inode_set_ctime_current(struct inode *inode);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) time64_t inode_get_atime_sec(const struct inode *inode)
{
 return inode->i_atime_sec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long inode_get_atime_nsec(const struct inode *inode)
{
 return inode->i_atime_nsec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_get_atime(const struct inode *inode)
{
 struct timespec64 ts = { .tv_sec = inode_get_atime_sec(inode),
     .tv_nsec = inode_get_atime_nsec(inode) };

 return ts;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_atime_to_ts(struct inode *inode,
            struct timespec64 ts)
{
 inode->i_atime_sec = ts.tv_sec;
 inode->i_atime_nsec = ts.tv_nsec;
 return ts;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_atime(struct inode *inode,
      time64_t sec, long nsec)
{
 struct timespec64 ts = { .tv_sec = sec,
     .tv_nsec = nsec };

 return inode_set_atime_to_ts(inode, ts);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) time64_t inode_get_mtime_sec(const struct inode *inode)
{
 return inode->i_mtime_sec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long inode_get_mtime_nsec(const struct inode *inode)
{
 return inode->i_mtime_nsec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_get_mtime(const struct inode *inode)
{
 struct timespec64 ts = { .tv_sec = inode_get_mtime_sec(inode),
     .tv_nsec = inode_get_mtime_nsec(inode) };
 return ts;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_mtime_to_ts(struct inode *inode,
            struct timespec64 ts)
{
 inode->i_mtime_sec = ts.tv_sec;
 inode->i_mtime_nsec = ts.tv_nsec;
 return ts;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_mtime(struct inode *inode,
      time64_t sec, long nsec)
{
 struct timespec64 ts = { .tv_sec = sec,
     .tv_nsec = nsec };
 return inode_set_mtime_to_ts(inode, ts);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) time64_t inode_get_ctime_sec(const struct inode *inode)
{
 return inode->i_ctime_sec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long inode_get_ctime_nsec(const struct inode *inode)
{
 return inode->i_ctime_nsec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_get_ctime(const struct inode *inode)
{
 struct timespec64 ts = { .tv_sec = inode_get_ctime_sec(inode),
     .tv_nsec = inode_get_ctime_nsec(inode) };

 return ts;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_ctime_to_ts(struct inode *inode,
            struct timespec64 ts)
{
 inode->i_ctime_sec = ts.tv_sec;
 inode->i_ctime_nsec = ts.tv_nsec;
 return ts;
}
# 1649 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct timespec64 inode_set_ctime(struct inode *inode,
      time64_t sec, long nsec)
{
 struct timespec64 ts = { .tv_sec = sec,
     .tv_nsec = nsec };

 return inode_set_ctime_to_ts(inode, ts);
}

struct timespec64 simple_inode_init_ts(struct inode *inode);
# 1668 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sb_end_write(struct super_block *sb, int level)
{
 percpu_up_read(sb->s_writers.rw_sem + level-1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sb_start_write(struct super_block *sb, int level)
{
 percpu_down_read(sb->s_writers.rw_sem + level - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __sb_start_write_trylock(struct super_block *sb, int level)
{
 return percpu_down_read_trylock(sb->s_writers.rw_sem + level - 1);
}
# 1697 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __sb_write_started(const struct super_block *sb, int level)
{
 return lock_is_held_type(&(sb->s_writers.rw_sem + level - 1)->dep_map, (1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sb_write_started(const struct super_block *sb)
{
 return __sb_write_started(sb, SB_FREEZE_WRITE);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sb_write_not_started(const struct super_block *sb)
{
 return __sb_write_started(sb, SB_FREEZE_WRITE) <= 0;
}
# 1732 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool file_write_started(const struct file *file)
{
 if (!(((file_inode(file)->i_mode) & 00170000) == 0100000))
  return true;
 return sb_write_started(file_inode(file)->i_sb);
}
# 1747 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool file_write_not_started(const struct file *file)
{
 if (!(((file_inode(file)->i_mode) & 00170000) == 0100000))
  return true;
 return sb_write_not_started(file_inode(file)->i_sb);
}
# 1761 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_end_write(struct super_block *sb)
{
 __sb_end_write(sb, SB_FREEZE_WRITE);
}
# 1773 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_end_pagefault(struct super_block *sb)
{
 __sb_end_write(sb, SB_FREEZE_PAGEFAULT);
}
# 1785 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_end_intwrite(struct super_block *sb)
{
 __sb_end_write(sb, SB_FREEZE_FS);
}
# 1809 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_start_write(struct super_block *sb)
{
 __sb_start_write(sb, SB_FREEZE_WRITE);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sb_start_write_trylock(struct super_block *sb)
{
 return __sb_start_write_trylock(sb, SB_FREEZE_WRITE);
}
# 1838 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_start_pagefault(struct super_block *sb)
{
 __sb_start_write(sb, SB_FREEZE_PAGEFAULT);
}
# 1856 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sb_start_intwrite(struct super_block *sb)
{
 __sb_start_write(sb, SB_FREEZE_FS);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sb_start_intwrite_trylock(struct super_block *sb)
{
 return __sb_start_write_trylock(sb, SB_FREEZE_FS);
}

bool inode_owner_or_capable(struct mnt_idmap *idmap,
       const struct inode *inode);




int vfs_create(struct mnt_idmap *, struct inode *,
        struct dentry *, umode_t, bool);
int vfs_mkdir(struct mnt_idmap *, struct inode *,
       struct dentry *, umode_t);
int vfs_mknod(struct mnt_idmap *, struct inode *, struct dentry *,
              umode_t, dev_t);
int vfs_symlink(struct mnt_idmap *, struct inode *,
  struct dentry *, const char *);
int vfs_link(struct dentry *, struct mnt_idmap *, struct inode *,
      struct dentry *, struct inode **);
int vfs_rmdir(struct mnt_idmap *, struct inode *, struct dentry *);
int vfs_unlink(struct mnt_idmap *, struct inode *, struct dentry *,
        struct inode **);
# 1897 "../include/linux/fs.h"
struct renamedata {
 struct mnt_idmap *old_mnt_idmap;
 struct inode *old_dir;
 struct dentry *old_dentry;
 struct mnt_idmap *new_mnt_idmap;
 struct inode *new_dir;
 struct dentry *new_dentry;
 struct inode **delegated_inode;
 unsigned int flags;
} ;

int vfs_rename(struct renamedata *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vfs_whiteout(struct mnt_idmap *idmap,
          struct inode *dir, struct dentry *dentry)
{
 return vfs_mknod(idmap, dir, dentry, 0020000 | 0,
    0);
}

struct file *kernel_tmpfile_open(struct mnt_idmap *idmap,
     const struct path *parentpath,
     umode_t mode, int open_flag,
     const struct cred *cred);
struct file *kernel_file_open(const struct path *path, int flags,
         const struct cred *cred);

int vfs_mkobj(struct dentry *, umode_t,
  int (*f)(struct dentry *, umode_t, void *),
  void *);

int vfs_fchown(struct file *file, uid_t user, gid_t group);
int vfs_fchmod(struct file *file, umode_t mode);
int vfs_utimes(const struct path *path, struct timespec64 *times);

extern long vfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
# 1944 "../include/linux/fs.h"
void inode_init_owner(struct mnt_idmap *idmap, struct inode *inode,
        const struct inode *dir, umode_t mode);
extern bool may_open_dev(const struct path *path);
umode_t mode_strip_sgid(struct mnt_idmap *idmap,
   const struct inode *dir, umode_t mode);
bool in_group_or_capable(struct mnt_idmap *idmap,
    const struct inode *inode, vfsgid_t vfsgid);
# 1959 "../include/linux/fs.h"
struct dir_context;
typedef bool (*filldir_t)(struct dir_context *, const char *, int, loff_t, u64,
    unsigned);

struct dir_context {
 filldir_t actor;
 loff_t pos;
};
# 2015 "../include/linux/fs.h"
struct iov_iter;
struct io_uring_cmd;
struct offset_ctx;

typedef unsigned int fop_flags_t;

struct file_operations {
 struct module *owner;
 fop_flags_t fop_flags;
 loff_t (*llseek) (struct file *, loff_t, int);
 ssize_t (*read) (struct file *, char *, size_t, loff_t *);
 ssize_t (*write) (struct file *, const char *, size_t, loff_t *);
 ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
 ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
 int (*iopoll)(struct kiocb *kiocb, struct io_comp_batch *,
   unsigned int flags);
 int (*iterate_shared) (struct file *, struct dir_context *);
 __poll_t (*poll) (struct file *, struct poll_table_struct *);
 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
 int (*mmap) (struct file *, struct vm_area_struct *);
 int (*open) (struct inode *, struct file *);
 int (*flush) (struct file *, fl_owner_t id);
 int (*release) (struct inode *, struct file *);
 int (*fsync) (struct file *, loff_t, loff_t, int datasync);
 int (*fasync) (int, struct file *, int);
 int (*lock) (struct file *, int, struct file_lock *);
 unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
 int (*check_flags)(int);
 int (*flock) (struct file *, int, struct file_lock *);
 ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int);
 ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int);
 void (*splice_eof)(struct file *file);
 int (*setlease)(struct file *, int, struct file_lease **, void **);
 long (*fallocate)(struct file *file, int mode, loff_t offset,
     loff_t len);
 void (*show_fdinfo)(struct seq_file *m, struct file *f);



 ssize_t (*copy_file_range)(struct file *, loff_t, struct file *,
   loff_t, size_t, unsigned int);
 loff_t (*remap_file_range)(struct file *file_in, loff_t pos_in,
       struct file *file_out, loff_t pos_out,
       loff_t len, unsigned int remap_flags);
 int (*fadvise)(struct file *, loff_t, loff_t, int);
 int (*uring_cmd)(struct io_uring_cmd *ioucmd, unsigned int issue_flags);
 int (*uring_cmd_iopoll)(struct io_uring_cmd *, struct io_comp_batch *,
    unsigned int poll_flags);
};
# 2078 "../include/linux/fs.h"
int wrap_directory_iterator(struct file *, struct dir_context *,
       int (*) (struct file *, struct dir_context *));




struct inode_operations {
 struct dentry * (*lookup) (struct inode *,struct dentry *, unsigned int);
 const char * (*get_link) (struct dentry *, struct inode *, struct delayed_call *);
 int (*permission) (struct mnt_idmap *, struct inode *, int);
 struct posix_acl * (*get_inode_acl)(struct inode *, int, bool);

 int (*readlink) (struct dentry *, char *,int);

 int (*create) (struct mnt_idmap *, struct inode *,struct dentry *,
         umode_t, bool);
 int (*link) (struct dentry *,struct inode *,struct dentry *);
 int (*unlink) (struct inode *,struct dentry *);
 int (*symlink) (struct mnt_idmap *, struct inode *,struct dentry *,
   const char *);
 int (*mkdir) (struct mnt_idmap *, struct inode *,struct dentry *,
        umode_t);
 int (*rmdir) (struct inode *,struct dentry *);
 int (*mknod) (struct mnt_idmap *, struct inode *,struct dentry *,
        umode_t,dev_t);
 int (*rename) (struct mnt_idmap *, struct inode *, struct dentry *,
   struct inode *, struct dentry *, unsigned int);
 int (*setattr) (struct mnt_idmap *, struct dentry *, struct iattr *);
 int (*getattr) (struct mnt_idmap *, const struct path *,
   struct kstat *, u32, unsigned int);
 ssize_t (*listxattr) (struct dentry *, char *, size_t);
 int (*fiemap)(struct inode *, struct fiemap_extent_info *, u64 start,
        u64 len);
 int (*update_time)(struct inode *, int);
 int (*atomic_open)(struct inode *, struct dentry *,
      struct file *, unsigned open_flag,
      umode_t create_mode);
 int (*tmpfile) (struct mnt_idmap *, struct inode *,
   struct file *, umode_t);
 struct posix_acl *(*get_acl)(struct mnt_idmap *, struct dentry *,
         int);
 int (*set_acl)(struct mnt_idmap *, struct dentry *,
         struct posix_acl *, int);
 int (*fileattr_set)(struct mnt_idmap *idmap,
       struct dentry *dentry, struct fileattr *fa);
 int (*fileattr_get)(struct dentry *dentry, struct fileattr *fa);
 struct offset_ctx *(*get_offset_ctx)(struct inode *inode);
} __attribute__((__aligned__(1 << 5))); /* cacheline-aligned */

static inline int call_mmap(struct file *file, struct vm_area_struct *vma)
{
 return file->f_op->mmap(file, vma);
}

extern ssize_t vfs_read(struct file *, char *, size_t, loff_t *);
extern ssize_t vfs_write(struct file *, const char *, size_t, loff_t *);
extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
       loff_t, size_t, unsigned int);
int remap_verify_area(struct file *file, loff_t pos, loff_t len, bool write);
int __generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
        struct file *file_out, loff_t pos_out,
        loff_t *len, unsigned int remap_flags,
        const struct iomap_ops *dax_read_ops);
int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
      struct file *file_out, loff_t pos_out,
      loff_t *count, unsigned int remap_flags);
extern loff_t vfs_clone_file_range(struct file *file_in, loff_t pos_in,
       struct file *file_out, loff_t pos_out,
       loff_t len, unsigned int remap_flags);
extern int vfs_dedupe_file_range(struct file *file,
     struct file_dedupe_range *same);
extern loff_t vfs_dedupe_file_range_one(struct file *src_file, loff_t src_pos,
     struct file *dst_file, loff_t dst_pos,
     loff_t len, unsigned int remap_flags);
# 2167 "../include/linux/fs.h"
enum freeze_holder {
 FREEZE_HOLDER_KERNEL = (1U << 0),
 FREEZE_HOLDER_USERSPACE = (1U << 1),
 FREEZE_MAY_NEST = (1U << 2),
};

struct super_operations {
    struct inode *(*alloc_inode)(struct super_block *sb);
 void (*destroy_inode)(struct inode *);
 void (*free_inode)(struct inode *);

    void (*dirty_inode) (struct inode *, int flags);
 int (*write_inode) (struct inode *, struct writeback_control *wbc);
 int (*drop_inode) (struct inode *);
 void (*evict_inode) (struct inode *);
 void (*put_super) (struct super_block *);
 int (*sync_fs)(struct super_block *sb, int wait);
 int (*freeze_super) (struct super_block *, enum freeze_holder who);
 int (*freeze_fs) (struct super_block *);
 int (*thaw_super) (struct super_block *, enum freeze_holder who);
 int (*unfreeze_fs) (struct super_block *);
 int (*statfs) (struct dentry *, struct kstatfs *);
 int (*remount_fs) (struct super_block *, int *, char *);
 void (*umount_begin) (struct super_block *);

 int (*show_options)(struct seq_file *, struct dentry *);
 int (*show_devname)(struct seq_file *, struct dentry *);
 int (*show_path)(struct seq_file *, struct dentry *);
 int (*show_stats)(struct seq_file *, struct dentry *);

 ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
 ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
 struct dquot **(*get_dquots)(struct inode *);

 long (*nr_cached_objects)(struct super_block *,
      struct shrink_control *);
 long (*free_cached_objects)(struct super_block *,
        struct shrink_control *);
 void (*shutdown)(struct super_block *sb);
};
# 2249 "../include/linux/fs.h"
static inline bool sb_rdonly(const struct super_block *sb) { return sb->s_flags & (1UL << 0); /* SB_RDONLY */ }
# 2290 "../include/linux/fs.h"
static inline bool HAS_UNMAPPED_ID(struct mnt_idmap *idmap,
       struct inode *inode)
{
 return !vfsuid_valid(i_uid_into_vfsuid(idmap, inode)) ||
        !vfsgid_valid(i_gid_into_vfsgid(idmap, inode));
}

static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
{
 *kiocb = (struct kiocb) {
  .ki_filp = filp,
  .ki_flags = filp->f_iocb_flags,
  .ki_ioprio = get_current_ioprio(),
 };
}

static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
          struct file *filp)
{
 *kiocb = (struct kiocb) {
  .ki_filp = filp,
  .ki_flags = kiocb_src->ki_flags,
  .ki_ioprio = kiocb_src->ki_ioprio,
  .ki_pos = kiocb_src->ki_pos,
 };
}
# 2423 "../include/linux/fs.h"
extern void __mark_inode_dirty(struct inode *, int);
static inline void mark_inode_dirty(struct inode *inode)
{
 __mark_inode_dirty(inode, (1 << 0) | (1 << 1) | (1 << 2)); /* I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES */
}

static inline void mark_inode_dirty_sync(struct inode *inode)
{
 __mark_inode_dirty(inode, 1 << 0); /* I_DIRTY_SYNC */
}
# 2443 "../include/linux/fs.h"
static inline bool inode_is_dirtytime_only(struct inode *inode)
{
 /* only I_DIRTY_TIME set among I_DIRTY_TIME, I_NEW, I_FREEING, I_WILL_FREE */
 return (inode->i_state & ((1 << 11) | (1 << 3) |
      (1 << 5) | (1 << 4))) == (1 << 11);
}

extern void inc_nlink(struct inode *inode);
extern void drop_nlink(struct inode *inode);
extern void clear_nlink(struct inode *inode);
extern void set_nlink(struct inode *inode, unsigned int nlink);

static inline void inode_inc_link_count(struct inode *inode)
{
 inc_nlink(inode);
 mark_inode_dirty(inode);
}

static inline void inode_dec_link_count(struct inode *inode)
{
 drop_nlink(inode);
 mark_inode_dirty(inode);
}

enum file_time_flags {
 S_ATIME = 1,
 S_MTIME = 2,
 S_CTIME = 4,
 S_VERSION = 8,
};

extern bool atime_needs_update(const struct path *, struct inode *);
extern void touch_atime(const struct path *);
int inode_update_time(struct inode *inode, int flags);

static inline void file_accessed(struct file *file)
{
 if (!(file->f_flags & 01000000)) /* O_NOATIME */
  touch_atime(&file->f_path);
}

extern int file_modified(struct file *file);
int kiocb_modified(struct kiocb *iocb);

int sync_inode_metadata(struct inode *inode, int wait);

struct file_system_type {
 const char *name;
 int fs_flags;







 int (*init_fs_context)(struct fs_context *);
 const struct fs_parameter_spec *parameters;
 struct dentry *(*mount) (struct file_system_type *, int,
         const char *, void *);
 void (*kill_sb) (struct super_block *);
 struct module *owner;
 struct file_system_type * next;
 struct hlist_head fs_supers;

 struct lock_class_key s_lock_key;
 struct lock_class_key s_umount_key;
 struct lock_class_key s_vfs_rename_key;
 struct lock_class_key s_writers_key[(SB_FREEZE_COMPLETE - 1)];

 struct lock_class_key i_lock_key;
 struct lock_class_key i_mutex_key;
 struct lock_class_key invalidate_lock_key;
 struct lock_class_key i_mutex_dir_key;
};



extern struct dentry *mount_bdev(struct file_system_type *fs_type,
 int flags, const char *dev_name, void *data,
 int (*fill_super)(struct super_block *, void *, int));
extern struct dentry *mount_single(struct file_system_type *fs_type,
 int flags, void *data,
 int (*fill_super)(struct super_block *, void *, int));
extern struct dentry *mount_nodev(struct file_system_type *fs_type,
 int flags, void *data,
 int (*fill_super)(struct super_block *, void *, int));
extern struct dentry *mount_subtree(struct vfsmount *mnt, const char *path);
void retire_super(struct super_block *sb);
void generic_shutdown_super(struct super_block *sb);
void kill_block_super(struct super_block *sb);
void kill_anon_super(struct super_block *sb);
void kill_litter_super(struct super_block *sb);
void deactivate_super(struct super_block *sb);
void deactivate_locked_super(struct super_block *sb);
int set_anon_super(struct super_block *s, void *data);
int set_anon_super_fc(struct super_block *s, struct fs_context *fc);
int get_anon_bdev(dev_t *);
void free_anon_bdev(dev_t);
struct super_block *sget_fc(struct fs_context *fc,
       int (*test)(struct super_block *, struct fs_context *),
       int (*set)(struct super_block *, struct fs_context *));
struct super_block *sget(struct file_system_type *type,
   int (*test)(struct super_block *,void *),
   int (*set)(struct super_block *,void *),
   int flags, void *data);
struct super_block *sget_dev(struct fs_context *fc, dev_t dev);
# 2567 "../include/linux/fs.h"
extern int register_filesystem(struct file_system_type *);
extern int unregister_filesystem(struct file_system_type *);
extern int vfs_statfs(const struct path *, struct kstatfs *);
extern int user_statfs(const char *, struct kstatfs *);
extern int fd_statfs(int, struct kstatfs *);
int freeze_super(struct super_block *super, enum freeze_holder who);
int thaw_super(struct super_block *super, enum freeze_holder who);
extern __printf(2, 3)
int super_setup_bdi_name(struct super_block *sb, char *fmt, ...);
extern int super_setup_bdi(struct super_block *sb);

static inline void super_set_uuid(struct super_block *sb, const u8 *uuid, unsigned len)
{
 if (WARN_ON(len > sizeof(sb->s_uuid)))
  len = sizeof(sb->s_uuid);
 sb->s_uuid_len = len;
 memcpy(&sb->s_uuid, uuid, len);
}


static inline void super_set_sysfs_name_bdev(struct super_block *sb)
{
 snprintf(sb->s_sysfs_name, sizeof(sb->s_sysfs_name), "%pg", sb->s_bdev);
}


static inline void super_set_sysfs_name_uuid(struct super_block *sb)
{
 WARN_ON(sb->s_uuid_len != sizeof(sb->s_uuid));
 snprintf(sb->s_sysfs_name, sizeof(sb->s_sysfs_name), "%pU", sb->s_uuid.b);
}


static inline void super_set_sysfs_name_id(struct super_block *sb)
{
 sized_strscpy(sb->s_sysfs_name, sb->s_id, sizeof(sb->s_sysfs_name));
}


__printf(2, 3)
static inline void super_set_sysfs_name_generic(struct super_block *sb, const char *fmt, ...)
{
 va_list args;

 va_start(args, fmt);
 vsnprintf(sb->s_sysfs_name, sizeof(sb->s_sysfs_name), fmt, args);
 va_end(args);
}

extern int current_umask(void);

extern void ihold(struct inode * inode);
extern void iput(struct inode *);
int inode_update_timestamps(struct inode *inode, int flags);
int generic_update_time(struct inode *, int);


extern struct kobject *fs_kobj;




struct audit_names;
struct filename {
 const char *name;
 const char *uptr;
 atomic_t refcnt;
 struct audit_names *aname;
 const char iname[];
};
static_assert(offsetof(struct filename, iname) % sizeof(long) == 0);

static inline struct mnt_idmap *file_mnt_idmap(const struct file *file)
{
 return mnt_idmap(file->f_path.mnt);
}
# 2652 "../include/linux/fs.h"
static inline bool is_idmapped_mnt(const struct vfsmount *mnt)
{
 return mnt_idmap(mnt) != &nop_mnt_idmap;
}

extern long vfs_truncate(const struct path *, loff_t);
int do_truncate(struct mnt_idmap *, struct dentry *, loff_t start,
  unsigned int time_attrs, struct file *filp);
extern int vfs_fallocate(struct file *file, int mode, loff_t offset,
   loff_t len);
extern long do_sys_open(int dfd, const char *filename, int flags,
   umode_t mode);
extern struct file *file_open_name(struct filename *, int, umode_t);
extern struct file *filp_open(const char *, int, umode_t);
extern struct file *file_open_root(const struct path *,
       const char *, int, umode_t);
static inline struct file *file_open_root_mnt(struct vfsmount *mnt,
       const char *name, int flags, umode_t mode)
{
 return file_open_root(&(struct path){.mnt = mnt, .dentry = mnt->mnt_root},
         name, flags, mode);
}
struct file *dentry_open(const struct path *path, int flags,
    const struct cred *creds);
struct file *dentry_create(const struct path *path, int flags, umode_t mode,
      const struct cred *cred);
struct path *backing_file_user_path(struct file *f);
# 2690 "../include/linux/fs.h"
static inline const struct path *file_user_path(struct file *f)
{
 if (unlikely(f->f_mode & (fmode_t)(1 << 25))) /* FMODE_BACKING */
  return backing_file_user_path(f);
 return &f->f_path;
}

static inline const struct inode *file_user_inode(struct file *f)
{
 if (unlikely(f->f_mode & (fmode_t)(1 << 25))) /* FMODE_BACKING */
  return d_inode(backing_file_user_path(f)->dentry);
 return file_inode(f);
}

static inline struct file *file_clone_open(struct file *file)
{
 return dentry_open(&file->f_path, file->f_flags, file->f_cred);
}
extern int filp_close(struct file *, fl_owner_t id);

extern struct filename *getname_flags(const char *, int);
extern struct filename *getname_uflags(const char *, int);
extern struct filename *getname(const char *);
extern struct filename *getname_kernel(const char *);
extern void putname(struct filename *name);

extern int finish_open(struct file *file, struct dentry *dentry,
   int (*open)(struct inode *, struct file *));
extern int finish_no_open(struct file *file, struct dentry *dentry);


static inline int finish_open_simple(struct file *file, int error)
{
 if (error)
  return error;

 return finish_open(file, file->f_path.dentry, NULL);
}


extern void __init vfs_caches_init_early(void);
extern void __init vfs_caches_init(void);

extern struct kmem_cache *names_cachep;




extern struct super_block *blockdev_superblock;
static inline bool sb_is_blkdev_sb(struct super_block *sb)
{
 /* IS_ENABLED(CONFIG_BLOCK) folded to 0 in this !CONFIG_BLOCK config */
 return 0 && sb == blockdev_superblock;
}

void emergency_thaw_all(void);
extern int sync_filesystem(struct super_block *);
extern const struct file_operations def_blk_fops;
extern const struct file_operations def_chr_fops;
# 2757 "../include/linux/fs.h"
extern int alloc_chrdev_region(dev_t *, unsigned, unsigned, const char *);
extern int register_chrdev_region(dev_t, unsigned, const char *);
extern int __register_chrdev(unsigned int major, unsigned int baseminor,
        unsigned int count, const char *name,
        const struct file_operations *fops);
extern void __unregister_chrdev(unsigned int major, unsigned int baseminor,
    unsigned int count, const char *name);
extern void unregister_chrdev_region(dev_t, unsigned);
extern void chrdev_show(struct seq_file *,off_t);

static inline int register_chrdev(unsigned int major, const char *name,
      const struct file_operations *fops)
{
 return __register_chrdev(major, 0, 256, name, fops);
}

static inline void unregister_chrdev(unsigned int major, const char *name)
{
 __unregister_chrdev(major, 0, 256, name);
}

extern void init_special_inode(struct inode *, umode_t, dev_t);


extern void make_bad_inode(struct inode *);
extern bool is_bad_inode(struct inode *);

extern int __must_check file_fdatawait_range(struct file *file, loff_t lstart,
      loff_t lend);
extern int __must_check file_check_and_advance_wb_err(struct file *file);
extern int __must_check file_write_and_wait_range(struct file *file,
      loff_t start, loff_t end);

static inline int file_write_and_wait(struct file *file)
{
 return file_write_and_wait_range(file, 0, (long long)(~0ULL >> 1)); /* LLONG_MAX */
}

extern int vfs_fsync_range(struct file *file, loff_t start, loff_t end,
      int datasync);
extern int vfs_fsync(struct file *file, int datasync);

extern int sync_file_range(struct file *file, loff_t offset, loff_t nbytes,
    unsigned int flags);

static inline bool iocb_is_dsync(const struct kiocb *iocb)
{
 return (iocb->ki_flags & (int)(__kernel_rwf_t)0x00000002) || /* IOCB_DSYNC */
  (((iocb->ki_filp->f_mapping->host)->i_sb->s_flags & (1UL << 4)) || /* SB_SYNCHRONOUS */
   ((iocb->ki_filp->f_mapping->host)->i_flags & (1 << 0))); /* S_SYNC */
}






static inline ssize_t generic_write_sync(struct kiocb *iocb, ssize_t count)
{
 if (iocb_is_dsync(iocb)) {
  int ret = vfs_fsync_range(iocb->ki_filp,
    iocb->ki_pos - count, iocb->ki_pos - 1,
    (iocb->ki_flags & (int)(__kernel_rwf_t)0x00000004) ? 0 : 1); /* IOCB_SYNC */
  if (ret)
   return ret;
 }

 return count;
}

extern void emergency_sync(void);
extern void emergency_remount(void);




static inline int bmap(struct inode *inode, sector_t *block)
{
 return -22; /* -EINVAL */
}


int notify_change(struct mnt_idmap *, struct dentry *,
    struct iattr *, struct inode **);
int inode_permission(struct mnt_idmap *, struct inode *, int);
int generic_permission(struct mnt_idmap *, struct inode *, int);
static inline int file_permission(struct file *file, int mask)
{
 return inode_permission(file_mnt_idmap(file),
    file_inode(file), mask);
}
static inline int path_permission(const struct path *path, int mask)
{
 return inode_permission(mnt_idmap(path->mnt),
    d_inode(path->dentry), mask);
}
int __check_sticky(struct mnt_idmap *idmap, struct inode *dir,
     struct inode *inode);

static inline bool execute_ok(struct inode *inode)
{
 /* any of S_IXUSR|S_IXGRP|S_IXOTH set, or the inode is a directory */
 return (inode->i_mode & (00100|00010|00001)) || (((inode->i_mode) & 00170000) == 0040000); /* S_IFMT / S_IFDIR */
}

static inline bool inode_wrong_type(const struct inode *inode, umode_t mode)
{
 return (inode->i_mode ^ mode) & 00170000; /* S_IFMT */
}
# 2872 "../include/linux/fs.h"
static inline void file_start_write(struct file *file)
{
 if (!(((file_inode(file)->i_mode) & 00170000) == 0100000)) /* !S_ISREG() */
  return;
 sb_start_write(file_inode(file)->i_sb);
}

static inline bool file_start_write_trylock(struct file *file)
{
 if (!(((file_inode(file)->i_mode) & 00170000) == 0100000)) /* !S_ISREG() */
  return true;
 return sb_start_write_trylock(file_inode(file)->i_sb);
}







static inline void file_end_write(struct file *file)
{
 if (!(((file_inode(file)->i_mode) & 00170000) == 0100000)) /* !S_ISREG() */
  return;
 sb_end_write(file_inode(file)->i_sb);
}
# 2906 "../include/linux/fs.h"
static inline void kiocb_start_write(struct kiocb *iocb)
{
 struct inode *inode = file_inode(iocb->ki_filp);

 sb_start_write(inode->i_sb);
 /* release freeze protection in lockdep's eyes; kiocb_end_write() reacquires it */
 percpu_rwsem_release(&inode->i_sb->s_writers.rw_sem[SB_FREEZE_WRITE - 1], 1, _THIS_IP_);
}







static inline void kiocb_end_write(struct kiocb *iocb)
{
 struct inode *inode = file_inode(iocb->ki_filp);

 /* reacquire the freeze protection released in kiocb_start_write() */
 percpu_rwsem_acquire(&inode->i_sb->s_writers.rw_sem[SB_FREEZE_WRITE - 1], 1, _THIS_IP_);
 sb_end_write(inode->i_sb);
}
# 2956 "../include/linux/fs.h"
static inline int get_write_access(struct inode *inode)
{
 return atomic_inc_unless_negative(&inode->i_writecount) ? 0 : -26; /* -ETXTBSY */
}
static inline int deny_write_access(struct file *file)
{
 struct inode *inode = file_inode(file);
 return atomic_dec_unless_positive(&inode->i_writecount) ? 0 : -26; /* -ETXTBSY */
}
static inline void put_write_access(struct inode * inode)
{
 atomic_dec(&inode->i_writecount);
}
static inline void allow_write_access(struct file *file)
{
 if (file)
  atomic_inc(&file_inode(file)->i_writecount);
}
static inline bool inode_is_open_for_write(const struct inode *inode)
{
 return atomic_read(&inode->i_writecount) > 0;
}


static inline void i_readcount_dec(struct inode *inode)
{
 BUG_ON(atomic_dec_return(&inode->i_readcount) < 0);
}
static inline void i_readcount_inc(struct inode *inode)
{
 atomic_inc(&inode->i_readcount);
}
# 2998 "../include/linux/fs.h"
extern int do_pipe_flags(int *, int);

extern ssize_t kernel_read(struct file *, void *, size_t, loff_t *);
ssize_t __kernel_read(struct file *file, void *buf, size_t count, loff_t *pos);
extern ssize_t kernel_write(struct file *, const void *, size_t, loff_t *);
extern ssize_t __kernel_write(struct file *, const void *, size_t, loff_t *);
extern struct file * open_exec(const char *);


extern bool is_subdir(struct dentry *, struct dentry *);
extern bool path_is_under(const struct path *, const struct path *);

extern char *file_path(struct file *, char *, int);






static inline bool is_dot_dotdot(const char *name, size_t len)
{
 return len && unlikely(name[0] == '.') &&
  (len == 1 || (len == 2 && name[1] == '.'));
}




extern loff_t default_llseek(struct file *file, loff_t offset, int whence);

extern loff_t vfs_llseek(struct file *file, loff_t offset, int whence);

extern int inode_init_always(struct super_block *, struct inode *);
extern void inode_init_once(struct inode *);
extern void address_space_init_once(struct address_space *mapping);
extern struct inode * igrab(struct inode *);
extern ino_t iunique(struct super_block *, ino_t);
extern int inode_needs_sync(struct inode *inode);
extern int generic_delete_inode(struct inode *inode);
static inline int generic_drop_inode(struct inode *inode)
{
 return !inode->i_nlink || inode_unhashed(inode);
}
extern void d_mark_dontcache(struct inode *inode);

extern struct inode *ilookup5_nowait(struct super_block *sb,
  unsigned long hashval, int (*test)(struct inode *, void *),
  void *data);
extern struct inode *ilookup5(struct super_block *sb, unsigned long hashval,
  int (*test)(struct inode *, void *), void *data);
extern struct inode *ilookup(struct super_block *sb, unsigned long ino);

extern struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
  int (*test)(struct inode *, void *),
  int (*set)(struct inode *, void *),
  void *data);
struct inode *iget5_locked(struct super_block *, unsigned long,
      int (*test)(struct inode *, void *),
      int (*set)(struct inode *, void *), void *);
struct inode *iget5_locked_rcu(struct super_block *, unsigned long,
          int (*test)(struct inode *, void *),
          int (*set)(struct inode *, void *), void *);
extern struct inode * iget_locked(struct super_block *, unsigned long);
extern struct inode *find_inode_nowait(struct super_block *,
           unsigned long,
           int (*match)(struct inode *,
          unsigned long, void *),
           void *data);
extern struct inode *find_inode_rcu(struct super_block *, unsigned long,
        int (*)(struct inode *, void *), void *);
extern struct inode *find_inode_by_ino_rcu(struct super_block *, unsigned long);
extern int insert_inode_locked4(struct inode *, unsigned long, int (*test)(struct inode *, void *), void *);
extern int insert_inode_locked(struct inode *);

extern void lockdep_annotate_inode_mutex_key(struct inode *inode);



extern void unlock_new_inode(struct inode *);
extern void discard_new_inode(struct inode *);
extern unsigned int get_next_ino(void);
extern void evict_inodes(struct super_block *sb);
void dump_mapping(const struct address_space *);
# 3092 "../include/linux/fs.h"
static inline bool is_zero_ino(ino_t ino)
{
 return (u32)ino == 0;
}

extern void __iget(struct inode * inode);
extern void iget_failed(struct inode *);
extern void clear_inode(struct inode *);
extern void __destroy_inode(struct inode *);
extern struct inode *new_inode_pseudo(struct super_block *sb);
extern struct inode *new_inode(struct super_block *sb);
extern void free_inode_nonrcu(struct inode *inode);
extern int setattr_should_drop_suidgid(struct mnt_idmap *, struct inode *);
extern int file_remove_privs_flags(struct file *file, unsigned int flags);
extern int file_remove_privs(struct file *);
int setattr_should_drop_sgid(struct mnt_idmap *idmap,
        const struct inode *inode);







extern void __insert_inode_hash(struct inode *, unsigned long hashval);
static inline void insert_inode_hash(struct inode *inode)
{
 __insert_inode_hash(inode, inode->i_ino);
}

extern void __remove_inode_hash(struct inode *);
static inline void remove_inode_hash(struct inode *inode)
{
 if (!inode_unhashed(inode) && !hlist_fake(&inode->i_hash))
  __remove_inode_hash(inode);
}

extern void inode_sb_list_add(struct inode *inode);
extern void inode_add_lru(struct inode *inode);

extern int sb_set_blocksize(struct super_block *, int);
extern int sb_min_blocksize(struct super_block *, int);

extern int generic_file_mmap(struct file *, struct vm_area_struct *);
extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
extern ssize_t generic_write_checks(struct kiocb *, struct iov_iter *);
int generic_write_checks_count(struct kiocb *iocb, loff_t *count);
extern int generic_write_check_limits(struct file *file, loff_t pos,
  loff_t *count);
extern int generic_file_rw_checks(struct file *file_in, struct file *file_out);
ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *to,
  ssize_t already_read);
extern ssize_t generic_file_read_iter(struct kiocb *, struct iov_iter *);
extern ssize_t __generic_file_write_iter(struct kiocb *, struct iov_iter *);
extern ssize_t generic_file_write_iter(struct kiocb *, struct iov_iter *);
extern ssize_t generic_file_direct_write(struct kiocb *, struct iov_iter *);
ssize_t generic_perform_write(struct kiocb *, struct iov_iter *);
ssize_t direct_write_fallback(struct kiocb *iocb, struct iov_iter *iter,
  ssize_t direct_written, ssize_t buffered_written);

ssize_t vfs_iter_read(struct file *file, struct iov_iter *iter, loff_t *ppos,
  rwf_t flags);
ssize_t vfs_iter_write(struct file *file, struct iov_iter *iter, loff_t *ppos,
  rwf_t flags);
ssize_t vfs_iocb_iter_read(struct file *file, struct kiocb *iocb,
      struct iov_iter *iter);
ssize_t vfs_iocb_iter_write(struct file *file, struct kiocb *iocb,
       struct iov_iter *iter);


ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
       struct pipe_inode_info *pipe,
       size_t len, unsigned int flags);
ssize_t copy_splice_read(struct file *in, loff_t *ppos,
    struct pipe_inode_info *pipe,
    size_t len, unsigned int flags);
extern ssize_t iter_file_splice_write(struct pipe_inode_info *,
  struct file *, loff_t *, size_t, unsigned int);


extern void
file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping);
extern loff_t noop_llseek(struct file *file, loff_t offset, int whence);

extern loff_t vfs_setpos(struct file *file, loff_t offset, loff_t maxsize);
extern loff_t generic_file_llseek(struct file *file, loff_t offset, int whence);
extern loff_t generic_file_llseek_size(struct file *file, loff_t offset,
  int whence, loff_t maxsize, loff_t eof);
extern loff_t fixed_size_llseek(struct file *file, loff_t offset,
  int whence, loff_t size);
extern loff_t no_seek_end_llseek_size(struct file *, loff_t, int, loff_t);
extern loff_t no_seek_end_llseek(struct file *, loff_t, int);
int rw_verify_area(int, struct file *, const loff_t *, size_t);
extern int generic_file_open(struct inode * inode, struct file * filp);
extern int nonseekable_open(struct inode * inode, struct file * filp);
extern int stream_open(struct inode * inode, struct file * filp);
# 3217 "../include/linux/fs.h"
void inode_dio_wait(struct inode *inode);
# 3226 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_dio_begin(struct inode *inode)
{
 atomic_inc(&inode->i_dio_count);
}
# 3238 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_dio_end(struct inode *inode)
{
 if (atomic_dec_and_test(&inode->i_dio_count))
  wake_up_bit(&inode->i_state, 9);
}

extern void inode_set_flags(struct inode *inode, unsigned int flags,
       unsigned int mask);

extern const struct file_operations generic_ro_fops;



extern int readlink_copy(char *, int, const char *);
extern int page_readlink(struct dentry *, char *, int);
extern const char *page_get_link(struct dentry *, struct inode *,
     struct delayed_call *);
extern void page_put_link(void *);
extern int page_symlink(struct inode *inode, const char *symname, int len);
extern const struct inode_operations page_symlink_inode_operations;
extern void kfree_link(void *);
void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
void generic_fill_statx_atomic_writes(struct kstat *stat,
          unsigned int unit_min,
          unsigned int unit_max);
extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
void __inode_add_bytes(struct inode *inode, loff_t bytes);
void inode_add_bytes(struct inode *inode, loff_t bytes);
void __inode_sub_bytes(struct inode *inode, loff_t bytes);
void inode_sub_bytes(struct inode *inode, loff_t bytes);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) loff_t __inode_get_bytes(struct inode *inode)
{
 return (((loff_t)inode->i_blocks) << 9) + inode->i_bytes;
}
loff_t inode_get_bytes(struct inode *inode);
void inode_set_bytes(struct inode *inode, loff_t bytes);
const char *simple_get_link(struct dentry *, struct inode *,
       struct delayed_call *);
extern const struct inode_operations simple_symlink_inode_operations;

extern int iterate_dir(struct file *, struct dir_context *);

int vfs_fstatat(int dfd, const char *filename, struct kstat *stat,
  int flags);
int vfs_fstat(int fd, struct kstat *stat);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vfs_stat(const char *filename, struct kstat *stat)
{
 return vfs_fstatat(-100, filename, stat, 0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vfs_lstat(const char *name, struct kstat *stat)
{
 return vfs_fstatat(-100, name, stat, 0x100);
}

extern const char *vfs_get_link(struct dentry *, struct delayed_call *);
extern int vfs_readlink(struct dentry *, char *, int);

extern struct file_system_type *get_filesystem(struct file_system_type *fs);
extern void put_filesystem(struct file_system_type *fs);
extern struct file_system_type *get_fs_type(const char *name);
extern void drop_super(struct super_block *sb);
extern void drop_super_exclusive(struct super_block *sb);
extern void iterate_supers(void (*)(struct super_block *, void *), void *);
extern void iterate_supers_type(struct file_system_type *,
           void (*)(struct super_block *, void *), void *);

extern int dcache_dir_open(struct inode *, struct file *);
extern int dcache_dir_close(struct inode *, struct file *);
extern loff_t dcache_dir_lseek(struct file *, loff_t, int);
extern int dcache_readdir(struct file *, struct dir_context *);
extern int simple_setattr(struct mnt_idmap *, struct dentry *,
     struct iattr *);
extern int simple_getattr(struct mnt_idmap *, const struct path *,
     struct kstat *, u32, unsigned int);
extern int simple_statfs(struct dentry *, struct kstatfs *);
extern int simple_open(struct inode *inode, struct file *file);
extern int simple_link(struct dentry *, struct inode *, struct dentry *);
extern int simple_unlink(struct inode *, struct dentry *);
extern int simple_rmdir(struct inode *, struct dentry *);
void simple_rename_timestamp(struct inode *old_dir, struct dentry *old_dentry,
        struct inode *new_dir, struct dentry *new_dentry);
extern int simple_rename_exchange(struct inode *old_dir, struct dentry *old_dentry,
      struct inode *new_dir, struct dentry *new_dentry);
extern int simple_rename(struct mnt_idmap *, struct inode *,
    struct dentry *, struct inode *, struct dentry *,
    unsigned int);
extern void simple_recursive_removal(struct dentry *,
                              void (*callback)(struct dentry *));
extern int noop_fsync(struct file *, loff_t, loff_t, int);
extern ssize_t noop_direct_IO(struct kiocb *iocb, struct iov_iter *iter);
extern int simple_empty(struct dentry *);
extern int simple_write_begin(struct file *file, struct address_space *mapping,
   loff_t pos, unsigned len,
   struct page **pagep, void **fsdata);
extern const struct address_space_operations ram_aops;
extern int always_delete_dentry(const struct dentry *);
extern struct inode *alloc_anon_inode(struct super_block *);
extern int simple_nosetlease(struct file *, int, struct file_lease **, void **);
extern const struct dentry_operations simple_dentry_operations;

extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned int flags);
extern ssize_t generic_read_dir(struct file *, char *, size_t, loff_t *);
extern const struct file_operations simple_dir_operations;
extern const struct inode_operations simple_dir_inode_operations;
extern void make_empty_dir_inode(struct inode *inode);
extern bool is_empty_dir_inode(struct inode *inode);
struct tree_descr { const char *name; const struct file_operations *ops; int mode; };
struct dentry *d_alloc_name(struct dentry *, const char *);
extern int simple_fill_super(struct super_block *, unsigned long,
        const struct tree_descr *);
extern int simple_pin_fs(struct file_system_type *, struct vfsmount **mount, int *count);
extern void simple_release_fs(struct vfsmount **mount, int *count);

extern ssize_t simple_read_from_buffer(void *to, size_t count,
   loff_t *ppos, const void *from, size_t available);
extern ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
  const void *from, size_t count);

struct offset_ctx {
 struct maple_tree mt;
 unsigned long next_offset;
};

void simple_offset_init(struct offset_ctx *octx);
int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry);
void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry);
int simple_offset_empty(struct dentry *dentry);
int simple_offset_rename(struct inode *old_dir, struct dentry *old_dentry,
    struct inode *new_dir, struct dentry *new_dentry);
int simple_offset_rename_exchange(struct inode *old_dir,
      struct dentry *old_dentry,
      struct inode *new_dir,
      struct dentry *new_dentry);
void simple_offset_destroy(struct offset_ctx *octx);

extern const struct file_operations simple_offset_dir_operations;

extern int __generic_file_fsync(struct file *, loff_t, loff_t, int);
extern int generic_file_fsync(struct file *, loff_t, loff_t, int);

extern int generic_check_addressable(unsigned, u64);

extern void generic_set_sb_d_ops(struct super_block *sb);
extern int generic_ci_match(const struct inode *parent,
       const struct qstr *name,
       const struct qstr *folded_name,
       const u8 *de_name, u32 de_name_len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sb_has_encoding(const struct super_block *sb)
{

 return !!sb->s_encoding;



}

int may_setattr(struct mnt_idmap *idmap, struct inode *inode,
  unsigned int ia_valid);
int setattr_prepare(struct mnt_idmap *, struct dentry *, struct iattr *);
extern int inode_newsize_ok(const struct inode *, loff_t offset);
void setattr_copy(struct mnt_idmap *, struct inode *inode,
    const struct iattr *attr);

extern int file_update_time(struct file *file);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_dax(const struct vm_area_struct *vma)
{
 return vma->vm_file && ((vma->vm_file->f_mapping->host)->i_flags & 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_fsdax(struct vm_area_struct *vma)
{
 struct inode *inode;

 if (!0 || !vma->vm_file)
  return false;
 if (!vma_is_dax(vma))
  return false;
 inode = file_inode(vma->vm_file);
 if ((((inode->i_mode) & 00170000) == 0020000))
  return false;
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int iocb_flags(struct file *file)
{
 int res = 0;
 if (file->f_flags & 00002000)
  res |= ( int) (( __kernel_rwf_t)0x00000010);
 if (file->f_flags & 00040000)
  res |= (1 << 17);
 if (file->f_flags & 00010000)
  res |= ( int) (( __kernel_rwf_t)0x00000002);
 if (file->f_flags & 04000000)
  res |= ( int) (( __kernel_rwf_t)0x00000004);
 return res;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags,
         int rw_type)
{
 int kiocb_flags = 0;


 do { __attribute__((__noreturn__)) extern void __compiletime_assert_231(void) __attribute__((__error__("BUILD_BUG_ON failed: " "(__force int) RWF_SUPPORTED & IOCB_EVENTFD"))); if (!(!(( int) ((( __kernel_rwf_t)0x00000001) | (( __kernel_rwf_t)0x00000002) | (( __kernel_rwf_t)0x00000004) | (( __kernel_rwf_t)0x00000008) | (( __kernel_rwf_t)0x00000010) | (( __kernel_rwf_t)0x00000020) | (( __kernel_rwf_t)0x00000040)) & (1 << 16)))) __compiletime_assert_231(); } while (0);

 if (!flags)
  return 0;
 if (__builtin_expect(!!(flags & ~((( __kernel_rwf_t)0x00000001) | (( __kernel_rwf_t)0x00000002) | (( __kernel_rwf_t)0x00000004) | (( __kernel_rwf_t)0x00000008) | (( __kernel_rwf_t)0x00000010) | (( __kernel_rwf_t)0x00000020) | (( __kernel_rwf_t)0x00000040))), 0))
  return -95;
 if (__builtin_expect(!!((flags & (( __kernel_rwf_t)0x00000010)) && (flags & (( __kernel_rwf_t)0x00000020))), 0))
  return -22;

 if (flags & (( __kernel_rwf_t)0x00000008)) {
  if (!(ki->ki_filp->f_mode & (( fmode_t)(1 << 27))))
   return -95;
  kiocb_flags |= (1 << 20);
 }
 if (flags & (( __kernel_rwf_t)0x00000040)) {
  if (rw_type != 1)
   return -95;
  if (!(ki->ki_filp->f_mode & (( fmode_t)(1 << 7))))
   return -95;
 }
 kiocb_flags |= ( int) (flags & ((( __kernel_rwf_t)0x00000001) | (( __kernel_rwf_t)0x00000002) | (( __kernel_rwf_t)0x00000004) | (( __kernel_rwf_t)0x00000008) | (( __kernel_rwf_t)0x00000010) | (( __kernel_rwf_t)0x00000020) | (( __kernel_rwf_t)0x00000040)));
 if (flags & (( __kernel_rwf_t)0x00000004))
  kiocb_flags |= ( int) (( __kernel_rwf_t)0x00000002);

 if ((flags & (( __kernel_rwf_t)0x00000020)) && (ki->ki_flags & ( int) (( __kernel_rwf_t)0x00000010))) {
  if (((file_inode(ki->ki_filp))->i_flags & (1 << 2)))
   return -1;
  ki->ki_flags &= ~( int) (( __kernel_rwf_t)0x00000010);
 }

 ki->ki_flags |= kiocb_flags;
 return 0;
}







struct simple_transaction_argresp {
 ssize_t size;
 char data[];
};



char *simple_transaction_get(struct file *file, const char *buf,
    size_t size);
ssize_t simple_transaction_read(struct file *file, char *buf,
    size_t size, loff_t *pos);
int simple_transaction_release(struct inode *inode, struct file *file);

void simple_transaction_set(struct file *file, size_t n);
# 3538 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__format__(printf, 1, 2)))
void __simple_attr_check_format(const char *fmt, ...)
{

}

int simple_attr_open(struct inode *inode, struct file *file,
       int (*get)(void *, u64 *), int (*set)(void *, u64),
       const char *fmt);
int simple_attr_release(struct inode *inode, struct file *file);
ssize_t simple_attr_read(struct file *file, char *buf,
    size_t len, loff_t *ppos);
ssize_t simple_attr_write(struct file *file, const char *buf,
     size_t len, loff_t *ppos);
ssize_t simple_attr_write_signed(struct file *file, const char *buf,
     size_t len, loff_t *ppos);

struct ctl_table;
int __attribute__((__section__(".init.text"))) __attribute__((__cold__)) list_bdev_fs_names(char *buf, size_t size);
# 3565 "../include/linux/fs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_sxid(umode_t mode)
{
 return mode & (0004000 | 0002000);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int check_sticky(struct mnt_idmap *idmap,
          struct inode *dir, struct inode *inode)
{
 if (!(dir->i_mode & 0001000))
  return 0;

 return __check_sticky(idmap, dir, inode);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_has_no_xattr(struct inode *inode)
{
 if (!is_sxid(inode->i_mode) && (inode->i_sb->s_flags & ((((1UL))) << (28))))
  inode->i_flags |= (1 << 12);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_root_inode(struct inode *inode)
{
 return inode == inode->i_sb->s_root->d_inode;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_emit(struct dir_context *ctx,
       const char *name, int namelen,
       u64 ino, unsigned type)
{
 return ctx->actor(ctx, name, namelen, ctx->pos, ino, type);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_emit_dot(struct file *file, struct dir_context *ctx)
{
 return ctx->actor(ctx, ".", 1, ctx->pos,
     file->f_path.dentry->d_inode->i_ino, 4);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_emit_dotdot(struct file *file, struct dir_context *ctx)
{
 return ctx->actor(ctx, "..", 2, ctx->pos,
     d_parent_ino(file->f_path.dentry), 4);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_emit_dots(struct file *file, struct dir_context *ctx)
{
 if (ctx->pos == 0) {
  if (!dir_emit_dot(file, ctx))
   return false;
  ctx->pos = 1;
 }
 if (ctx->pos == 1) {
  if (!dir_emit_dotdot(file, ctx))
   return false;
  ctx->pos = 2;
 }
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_relax(struct inode *inode)
{
 inode_unlock(inode);
 inode_lock(inode);
 return !((inode)->i_flags & (1 << 4));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dir_relax_shared(struct inode *inode)
{
 inode_unlock_shared(inode);
 inode_lock_shared(inode);
 return !((inode)->i_flags & (1 << 4));
}

extern bool path_noexec(const struct path *path);
extern void inode_nohighmem(struct inode *inode);


extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
         int advice);
extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
      int advice);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vfs_empty_path(int dfd, const char *path)
{
 char c;

 if (dfd < 0)
  return false;


 if (!path)
  return true;

 if (__builtin_expect(!!(({ const void *__p = (path); __might_fault("include/linux/fs.h", 3654); __builtin_expect(!!(__access_ok(__p, sizeof(*path))), 1) ? ({ int __gu_err = -14; (void)0; switch (sizeof(*((__typeof__(*(path)) *)__p))) { case 1: { unsigned char __x = 0; __gu_err = __get_user_fn(sizeof (*((__typeof__(*(path)) *)__p)), (__typeof__(*(path)) *)__p, &__x); ((c)) = *( __typeof__(*((__typeof__(*(path)) *)__p)) *) &__x; break; }; case 2: { unsigned short __x = 0; __gu_err = __get_user_fn(sizeof (*((__typeof__(*(path)) *)__p)), (__typeof__(*(path)) *)__p, &__x); ((c)) = *( __typeof__(*((__typeof__(*(path)) *)__p)) *) &__x; break; }; case 4: { unsigned int __x = 0; __gu_err = __get_user_fn(sizeof (*((__typeof__(*(path)) *)__p)), (__typeof__(*(path)) *)__p, &__x); ((c)) = *( __typeof__(*((__typeof__(*(path)) *)__p)) *) &__x; break; }; case 8: { unsigned long long __x = 0; __gu_err = __get_user_fn(sizeof (*((__typeof__(*(path)) *)__p)), (__typeof__(*(path)) *)__p, &__x); ((c)) = *( __typeof__(*((__typeof__(*(path)) *)__p)) *) &__x; break; }; default: __get_user_bad(); break; } __gu_err; }) : ((c) = (__typeof__(*(path)))0,-14); })), 0))
  return false;

 return !c;
}

bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
# 18 "../include/linux/compat.h" 2
# 1 "../include/uapi/linux/aio_abi.h" 1
# 34 "../include/uapi/linux/aio_abi.h"
typedef __kernel_ulong_t aio_context_t;

enum {
 IOCB_CMD_PREAD = 0,
 IOCB_CMD_PWRITE = 1,
 IOCB_CMD_FSYNC = 2,
 IOCB_CMD_FDSYNC = 3,

 IOCB_CMD_POLL = 5,
 IOCB_CMD_NOOP = 6,
 IOCB_CMD_PREADV = 7,
 IOCB_CMD_PWRITEV = 8,
};
# 60 "../include/uapi/linux/aio_abi.h"
struct io_event {
 __u64 data;
 __u64 obj;
 __s64 res;
 __s64 res2;
};







struct iocb {

 __u64 aio_data;


 __u32 aio_key;
 __kernel_rwf_t aio_rw_flags;
# 88 "../include/uapi/linux/aio_abi.h"
 __u16 aio_lio_opcode;
 __s16 aio_reqprio;
 __u32 aio_fildes;

 __u64 aio_buf;
 __u64 aio_nbytes;
 __s64 aio_offset;


 __u64 aio_reserved2;


 __u32 aio_flags;





 __u32 aio_resfd;
};
# 19 "../include/linux/compat.h" 2

# 1 "../include/uapi/linux/unistd.h" 1







# 1 "../arch/hexagon/include/asm/unistd.h" 1
# 10 "../arch/hexagon/include/asm/unistd.h"
# 1 "../arch/hexagon/include/uapi/asm/unistd.h" 1
# 30 "../arch/hexagon/include/uapi/asm/unistd.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/unistd_32.h" 1
# 31 "../arch/hexagon/include/uapi/asm/unistd.h" 2
# 11 "../arch/hexagon/include/asm/unistd.h" 2
# 9 "../include/uapi/linux/unistd.h" 2
# 21 "../include/linux/compat.h" 2

# 1 "./arch/hexagon/include/generated/asm/compat.h" 1
# 1 "../include/asm-generic/compat.h" 1
# 30 "../include/asm-generic/compat.h"
typedef u32 compat_size_t;
typedef s32 compat_ssize_t;
typedef s32 compat_clock_t;
typedef s32 compat_pid_t;
typedef u32 compat_ino_t;
typedef s32 compat_off_t;
typedef s64 compat_loff_t;
typedef s32 compat_daddr_t;
typedef s32 compat_timer_t;
typedef s32 compat_key_t;
typedef s16 compat_short_t;
typedef s32 compat_int_t;
typedef s32 compat_long_t;
typedef u16 compat_ushort_t;
typedef u32 compat_uint_t;
typedef u32 compat_ulong_t;
typedef u32 compat_uptr_t;
typedef u32 compat_caddr_t;
typedef u32 compat_aio_context_t;
typedef u32 compat_old_sigset_t;


typedef u32 __compat_uid_t;
typedef u32 __compat_gid_t;



typedef u32 __compat_uid32_t;
typedef u32 __compat_gid32_t;



typedef u32 compat_mode_t;






typedef s64 compat_s64;
typedef u64 compat_u64;



typedef u32 compat_sigset_word;





typedef u32 compat_dev_t;



typedef s32 compat_ipc_pid_t;



typedef __kernel_fsid_t compat_fsid_t;



struct compat_statfs {
 compat_int_t f_type;
 compat_int_t f_bsize;
 compat_int_t f_blocks;
 compat_int_t f_bfree;
 compat_int_t f_bavail;
 compat_int_t f_files;
 compat_int_t f_ffree;
 compat_fsid_t f_fsid;
 compat_int_t f_namelen;
 compat_int_t f_frsize;
 compat_int_t f_flags;
 compat_int_t f_spare[4];
};



struct compat_ipc64_perm {
 compat_key_t key;
 __compat_uid32_t uid;
 __compat_gid32_t gid;
 __compat_uid32_t cuid;
 __compat_gid32_t cgid;
 compat_mode_t mode;
 unsigned char __pad1[4 - sizeof(compat_mode_t)];
 compat_ushort_t seq;
 compat_ushort_t __pad2;
 compat_ulong_t unused1;
 compat_ulong_t unused2;
};

struct compat_semid64_ds {
 struct compat_ipc64_perm sem_perm;
 compat_ulong_t sem_otime;
 compat_ulong_t sem_otime_high;
 compat_ulong_t sem_ctime;
 compat_ulong_t sem_ctime_high;
 compat_ulong_t sem_nsems;
 compat_ulong_t __unused3;
 compat_ulong_t __unused4;
};

struct compat_msqid64_ds {
 struct compat_ipc64_perm msg_perm;
 compat_ulong_t msg_stime;
 compat_ulong_t msg_stime_high;
 compat_ulong_t msg_rtime;
 compat_ulong_t msg_rtime_high;
 compat_ulong_t msg_ctime;
 compat_ulong_t msg_ctime_high;
 compat_ulong_t msg_cbytes;
 compat_ulong_t msg_qnum;
 compat_ulong_t msg_qbytes;
 compat_pid_t msg_lspid;
 compat_pid_t msg_lrpid;
 compat_ulong_t __unused4;
 compat_ulong_t __unused5;
};

struct compat_shmid64_ds {
 struct compat_ipc64_perm shm_perm;
 compat_size_t shm_segsz;
 compat_ulong_t shm_atime;
 compat_ulong_t shm_atime_high;
 compat_ulong_t shm_dtime;
 compat_ulong_t shm_dtime_high;
 compat_ulong_t shm_ctime;
 compat_ulong_t shm_ctime_high;
 compat_pid_t shm_cpid;
 compat_pid_t shm_lpid;
 compat_ulong_t shm_nattch;
 compat_ulong_t __unused4;
 compat_ulong_t __unused5;
};
# 2 "./arch/hexagon/include/generated/asm/compat.h" 2
# 23 "../include/linux/compat.h" 2
# 1 "./arch/hexagon/include/generated/uapi/asm/siginfo.h" 1
# 24 "../include/linux/compat.h" 2
# 90 "../include/linux/compat.h"
struct compat_iovec {
 compat_uptr_t iov_base;
 compat_size_t iov_len;
};





typedef struct compat_sigaltstack {
 compat_uptr_t ss_sp;
 int ss_flags;
 compat_size_t ss_size;
} compat_stack_t;
# 112 "../include/linux/compat.h"
typedef __compat_uid32_t compat_uid_t;
typedef __compat_gid32_t compat_gid_t;

struct compat_sel_arg_struct;
struct rusage;

struct old_itimerval32;

struct compat_tms {
 compat_clock_t tms_utime;
 compat_clock_t tms_stime;
 compat_clock_t tms_cutime;
 compat_clock_t tms_cstime;
};



typedef struct {
 compat_sigset_word sig[(64 / 32)];
} compat_sigset_t;

int set_compat_user_sigmask(const compat_sigset_t *umask,
       size_t sigsetsize);

struct compat_sigaction {

 compat_uptr_t sa_handler;
 compat_ulong_t sa_flags;







 compat_sigset_t sa_mask __attribute__((__packed__));
};

typedef union compat_sigval {
 compat_int_t sival_int;
 compat_uptr_t sival_ptr;
} compat_sigval_t;

typedef struct compat_siginfo {
 int si_signo;

 int si_errno;
 int si_code;





 union {
  int _pad[128/sizeof(int) - 3];


  struct {
   compat_pid_t _pid;
   __compat_uid32_t _uid;
  } _kill;


  struct {
   compat_timer_t _tid;
   int _overrun;
   compat_sigval_t _sigval;
  } _timer;


  struct {
   compat_pid_t _pid;
   __compat_uid32_t _uid;
   compat_sigval_t _sigval;
  } _rt;


  struct {
   compat_pid_t _pid;
   __compat_uid32_t _uid;
   int _status;
   compat_clock_t _utime;
   compat_clock_t _stime;
  } _sigchld;
# 209 "../include/linux/compat.h"
  struct {
   compat_uptr_t _addr;


   union {

    int _trapno;




    short int _addr_lsb;

    struct {
     char _dummy_bnd[(__alignof__(compat_uptr_t) < sizeof(short) ? sizeof(short) : __alignof__(compat_uptr_t))];
     compat_uptr_t _lower;
     compat_uptr_t _upper;
    } _addr_bnd;

    struct {
     char _dummy_pkey[(__alignof__(compat_uptr_t) < sizeof(short) ? sizeof(short) : __alignof__(compat_uptr_t))];
     u32 _pkey;
    } _addr_pkey;

    struct {
     compat_ulong_t _data;
     u32 _type;
     u32 _flags;
    } _perf;
   };
  } _sigfault;


  struct {
   compat_long_t _band;
   int _fd;
  } _sigpoll;

  struct {
   compat_uptr_t _call_addr;
   int _syscall;
   unsigned int _arch;
  } _sigsys;
 } _sifields;
} compat_siginfo_t;

struct compat_rlimit {
 compat_ulong_t rlim_cur;
 compat_ulong_t rlim_max;
};







struct compat_flock {
 short l_type;
 short l_whence;
 compat_off_t l_start;
 compat_off_t l_len;



 compat_pid_t l_pid;



};

struct compat_flock64 {
 short l_type;
 short l_whence;
 compat_loff_t l_start;
 compat_loff_t l_len;
 compat_pid_t l_pid;



} ;

struct compat_rusage {
 struct old_timeval32 ru_utime;
 struct old_timeval32 ru_stime;
 compat_long_t ru_maxrss;
 compat_long_t ru_ixrss;
 compat_long_t ru_idrss;
 compat_long_t ru_isrss;
 compat_long_t ru_minflt;
 compat_long_t ru_majflt;
 compat_long_t ru_nswap;
 compat_long_t ru_inblock;
 compat_long_t ru_oublock;
 compat_long_t ru_msgsnd;
 compat_long_t ru_msgrcv;
 compat_long_t ru_nsignals;
 compat_long_t ru_nvcsw;
 compat_long_t ru_nivcsw;
};

extern int put_compat_rusage(const struct rusage *,
        struct compat_rusage *);

struct compat_siginfo;
struct __compat_aio_sigset;

struct compat_dirent {
 u32 d_ino;
 compat_off_t d_off;
 u16 d_reclen;
 char d_name[256];
};

struct compat_ustat {
 compat_daddr_t f_tfree;
 compat_ino_t f_tinode;
 char f_fname[6];
 char f_fpack[6];
};



typedef struct compat_sigevent {
 compat_sigval_t sigev_value;
 compat_int_t sigev_signo;
 compat_int_t sigev_notify;
 union {
  compat_int_t _pad[((64/sizeof(int)) - 3)];
  compat_int_t _tid;

  struct {
   compat_uptr_t _function;
   compat_uptr_t _attribute;
  } _sigev_thread;
 } _sigev_un;
} compat_sigevent_t;

struct compat_ifmap {
 compat_ulong_t mem_start;
 compat_ulong_t mem_end;
 unsigned short base_addr;
 unsigned char irq;
 unsigned char dma;
 unsigned char port;
};

struct compat_if_settings {
 unsigned int type;
 unsigned int size;
 compat_uptr_t ifs_ifsu;
};

struct compat_ifreq {
 union {
  char ifrn_name[16];
 } ifr_ifrn;
 union {
  struct sockaddr ifru_addr;
  struct sockaddr ifru_dstaddr;
  struct sockaddr ifru_broadaddr;
  struct sockaddr ifru_netmask;
  struct sockaddr ifru_hwaddr;
  short ifru_flags;
  compat_int_t ifru_ivalue;
  compat_int_t ifru_mtu;
  struct compat_ifmap ifru_map;
  char ifru_slave[16];
  char ifru_newname[16];
  compat_caddr_t ifru_data;
  struct compat_if_settings ifru_settings;
 } ifr_ifru;
};

struct compat_ifconf {
 compat_int_t ifc_len;
 compat_caddr_t ifcbuf;
};

struct compat_robust_list {
 compat_uptr_t next;
};

struct compat_robust_list_head {
 struct compat_robust_list list;
 compat_long_t futex_offset;
 compat_uptr_t list_op_pending;
};
# 407 "../include/linux/compat.h"
struct compat_keyctl_kdf_params {
 compat_uptr_t hashname;
 compat_uptr_t otherinfo;
 __u32 otherinfolen;
 __u32 __spare[8];
};

struct compat_stat;
struct compat_statfs;
struct compat_statfs64;
struct compat_old_linux_dirent;
struct compat_linux_dirent;
struct linux_dirent64;
struct compat_msghdr;
struct compat_mmsghdr;
struct compat_sysinfo;
struct compat_sysctl_args;
struct compat_kexec_segment;
struct compat_mq_attr;
struct compat_msgbuf;

void copy_siginfo_to_external32(struct compat_siginfo *to,
  const struct kernel_siginfo *from);
int copy_siginfo_from_user32(kernel_siginfo_t *to,
  const struct compat_siginfo *from);
int __copy_siginfo_to_user32(struct compat_siginfo *to,
  const kernel_siginfo_t *from);



int get_compat_sigevent(struct sigevent *event,
  const struct compat_sigevent *u_event);

extern int get_compat_sigset(sigset_t *set, const compat_sigset_t *compat);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
put_compat_sigset(compat_sigset_t *compat, const sigset_t *set,
    unsigned int size)
{
# 464 "../include/linux/compat.h"
 return copy_to_user(compat, set, size) ? -14 : 0;

}
# 535 "../include/linux/compat.h"
extern int compat_ptrace_request(struct task_struct *child,
     compat_long_t request,
     compat_ulong_t addr, compat_ulong_t data);

extern long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
          compat_ulong_t addr, compat_ulong_t data);

struct epoll_event;

int compat_restore_altstack(const compat_stack_t *uss);
int __compat_save_altstack(compat_stack_t *, unsigned long);
# 569 "../include/linux/compat.h"
           long compat_sys_io_setup(unsigned nr_reqs, u32 *ctx32p);
           long compat_sys_io_submit(compat_aio_context_t ctx_id, int nr,
         u32 *iocb);
           long compat_sys_io_pgetevents(compat_aio_context_t ctx_id,
     compat_long_t min_nr,
     compat_long_t nr,
     struct io_event *events,
     struct old_timespec32 *timeout,
     const struct __compat_aio_sigset *usig);
           long compat_sys_io_pgetevents_time64(compat_aio_context_t ctx_id,
     compat_long_t min_nr,
     compat_long_t nr,
     struct io_event *events,
     struct __kernel_timespec *timeout,
     const struct __compat_aio_sigset *usig);
           long compat_sys_epoll_pwait(int epfd,
   struct epoll_event *events,
   int maxevents, int timeout,
   const compat_sigset_t *sigmask,
   compat_size_t sigsetsize);
           long compat_sys_epoll_pwait2(int epfd,
   struct epoll_event *events,
   int maxevents,
   const struct __kernel_timespec *timeout,
   const compat_sigset_t *sigmask,
   compat_size_t sigsetsize);
           long compat_sys_fcntl(unsigned int fd, unsigned int cmd,
     compat_ulong_t arg);
           long compat_sys_fcntl64(unsigned int fd, unsigned int cmd,
       compat_ulong_t arg);
           long compat_sys_ioctl(unsigned int fd, unsigned int cmd,
     compat_ulong_t arg);
           long compat_sys_statfs(const char *pathname,
      struct compat_statfs *buf);
           long compat_sys_statfs64(const char *pathname,
        compat_size_t sz,
        struct compat_statfs64 *buf);
           long compat_sys_fstatfs(unsigned int fd,
       struct compat_statfs *buf);
           long compat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
         struct compat_statfs64 *buf);
           long compat_sys_truncate(const char *, compat_off_t);
           long compat_sys_ftruncate(unsigned int, compat_off_t);

           long compat_sys_openat(int dfd, const char *filename,
      int flags, umode_t mode);
           long compat_sys_getdents(unsigned int fd,
        struct compat_linux_dirent *dirent,
        unsigned int count);
           long compat_sys_lseek(unsigned int, compat_off_t, unsigned int);

           ssize_t compat_sys_preadv(compat_ulong_t fd,
  const struct iovec *vec,
  compat_ulong_t vlen, u32 pos_low, u32 pos_high);
           ssize_t compat_sys_pwritev(compat_ulong_t fd,
  const struct iovec *vec,
  compat_ulong_t vlen, u32 pos_low, u32 pos_high);
# 637 "../include/linux/compat.h"
           long compat_sys_sendfile(int out_fd, int in_fd,
        compat_off_t *offset, compat_size_t count);
           long compat_sys_sendfile64(int out_fd, int in_fd,
        compat_loff_t *offset, compat_size_t count);
           long compat_sys_pselect6_time32(int n, compat_ulong_t *inp,
        compat_ulong_t *outp,
        compat_ulong_t *exp,
        struct old_timespec32 *tsp,
        void *sig);
           long compat_sys_pselect6_time64(int n, compat_ulong_t *inp,
        compat_ulong_t *outp,
        compat_ulong_t *exp,
        struct __kernel_timespec *tsp,
        void *sig);
           long compat_sys_ppoll_time32(struct pollfd *ufds,
     unsigned int nfds,
     struct old_timespec32 *tsp,
     const compat_sigset_t *sigmask,
     compat_size_t sigsetsize);
           long compat_sys_ppoll_time64(struct pollfd *ufds,
     unsigned int nfds,
     struct __kernel_timespec *tsp,
     const compat_sigset_t *sigmask,
     compat_size_t sigsetsize);
           long compat_sys_signalfd4(int ufd,
         const compat_sigset_t *sigmask,
         compat_size_t sigsetsize, int flags);
           long compat_sys_newfstatat(unsigned int dfd,
          const char *filename,
          struct compat_stat *statbuf,
          int flag);
           long compat_sys_newfstat(unsigned int fd,
        struct compat_stat *statbuf);

           long compat_sys_waitid(int, compat_pid_t,
  struct compat_siginfo *, int,
  struct compat_rusage *);
           long
compat_sys_set_robust_list(struct compat_robust_list_head *head,
      compat_size_t len);
           long
compat_sys_get_robust_list(int pid, compat_uptr_t *head_ptr,
      compat_size_t *len_ptr);
           long compat_sys_getitimer(int which,
         struct old_itimerval32 *it);
           long compat_sys_setitimer(int which,
         struct old_itimerval32 *in,
         struct old_itimerval32 *out);
           long compat_sys_kexec_load(compat_ulong_t entry,
          compat_ulong_t nr_segments,
          struct compat_kexec_segment *,
          compat_ulong_t flags);
           long compat_sys_timer_create(clockid_t which_clock,
   struct compat_sigevent *timer_event_spec,
   timer_t *created_timer_id);
           long compat_sys_ptrace(compat_long_t request, compat_long_t pid,
      compat_long_t addr, compat_long_t data);
           long compat_sys_sched_setaffinity(compat_pid_t pid,
         unsigned int len,
         compat_ulong_t *user_mask_ptr);
           long compat_sys_sched_getaffinity(compat_pid_t pid,
         unsigned int len,
         compat_ulong_t *user_mask_ptr);
           long compat_sys_sigaltstack(const compat_stack_t *uss_ptr,
           compat_stack_t *uoss_ptr);
           long compat_sys_rt_sigsuspend(compat_sigset_t *unewset,
      compat_size_t sigsetsize);

           long compat_sys_rt_sigaction(int,
     const struct compat_sigaction *,
     struct compat_sigaction *,
     compat_size_t);

           long compat_sys_rt_sigprocmask(int how, compat_sigset_t *set,
       compat_sigset_t *oset,
       compat_size_t sigsetsize);
           long compat_sys_rt_sigpending(compat_sigset_t *uset,
      compat_size_t sigsetsize);
           long compat_sys_rt_sigtimedwait_time32(compat_sigset_t *uthese,
  struct compat_siginfo *uinfo,
  struct old_timespec32 *uts, compat_size_t sigsetsize);
           long compat_sys_rt_sigtimedwait_time64(compat_sigset_t *uthese,
  struct compat_siginfo *uinfo,
  struct __kernel_timespec *uts, compat_size_t sigsetsize);
           long compat_sys_rt_sigqueueinfo(compat_pid_t pid, int sig,
    struct compat_siginfo *uinfo);

           long compat_sys_times(struct compat_tms *tbuf);
           long compat_sys_getrlimit(unsigned int resource,
         struct compat_rlimit *rlim);
           long compat_sys_setrlimit(unsigned int resource,
         struct compat_rlimit *rlim);
           long compat_sys_getrusage(int who, struct compat_rusage *ru);
           long compat_sys_gettimeofday(struct old_timeval32 *tv,
  struct timezone *tz);
           long compat_sys_settimeofday(struct old_timeval32 *tv,
  struct timezone *tz);
           long compat_sys_sysinfo(struct compat_sysinfo *info);
           long compat_sys_mq_open(const char *u_name,
   int oflag, compat_mode_t mode,
   struct compat_mq_attr *u_attr);
           long compat_sys_mq_notify(mqd_t mqdes,
   const struct compat_sigevent *u_notification);
           long compat_sys_mq_getsetattr(mqd_t mqdes,
   const struct compat_mq_attr *u_mqstat,
   struct compat_mq_attr *u_omqstat);
           long compat_sys_msgctl(int first, int second, void *uptr);
           long compat_sys_msgrcv(int msqid, compat_uptr_t msgp,
  compat_ssize_t msgsz, compat_long_t msgtyp, int msgflg);
           long compat_sys_msgsnd(int msqid, compat_uptr_t msgp,
  compat_ssize_t msgsz, int msgflg);
           long compat_sys_semctl(int semid, int semnum, int cmd, int arg);
           long compat_sys_shmctl(int first, int second, void *uptr);
           long compat_sys_shmat(int shmid, compat_uptr_t shmaddr, int shmflg);
           long compat_sys_recvfrom(int fd, void *buf, compat_size_t len,
       unsigned flags, struct sockaddr *addr,
       int *addrlen);
           long compat_sys_sendmsg(int fd, struct compat_msghdr *msg,
       unsigned flags);
           long compat_sys_recvmsg(int fd, struct compat_msghdr *msg,
       unsigned int flags);

           long compat_sys_keyctl(u32 option,
         u32 arg2, u32 arg3, u32 arg4, u32 arg5);
           long compat_sys_execve(const char *filename, const compat_uptr_t *argv,
       const compat_uptr_t *envp);


           long compat_sys_rt_tgsigqueueinfo(compat_pid_t tgid,
     compat_pid_t pid, int sig,
     struct compat_siginfo *uinfo);
           long compat_sys_recvmmsg_time64(int fd, struct compat_mmsghdr *mmsg,
        unsigned vlen, unsigned int flags,
        struct __kernel_timespec *timeout);
           long compat_sys_recvmmsg_time32(int fd, struct compat_mmsghdr *mmsg,
        unsigned vlen, unsigned int flags,
        struct old_timespec32 *timeout);
           long compat_sys_wait4(compat_pid_t pid,
     compat_uint_t *stat_addr, int options,
     struct compat_rusage *ru);
           long compat_sys_fanotify_mark(int, unsigned int, __u32, __u32,
         int, const char *);
           long compat_sys_open_by_handle_at(int mountdirfd,
          struct file_handle *handle,
          int flags);
           long compat_sys_sendmmsg(int fd, struct compat_mmsghdr *mmsg,
        unsigned vlen, unsigned int flags);
           long compat_sys_execveat(int dfd, const char *filename,
       const compat_uptr_t *argv,
       const compat_uptr_t *envp, int flags);
           ssize_t compat_sys_preadv2(compat_ulong_t fd,
  const struct iovec *vec,
  compat_ulong_t vlen, u32 pos_low, u32 pos_high, rwf_t flags);
           ssize_t compat_sys_pwritev2(compat_ulong_t fd,
  const struct iovec *vec,
  compat_ulong_t vlen, u32 pos_low, u32 pos_high, rwf_t flags);
# 812 "../include/linux/compat.h"
           long compat_sys_open(const char *filename, int flags,
    umode_t mode);


           long compat_sys_signalfd(int ufd,
        const compat_sigset_t *sigmask,
        compat_size_t sigsetsize);


           long compat_sys_newstat(const char *filename,
       struct compat_stat *statbuf);
           long compat_sys_newlstat(const char *filename,
        struct compat_stat *statbuf);


           long compat_sys_select(int n, compat_ulong_t *inp,
  compat_ulong_t *outp, compat_ulong_t *exp,
  struct old_timeval32 *tvp);
           long compat_sys_ustat(unsigned dev, struct compat_ustat *u32);
           long compat_sys_recv(int fd, void *buf, compat_size_t len,
    unsigned flags);


           long compat_sys_old_readdir(unsigned int fd,
           struct compat_old_linux_dirent *,
           unsigned int count);


           long compat_sys_old_select(struct compat_sel_arg_struct *arg);


           long compat_sys_ipc(u32, int, int, u32, compat_uptr_t, u32);
# 861 "../include/linux/compat.h"
           long compat_sys_socketcall(int call, u32 *args);
# 908 "../include/linux/compat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct old_timeval32 ns_to_old_timeval32(s64 nsec)
{
 struct __kernel_old_timeval tv;
 struct old_timeval32 ctv;

 tv = ns_to_kernel_old_timeval(nsec);
 ctv.tv_sec = tv.tv_sec;
 ctv.tv_usec = tv.tv_usec;

 return ctv;
}







int kcompat_sys_statfs64(const char * pathname, compat_size_t sz,
       struct compat_statfs64 * buf);
int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
     struct compat_statfs64 * buf);
# 947 "../include/linux/compat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool in_compat_syscall(void) { return false; }







long compat_get_bitmap(unsigned long *mask, const compat_ulong_t *umask,
         unsigned long bitmap_size);
long compat_put_bitmap(compat_ulong_t *umask, unsigned long *mask,
         unsigned long bitmap_size);
# 976 "../include/linux/compat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *compat_ptr(compat_uptr_t uptr)
{
 return (void *)(unsigned long)uptr;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) compat_uptr_t ptr_to_compat(void *uptr)
{
 return (u32)(unsigned long)uptr;
}
# 18 "../include/linux/ethtool.h" 2
# 1 "../include/linux/if_ether.h" 1
# 19 "../include/linux/if_ether.h"
# 1 "../include/linux/skbuff.h" 1
# 17 "../include/linux/skbuff.h"
# 1 "../include/linux/bvec.h" 1
# 10 "../include/linux/bvec.h"
# 1 "../include/linux/highmem.h" 1







# 1 "../include/linux/cacheflush.h" 1




# 1 "../arch/hexagon/include/asm/cacheflush.h" 1
# 31 "../arch/hexagon/include/asm/cacheflush.h"
extern void flush_dcache_range(unsigned long start, unsigned long end);





extern void flush_icache_range(unsigned long start, unsigned long end);
# 50 "../arch/hexagon/include/asm/cacheflush.h"
extern void flush_cache_all_hexagon(void);
# 61 "../arch/hexagon/include/asm/cacheflush.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_mmu_cache_range(struct vm_fault *vmf,
  struct vm_area_struct *vma, unsigned long address,
  pte_t *ptep, unsigned int nr)
{

}




void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
         unsigned long vaddr, void *dst, void *src, int len);





extern void hexagon_inv_dcache_range(unsigned long start, unsigned long end);
extern void hexagon_clean_dcache_range(unsigned long start, unsigned long end);

# 1 "../include/asm-generic/cacheflush.h" 1






struct mm_struct;
struct vm_area_struct;
struct page;
struct address_space;






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_all(void)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_mm(struct mm_struct *mm)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_dup_mm(struct mm_struct *mm)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_range(struct vm_area_struct *vma,
         unsigned long start,
         unsigned long end)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_page(struct vm_area_struct *vma,
        unsigned long vmaddr,
        unsigned long pfn)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_dcache_page(struct page *page)
{
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_dcache_mmap_lock(struct address_space *mapping)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_dcache_mmap_unlock(struct address_space *mapping)
{
}
# 81 "../include/asm-generic/cacheflush.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_icache_user_page(struct vm_area_struct *vma,
        struct page *page,
        unsigned long addr, int len)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_vmap(unsigned long start, unsigned long end)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_vmap_early(unsigned long start, unsigned long end)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_cache_vunmap(unsigned long start, unsigned long end)
{
}
# 82 "../arch/hexagon/include/asm/cacheflush.h" 2
# 6 "../include/linux/cacheflush.h" 2

struct folio;






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_dcache_folio(struct folio *folio)
{
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_icache_pages(struct vm_area_struct *vma,
         struct page *page, unsigned int nr)
{
}
# 9 "../include/linux/highmem.h" 2
# 1 "../include/linux/kmsan.h" 1
# 12 "../include/linux/kmsan.h"
# 1 "../include/linux/dma-direction.h" 1




enum dma_data_direction {
 DMA_BIDIRECTIONAL = 0,
 DMA_TO_DEVICE = 1,
 DMA_FROM_DEVICE = 2,
 DMA_NONE = 3,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int valid_dma_direction(enum dma_data_direction dir)
{
 return dir == DMA_BIDIRECTIONAL || dir == DMA_TO_DEVICE ||
  dir == DMA_FROM_DEVICE;
}
# 13 "../include/linux/kmsan.h" 2




struct page;
struct kmem_cache;
struct task_struct;
struct scatterlist;
struct urb;
# 296 "../include/linux/kmsan.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_init_shadow(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_init_runtime(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __attribute__((__warn_unused_result__)) kmsan_memblock_free_pages(struct page *page,
         unsigned int order)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_task_create(struct task_struct *task)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_task_exit(struct task_struct *task)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_alloc_page(struct page *page, unsigned int order,
        gfp_t flags)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_free_page(struct page *page, unsigned int order)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_copy_page_meta(struct page *dst, struct page *src)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_slab_alloc(struct kmem_cache *s, void *object,
        gfp_t flags)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_slab_free(struct kmem_cache *s, void *object)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_kmalloc_large(const void *ptr, size_t size,
           gfp_t flags)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_kfree_large(const void *ptr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kmsan_vmap_pages_range_noflush(
 unsigned long start, unsigned long end, pgprot_t prot,
 struct page **pages, unsigned int page_shift)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_vunmap_range_noflush(unsigned long start,
           unsigned long end)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) kmsan_ioremap_page_range(unsigned long start,
       unsigned long end,
       phys_addr_t phys_addr,
       pgprot_t prot,
       unsigned int page_shift)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_iounmap_page_range(unsigned long start,
         unsigned long end)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_handle_dma(struct page *page, size_t offset,
        size_t size, enum dma_data_direction dir)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
           enum dma_data_direction dir)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_handle_urb(const struct urb *urb, bool is_out)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_unpoison_entry_regs(const struct pt_regs *regs)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_enable_current(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmsan_disable_current(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *memset_no_sanitize_memory(void *s, int c, size_t n)
{
 return memset(s, c, n);
}
# 10 "../include/linux/highmem.h" 2
# 1 "../include/linux/mm.h" 1







# 1 "../include/linux/pgalloc_tag.h" 1
# 127 "../include/linux/pgalloc_tag.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) union codetag_ref *get_page_tag_ref(struct page *page) { return ((void *)0); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_page_tag_ref(union codetag_ref *ref) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgalloc_tag_add(struct page *page, struct task_struct *task,
       unsigned int nr) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgalloc_tag_split(struct page *page, unsigned int nr) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct alloc_tag *pgalloc_tag_get(struct page *page) { return ((void *)0); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
# 9 "../include/linux/mm.h" 2







# 1 "../include/linux/mmap_lock.h" 1
# 14 "../include/linux/mmap_lock.h"
extern struct tracepoint __tracepoint_mmap_lock_start_locking;
extern struct tracepoint __tracepoint_mmap_lock_acquire_returned;
extern struct tracepoint __tracepoint_mmap_lock_released;



void __mmap_lock_do_trace_start_locking(struct mm_struct *mm, bool write);
void __mmap_lock_do_trace_acquire_returned(struct mm_struct *mm, bool write,
        bool success);
void __mmap_lock_do_trace_released(struct mm_struct *mm, bool write);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mmap_lock_trace_start_locking(struct mm_struct *mm,
         bool write)
{
 if (static_key_false(&(__tracepoint_mmap_lock_start_locking).key))
  __mmap_lock_do_trace_start_locking(mm, write);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mmap_lock_trace_acquire_returned(struct mm_struct *mm,
            bool write, bool success)
{
 if (static_key_false(&(__tracepoint_mmap_lock_acquire_returned).key))
  __mmap_lock_do_trace_acquire_returned(mm, write, success);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
{
 if (static_key_false(&(__tracepoint_mmap_lock_released).key))
  __mmap_lock_do_trace_released(mm, write);
}
# 63 "../include/linux/mmap_lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_assert_locked(const struct mm_struct *mm)
{
 rwsem_assert_held(&mm->mmap_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_assert_write_locked(const struct mm_struct *mm)
{
 rwsem_assert_held_write(&mm->mmap_lock);
}
# 95 "../include/linux/mmap_lock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_end_write_all(struct mm_struct *mm) {}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_init_lock(struct mm_struct *mm)
{
 do { static struct lock_class_key __key; __init_rwsem((&mm->mmap_lock), "&mm->mmap_lock", &__key); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_write_lock(struct mm_struct *mm)
{
 __mmap_lock_trace_start_locking(mm, true);
 down_write(&mm->mmap_lock);
 __mmap_lock_trace_acquire_returned(mm, true, true);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
{
 __mmap_lock_trace_start_locking(mm, true);
 down_write_nested(&mm->mmap_lock, subclass);
 __mmap_lock_trace_acquire_returned(mm, true, true);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mmap_write_lock_killable(struct mm_struct *mm)
{
 int ret;

 __mmap_lock_trace_start_locking(mm, true);
 ret = down_write_killable(&mm->mmap_lock);
 __mmap_lock_trace_acquire_returned(mm, true, ret == 0);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_write_unlock(struct mm_struct *mm)
{
 __mmap_lock_trace_released(mm, true);
 vma_end_write_all(mm);
 up_write(&mm->mmap_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_write_downgrade(struct mm_struct *mm)
{
 __mmap_lock_trace_acquire_returned(mm, false, true);
 vma_end_write_all(mm);
 downgrade_write(&mm->mmap_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_read_lock(struct mm_struct *mm)
{
 __mmap_lock_trace_start_locking(mm, false);
 down_read(&mm->mmap_lock);
 __mmap_lock_trace_acquire_returned(mm, false, true);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mmap_read_lock_killable(struct mm_struct *mm)
{
 int ret;

 __mmap_lock_trace_start_locking(mm, false);
 ret = down_read_killable(&mm->mmap_lock);
 __mmap_lock_trace_acquire_returned(mm, false, ret == 0);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mmap_read_trylock(struct mm_struct *mm)
{
 bool ret;

 __mmap_lock_trace_start_locking(mm, false);
 ret = down_read_trylock(&mm->mmap_lock) != 0;
 __mmap_lock_trace_acquire_returned(mm, false, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_read_unlock(struct mm_struct *mm)
{
 __mmap_lock_trace_released(mm, false);
 up_read(&mm->mmap_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmap_read_unlock_non_owner(struct mm_struct *mm)
{
 __mmap_lock_trace_released(mm, false);
 up_read_non_owner(&mm->mmap_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mmap_lock_is_contended(struct mm_struct *mm)
{
 return rwsem_is_contended(&mm->mmap_lock);
}
# 17 "../include/linux/mm.h" 2
# 1 "../include/linux/range.h" 1





struct range {
 u64 start;
 u64 end;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 range_len(const struct range *range)
{
 return range->end - range->start + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool range_contains(struct range *r1, struct range *r2)
{
 return r1->start <= r2->start && r1->end >= r2->end;
}

int add_range(struct range *range, int az, int nr_range,
  u64 start, u64 end);


int add_range_with_merge(struct range *range, int az, int nr_range,
    u64 start, u64 end);

void subtract_range(struct range *range, int az, u64 start, u64 end);

int clean_sort_range(struct range *range, int az);

void sort_range(struct range *range, int nr_range);
# 18 "../include/linux/mm.h" 2





# 1 "../include/linux/page_ext.h" 1







struct pglist_data;
# 24 "../include/linux/page_ext.h"
struct page_ext_operations {
 size_t offset;
 size_t size;
 bool (*need)(void);
 void (*init)(void);
 bool need_shared_flags;
};




enum page_ext_flags {
 PAGE_EXT_OWNER,
 PAGE_EXT_OWNER_ALLOCATED,

 PAGE_EXT_YOUNG,
 PAGE_EXT_IDLE,

};
# 51 "../include/linux/page_ext.h"
struct page_ext {
 unsigned long flags;
};

extern bool early_page_ext;
extern unsigned long page_ext_size;
extern void pgdat_page_ext_init(struct pglist_data *pgdat);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool early_page_ext_enabled(void)
{
 return early_page_ext;
}
# 73 "../include/linux/page_ext.h"
extern void page_ext_init_flatmem(void);
extern void page_ext_init_flatmem_late(void);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ext_init(void)
{
}


extern struct page_ext *page_ext_get(const struct page *page);
extern void page_ext_put(struct page_ext *page_ext);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *page_ext_data(struct page_ext *page_ext,
      struct page_ext_operations *ops)
{
 return (void *)(page_ext) + ops->offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page_ext *page_ext_next(struct page_ext *curr)
{
 void *next = curr;
 next += page_ext_size;
 return next;
}
# 24 "../include/linux/mm.h" 2


# 1 "../include/linux/page_ref.h" 1
# 10 "../include/linux/page_ref.h"
extern struct tracepoint __tracepoint_page_ref_set;
extern struct tracepoint __tracepoint_page_ref_mod;
extern struct tracepoint __tracepoint_page_ref_mod_and_test;
extern struct tracepoint __tracepoint_page_ref_mod_and_return;
extern struct tracepoint __tracepoint_page_ref_mod_unless;
extern struct tracepoint __tracepoint_page_ref_freeze;
extern struct tracepoint __tracepoint_page_ref_unfreeze;
# 41 "../include/linux/page_ref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_set(struct page *page, int v)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_mod(struct page *page, int v)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_mod_and_test(struct page *page, int v, int ret)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_mod_and_return(struct page *page, int v, int ret)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_mod_unless(struct page *page, int v, int u)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_freeze(struct page *page, int v, int ret)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __page_ref_unfreeze(struct page *page, int v)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_count(const struct page *page)
{
 return atomic_read(&page->_refcount);
}
# 87 "../include/linux/page_ref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_count(const struct folio *folio)
{
 return page_ref_count(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_count(const struct page *page)
{
 return folio_ref_count((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_count(struct page *page, int v)
{
 atomic_set(&page->_refcount, v);
 if (false)
  __page_ref_set(page, v);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_set_count(struct folio *folio, int v)
{
 set_page_count(&folio->page, v);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_page_count(struct page *page)
{
 set_page_count(page, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ref_add(struct page *page, int nr)
{
 atomic_add(nr, &page->_refcount);
 if (false)
  __page_ref_mod(page, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_ref_add(struct folio *folio, int nr)
{
 page_ref_add(&folio->page, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ref_sub(struct page *page, int nr)
{
 atomic_sub(nr, &page->_refcount);
 if (false)
  __page_ref_mod(page, -nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_ref_sub(struct folio *folio, int nr)
{
 page_ref_sub(&folio->page, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_sub_return(struct folio *folio, int nr)
{
 int ret = atomic_sub_return(nr, &folio->_refcount);

 if (false)
  __page_ref_mod_and_return(&folio->page, -nr, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ref_inc(struct page *page)
{
 atomic_inc(&page->_refcount);
 if (false)
  __page_ref_mod(page, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_ref_inc(struct folio *folio)
{
 page_ref_inc(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ref_dec(struct page *page)
{
 atomic_dec(&page->_refcount);
 if (false)
  __page_ref_mod(page, -1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_ref_dec(struct folio *folio)
{
 page_ref_dec(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_sub_and_test(struct page *page, int nr)
{
 int ret = atomic_sub_and_test(nr, &page->_refcount);

 if (false)
  __page_ref_mod_and_test(page, -nr, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_sub_and_test(struct folio *folio, int nr)
{
 return page_ref_sub_and_test(&folio->page, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_inc_return(struct page *page)
{
 int ret = atomic_inc_return(&page->_refcount);

 if (false)
  __page_ref_mod_and_return(page, 1, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_inc_return(struct folio *folio)
{
 return page_ref_inc_return(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_dec_and_test(struct page *page)
{
 int ret = atomic_dec_and_test(&page->_refcount);

 if (false)
  __page_ref_mod_and_test(page, -1, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_dec_and_test(struct folio *folio)
{
 return page_ref_dec_and_test(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_dec_return(struct page *page)
{
 int ret = atomic_dec_return(&page->_refcount);

 if (false)
  __page_ref_mod_and_return(page, -1, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_dec_return(struct folio *folio)
{
 return page_ref_dec_return(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_ref_add_unless(struct page *page, int nr, int u)
{
 bool ret = false;

 rcu_read_lock();

 if (!page_is_fake_head(page) && page_ref_count(page) != u)
  ret = atomic_add_unless(&page->_refcount, nr, u);
 rcu_read_unlock();

 if (false)
  __page_ref_mod_unless(page, nr, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_ref_add_unless(struct folio *folio, int nr, int u)
{
 return page_ref_add_unless(&folio->page, nr, u);
}
# 262 "../include/linux/page_ref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_try_get(struct folio *folio)
{
 return folio_ref_add_unless(folio, 1, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_ref_try_add(struct folio *folio, int count)
{
 return folio_ref_add_unless(folio, count, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_ref_freeze(struct page *page, int count)
{
 int ret = __builtin_expect(!!(atomic_cmpxchg(&page->_refcount, count, 0) == count), 1);

 if (false)
  __page_ref_freeze(page, count, ret);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_ref_freeze(struct folio *folio, int count)
{
 return page_ref_freeze(&folio->page, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_ref_unfreeze(struct page *page, int count)
{
 do { if (__builtin_expect(!!(page_count(page) != 0), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "page_count(page) != 0"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page_ref.h", 288, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0);
 do { if (__builtin_expect(!!(count == 0), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/page_ref.h", 289, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 atomic_set_release(&page->_refcount, count);
 if (false)
  __page_ref_unfreeze(page, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_ref_unfreeze(struct folio *folio, int count)
{
 page_ref_unfreeze(&folio->page, count);
}
# 27 "../include/linux/mm.h" 2

# 1 "../include/linux/sizes.h" 1
# 29 "../include/linux/mm.h" 2

# 1 "../include/linux/pgtable.h" 1





# 1 "../arch/hexagon/include/asm/pgtable.h" 1
# 15 "../arch/hexagon/include/asm/pgtable.h"
# 1 "../include/asm-generic/pgtable-nopmd.h" 1






# 1 "../include/asm-generic/pgtable-nopud.h" 1






# 1 "../include/asm-generic/pgtable-nop4d.h" 1








typedef struct { pgd_t pgd; } p4d_t;
# 21 "../include/asm-generic/pgtable-nop4d.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_none(pgd_t pgd) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_bad(pgd_t pgd) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_present(pgd_t pgd) { return 1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pgd_clear(pgd_t *pgd) { }
# 35 "../include/asm-generic/pgtable-nop4d.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
{
 return (p4d_t *)pgd;
}
# 8 "../include/asm-generic/pgtable-nopud.h" 2








typedef struct { p4d_t p4d; } pud_t;
# 28 "../include/asm-generic/pgtable-nopud.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_none(p4d_t p4d) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_bad(p4d_t p4d) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_present(p4d_t p4d) { return 1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void p4d_clear(p4d_t *p4d) { }
# 42 "../include/asm-generic/pgtable-nopud.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pud_t *pud_offset(p4d_t *p4d, unsigned long address)
{
 return (pud_t *)p4d;
}
# 8 "../include/asm-generic/pgtable-nopmd.h" 2

struct mm_struct;
# 18 "../include/asm-generic/pgtable-nopmd.h"
typedef struct { pud_t pud; } pmd_t;
# 30 "../include/asm-generic/pgtable-nopmd.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_none(pud_t pud) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_bad(pud_t pud) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_present(pud_t pud) { return 1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_user(pud_t pud) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_leaf(pud_t pud) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pud_clear(pud_t *pud) { }
# 46 "../include/asm-generic/pgtable-nopmd.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t * pmd_offset(pud_t * pud, unsigned long address)
{
 return (pmd_t *)pud;
}
# 63 "../include/asm-generic/pgtable-nopmd.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
}
# 16 "../arch/hexagon/include/asm/pgtable.h" 2


extern unsigned long empty_zero_page;
# 27 "../arch/hexagon/include/asm/pgtable.h"
# 1 "../arch/hexagon/include/asm/vm_mmu.h" 1
# 28 "../arch/hexagon/include/asm/pgtable.h" 2
# 107 "../arch/hexagon/include/asm/pgtable.h"
extern unsigned long _dflt_cache_att;
# 132 "../arch/hexagon/include/asm/pgtable.h"
extern pgd_t swapper_pg_dir[1024];
# 143 "../arch/hexagon/include/asm/pgtable.h"
extern void sync_icache_dcache(pte_t pte);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_pte(pte_t *ptep, pte_t pteval)
{

 if (((((pteval).pte) & ((1<<11) | (1<<5))) == ((1<<11) | (1<<5))))
  sync_icache_dcache(pteval);

 *ptep = pteval;
}
# 168 "../arch/hexagon/include/asm/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pmd_clear(pmd_t *pmd_entry_ptr)
{
  ((((((((*pmd_entry_ptr).pud).p4d).pgd).pgd)))) = 0x7;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pte_clear(struct mm_struct *mm, unsigned long addr,
    pte_t *ptep)
{
 ((*ptep).pte) = 0x0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_none(pmd_t pmd)
{
 return ((((((((pmd).pud).p4d).pgd).pgd)))) == 0x7;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_present(pmd_t pmd)
{
 return ((((((((pmd).pud).p4d).pgd).pgd)))) != (unsigned long)0x7;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_bad(pmd_t pmd)
{
 return 0;
}
# 228 "../arch/hexagon/include/asm/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_none(pte_t pte)
{
 return ((pte).pte) == 0x0;
};




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_present(pte_t pte)
{
 return ((pte).pte) & (1<<0);
}
# 248 "../arch/hexagon/include/asm/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkold(pte_t pte)
{
 ((pte).pte) &= ~(1<<2);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkyoung(pte_t pte)
{
 ((pte).pte) |= (1<<2);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkclean(pte_t pte)
{
 ((pte).pte) &= ~(1<<1);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkdirty(pte_t pte)
{
 ((pte).pte) |= (1<<1);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_young(pte_t pte)
{
 return ((pte).pte) & (1<<2);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_dirty(pte_t pte)
{
 return ((pte).pte) & (1<<1);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_modify(pte_t pte, pgprot_t prot)
{
 ((pte).pte) &= (~((1 << 14) - 1));
 ((pte).pte) |= ((prot).pgprot);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_wrprotect(pte_t pte)
{
 ((pte).pte) &= ~(1<<10);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkwrite_novma(pte_t pte)
{
 ((pte).pte) |= (1<<10);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkexec(pte_t pte)
{
 ((pte).pte) |= (1<<11);
 return pte;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_read(pte_t pte)
{
 return ((pte).pte) & (1<<9);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_write(pte_t pte)
{
 return ((pte).pte) & (1<<10);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_exec(pte_t pte)
{
 return ((pte).pte) & (1<<11);
}
# 349 "../arch/hexagon/include/asm/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pmd_page_vaddr(pmd_t pmd)
{
 return (unsigned long)((void *)((unsigned long)(((((((((pmd).pud).p4d).pgd).pgd)))) & (~((1 << 14) - 1))) - __phys_offset + (0xc0000000UL)));
}
# 393 "../arch/hexagon/include/asm/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_swp_exclusive(pte_t pte)
{
 return ((pte).pte) & (1<<6);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_swp_mkexclusive(pte_t pte)
{
 ((pte).pte) |= (1<<6);
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_swp_clear_exclusive(pte_t pte)
{
 ((pte).pte) &= ~(1<<6);
 return pte;
}
# 7 "../include/linux/pgtable.h" 2
# 17 "../include/linux/pgtable.h"
# 1 "../include/asm-generic/pgtable_uffd.h" 1




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int pte_uffd_wp(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int pmd_uffd_wp(pmd_t pmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pte_t pte_mkuffd_wp(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pmd_t pmd_mkuffd_wp(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pte_t pte_clear_uffd_wp(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pmd_t pmd_clear_uffd_wp(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pte_t pte_swp_mkuffd_wp(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int pte_swp_uffd_wp(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) pte_t pte_swp_clear_uffd_wp(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_swp_mkuffd_wp(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_swp_uffd_wp(pmd_t pmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_swp_clear_uffd_wp(pmd_t pmd)
{
 return pmd;
}
# 18 "../include/linux/pgtable.h" 2
# 1 "../include/linux/page_table_check.h" 1
# 107 "../include/linux/page_table_check.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_alloc(struct page *page, unsigned int order)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_free(struct page *page, unsigned int order)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pte_clear(struct mm_struct *mm, pte_t pte)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pud_clear(struct mm_struct *mm, pud_t pud)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_ptes_set(struct mm_struct *mm,
  pte_t *ptep, pte_t pte, unsigned int nr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
         pmd_t pmd)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
         pud_t pud)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_table_check_pte_clear_range(struct mm_struct *mm,
          unsigned long addr,
          pmd_t pmd)
{
}
# 19 "../include/linux/pgtable.h" 2
# 67 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pte_index(unsigned long address)
{
 return (address >> 14) & (256 - 1);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pmd_index(unsigned long address)
{
 return (address >> 22) & (1 - 1);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pud_index(unsigned long address)
{
 return (address >> 22) & (1 - 1);
}
# 94 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
{
 return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
}
# 109 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *__pte_map(pmd_t *pmd, unsigned long address)
{
 return pte_offset_kernel(pmd, address);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pte_unmap(pte_t *pte)
{
 rcu_read_unlock();
}


void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
# 138 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pgd_t *pgd_offset_pgd(pgd_t *pgd, unsigned long address)
{
 return (pgd + (((address) >> 22) & (1024 - 1)));
};
# 163 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
{
 return pmd_offset(pud_offset(p4d_offset(pgd_offset_pgd((mm)->pgd, (va)), va), va), va);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t *pmd_off_k(unsigned long va)
{
 return pmd_offset(pud_offset(p4d_offset(pgd_offset_pgd((&init_mm)->pgd, ((va))), va), va), va);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *virt_to_kpte(unsigned long vaddr)
{
 pmd_t *pmd = pmd_off_k(vaddr);

 return pmd_none(*pmd) ? ((void *)0) : pte_offset_kernel(pmd, vaddr);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_young(pmd_t pmd)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_dirty(pmd_t pmd)
{
 return 0;
}
# 230 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
 return 1;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
{
 return ((pte_t) { (((pte).pte) + (nr << 14)) });
}
# 264 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_ptes(struct mm_struct *mm, unsigned long addr,
  pte_t *ptep, pte_t pte, unsigned int nr)
{
 page_table_check_ptes_set(mm, ptep, pte, nr);

 do {} while (0);
 for (;;) {
  set_pte(ptep, pte);
  if (--nr == 0)
   break;
  ptep++;
  pte = pte_advance_pfn(pte, 1);
 }
 do {} while (0);
}




extern int ptep_set_access_flags(struct vm_area_struct *vma,
     unsigned long address, pte_t *ptep,
     pte_t entry, int dirty);
# 297 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmdp_set_access_flags(struct vm_area_struct *vma,
     unsigned long address, pmd_t *pmdp,
     pmd_t entry, int dirty)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_232(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_232(); } while (0);
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pudp_set_access_flags(struct vm_area_struct *vma,
     unsigned long address, pud_t *pudp,
     pud_t entry, int dirty)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_233(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_233(); } while (0);
 return 0;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t ptep_get(pte_t *ptep)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_234(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*ptep) == sizeof(char) || sizeof(*ptep) == sizeof(short) || sizeof(*ptep) == sizeof(int) || sizeof(*ptep) == sizeof(long)) || sizeof(*ptep) == sizeof(long long))) __compiletime_assert_234(); } while (0); (*(const volatile typeof( _Generic((*ptep), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*ptep))) *)&(*ptep)); });
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmdp_get(pmd_t *pmdp)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_235(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pmdp) == sizeof(char) || sizeof(*pmdp) == sizeof(short) || sizeof(*pmdp) == sizeof(int) || sizeof(*pmdp) == sizeof(long)) || sizeof(*pmdp) == sizeof(long long))) __compiletime_assert_235(); } while (0); (*(const volatile typeof( _Generic((*pmdp), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*pmdp))) *)&(*pmdp)); });
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pud_t pudp_get(pud_t *pudp)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_236(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pudp) == sizeof(char) || sizeof(*pudp) == sizeof(short) || sizeof(*pudp) == sizeof(int) || sizeof(*pudp) == sizeof(long)) || sizeof(*pudp) == sizeof(long long))) __compiletime_assert_236(); } while (0); (*(const volatile typeof( _Generic((*pudp), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*pudp))) *)&(*pudp)); });
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) p4d_t p4dp_get(p4d_t *p4dp)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_237(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*p4dp) == sizeof(char) || sizeof(*p4dp) == sizeof(short) || sizeof(*p4dp) == sizeof(int) || sizeof(*p4dp) == sizeof(long)) || sizeof(*p4dp) == sizeof(long long))) __compiletime_assert_237(); } while (0); (*(const volatile typeof( _Generic((*p4dp), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*p4dp))) *)&(*p4dp)); });
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pgd_t pgdp_get(pgd_t *pgdp)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_238(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*pgdp) == sizeof(char) || sizeof(*pgdp) == sizeof(short) || sizeof(*pgdp) == sizeof(int) || sizeof(*pgdp) == sizeof(long)) || sizeof(*pgdp) == sizeof(long long))) __compiletime_assert_238(); } while (0); (*(const volatile typeof( _Generic((*pgdp), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*pgdp))) *)&(*pgdp)); });
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ptep_test_and_clear_young(struct vm_area_struct *vma,
         unsigned long address,
         pte_t *ptep)
{
 pte_t pte = ptep_get(ptep);
 int r = 1;
 if (!pte_young(pte))
  r = 0;
 else
  set_ptes(vma->vm_mm, address, ptep, pte_mkold(pte), 1);
 return r;
}
# 379 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmdp_test_and_clear_young(struct vm_area_struct *vma,
         unsigned long address,
         pmd_t *pmdp)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_239(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_239(); } while (0);
 return 0;
}




int ptep_clear_flush_young(struct vm_area_struct *vma,
      unsigned long address, pte_t *ptep);
# 403 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmdp_clear_flush_young(struct vm_area_struct *vma,
      unsigned long address, pmd_t *pmdp)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_240(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_240(); } while (0);
 return 0;
}
# 417 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_has_hw_nonleaf_pmd_young(void)
{
 return 0;
}
# 430 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_has_hw_pte_young(void)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_check_zapped_pte(struct vm_area_struct *vma,
      pte_t pte)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_check_zapped_pmd(struct vm_area_struct *vma,
      pmd_t pmd)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t ptep_get_and_clear(struct mm_struct *mm,
           unsigned long address,
           pte_t *ptep)
{
 pte_t pte = ptep_get(ptep);
 pte_clear(mm, address, ptep);
 page_table_check_pte_clear(mm, pte);
 return pte;
}
# 481 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_young_dirty_ptes(struct vm_area_struct *vma,
       unsigned long addr, pte_t *ptep,
       unsigned int nr, cydp_t flags)
{
 pte_t pte;

 for (;;) {
  if (flags == (( cydp_t)((((1UL))) << (0))))
   ptep_test_and_clear_young(vma, addr, ptep);
  else {
   pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
   if (flags & (( cydp_t)((((1UL))) << (0))))
    pte = pte_mkold(pte);
   if (flags & (( cydp_t)((((1UL))) << (1))))
    pte = pte_mkclean(pte);
   set_ptes(vma->vm_mm, addr, ptep, pte, 1);
  }
  if (--nr == 0)
   break;
  ptep++;
  addr += (1UL << 14);
 }
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptep_clear(struct mm_struct *mm, unsigned long addr,
         pte_t *ptep)
{
 ptep_get_and_clear(mm, addr, ptep);
}
# 579 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t ptep_get_lockless(pte_t *ptep)
{
 return ptep_get(ptep);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmdp_get_lockless(pmd_t *pmdp)
{
 return pmdp_get(pmdp);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pmdp_get_lockless_sync(void)
{
}
# 645 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t ptep_get_and_clear_full(struct mm_struct *mm,
         unsigned long address, pte_t *ptep,
         int full)
{
 return ptep_get_and_clear(mm, address, ptep);
}
# 673 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t get_and_clear_full_ptes(struct mm_struct *mm,
  unsigned long addr, pte_t *ptep, unsigned int nr, int full)
{
 pte_t pte, tmp_pte;

 pte = ptep_get_and_clear_full(mm, addr, ptep, full);
 while (--nr) {
  ptep++;
  addr += (1UL << 14);
  tmp_pte = ptep_get_and_clear_full(mm, addr, ptep, full);
  if (pte_dirty(tmp_pte))
   pte = pte_mkdirty(pte);
  if (pte_young(tmp_pte))
   pte = pte_mkyoung(pte);
 }
 return pte;
}
# 711 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
  pte_t *ptep, unsigned int nr, int full)
{
 for (;;) {
  ptep_get_and_clear_full(mm, addr, ptep, full);
  if (--nr == 0)
   break;
  ptep++;
  addr += (1UL << 14);
 }
}
# 733 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_mmu_tlb_range(struct vm_area_struct *vma,
    unsigned long address, pte_t *ptep, unsigned int nr)
{
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_mmu_tlb(struct vm_area_struct *vma,
    unsigned long address, pte_t *ptep)
{
 update_mmu_tlb_range(vma, address, ptep, 1);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pte_clear_not_present_full(struct mm_struct *mm,
           unsigned long address,
           pte_t *ptep,
           int full)
{
 pte_clear(mm, address, ptep);
}
# 776 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_not_present_full_ptes(struct mm_struct *mm,
  unsigned long addr, pte_t *ptep, unsigned int nr, int full)
{
 for (;;) {
  pte_clear_not_present_full(mm, addr, ptep, full);
  if (--nr == 0)
   break;
  ptep++;
  addr += (1UL << 14);
 }
}



extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
         unsigned long address,
         pte_t *ptep);



extern pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
         unsigned long address,
         pmd_t *pmdp);
extern pud_t pudp_huge_clear_flush(struct vm_area_struct *vma,
         unsigned long address,
         pud_t *pudp);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
 return pte_mkwrite_novma(pte);
}
# 819 "../include/linux/pgtable.h"
struct mm_struct;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep)
{
 pte_t old_pte = ptep_get(ptep);
 set_ptes(mm, address, ptep, pte_wrprotect(old_pte), 1);
}
# 845 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
  pte_t *ptep, unsigned int nr)
{
 for (;;) {
  ptep_set_wrprotect(mm, addr, ptep);
  if (--nr == 0)
   break;
  ptep++;
  addr += (1UL << 14);
 }
}
# 867 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_sw_mkyoung(pte_t pte)
{
 return pte;
}
# 883 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pmdp_set_wrprotect(struct mm_struct *mm,
          unsigned long address, pmd_t *pmdp)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_241(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_241(); } while (0);
}
# 915 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
     unsigned long address,
     pmd_t *pmdp)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_242(void) __attribute__((__error__("BUILD_BUG failed"))); if (!(!(1))) __compiletime_assert_242(); } while (0);
 return *pmdp;
}





extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
           pgtable_t pgtable);



extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
# 955 "../include/linux/pgtable.h"
extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
       pmd_t *pmdp);
# 975 "../include/linux/pgtable.h"
extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
    unsigned long address, pmd_t *pmdp);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_same(pte_t pte_a, pte_t pte_b)
{
 return ((pte_a).pte) == ((pte_b).pte);
}
# 993 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_unused(pte_t pte)
{
 return 0;
}
# 1025 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
 return ((((((((pmd_a).pud).p4d).pgd).pgd)))) == ((((((((pmd_b).pud).p4d).pgd).pgd))));
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_same(pud_t pud_a, pud_t pud_b)
{
 return ((((((pud_a).p4d).pgd).pgd))) == ((((((pud_b).p4d).pgd).pgd)));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_same(p4d_t p4d_a, p4d_t p4d_b)
{
 return ((((p4d_a).pgd).pgd)) == ((((p4d_b).pgd).pgd));
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
{
 return ((pgd_a).pgd) == ((pgd_b).pgd);
}
# 1092 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_do_swap_page_nr(struct mm_struct *mm,
         struct vm_area_struct *vma,
         unsigned long addr,
         pte_t pte, pte_t oldpte,
         int nr)
{

}
# 1132 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_unmap_one(struct mm_struct *mm,
      struct vm_area_struct *vma,
      unsigned long addr,
      pte_t orig_pte)
{
 return 0;
}
# 1147 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_prepare_to_swap(struct folio *folio)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_swap_invalidate_page(int type, unsigned long offset)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_swap_invalidate_area(int type)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{
}
# 1222 "../include/linux/pgtable.h"
void pgd_clear_bad(pgd_t *);
# 1236 "../include/linux/pgtable.h"
void pmd_clear_bad(pmd_t *);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_none_or_clear_bad(pgd_t *pgd)
{
 if (pgd_none(*pgd))
  return 1;
 if (__builtin_expect(!!(pgd_bad(*pgd)), 0)) {
  pgd_clear_bad(pgd);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_none_or_clear_bad(p4d_t *p4d)
{
 if (p4d_none(*p4d))
  return 1;
 if (__builtin_expect(!!(p4d_bad(*p4d)), 0)) {
  do { } while (0);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_none_or_clear_bad(pud_t *pud)
{
 if (pud_none(*pud))
  return 1;
 if (__builtin_expect(!!(pud_bad(*pud)), 0)) {
  do { } while (0);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_none_or_clear_bad(pmd_t *pmd)
{
 if (pmd_none(*pmd))
  return 1;
 if (__builtin_expect(!!(pmd_bad(*pmd)), 0)) {
  pmd_clear_bad(pmd);
  return 1;
 }
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t __ptep_modify_prot_start(struct vm_area_struct *vma,
          unsigned long addr,
          pte_t *ptep)
{





 return ptep_get_and_clear(vma->vm_mm, addr, ptep);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __ptep_modify_prot_commit(struct vm_area_struct *vma,
          unsigned long addr,
          pte_t *ptep, pte_t pte)
{




 set_ptes(vma->vm_mm, addr, ptep, pte, 1);
}
# 1320 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
        unsigned long addr,
        pte_t *ptep)
{
 return __ptep_modify_prot_start(vma, addr, ptep);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptep_modify_prot_commit(struct vm_area_struct *vma,
        unsigned long addr,
        pte_t *ptep, pte_t old_pte, pte_t pte)
{
 __ptep_modify_prot_commit(vma, addr, ptep, pte);
}
# 1372 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
{
 if (((oldprot).pgprot) == (((oldprot)).pgprot))
  newprot = (newprot);
 if (((oldprot).pgprot) == (((oldprot)).pgprot))
  newprot = (newprot);
 if (((oldprot).pgprot) == (((oldprot)).pgprot))
  newprot = (newprot);
 return newprot;
}
# 1426 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_soft_dirty(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_soft_dirty(pmd_t pmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mksoft_dirty(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_mksoft_dirty(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_clear_soft_dirty(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_clear_soft_dirty(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_swp_mksoft_dirty(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_swp_soft_dirty(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
 return pte;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
 return pmd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_swp_soft_dirty(pmd_t pmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
{
 return pmd;
}
# 1498 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
      unsigned long pfn, unsigned long addr,
      unsigned long size)
{
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
        pfn_t pfn)
{
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int track_pfn_copy(struct vm_area_struct *vma)
{
 return 0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void untrack_pfn(struct vm_area_struct *vma,
          unsigned long pfn, unsigned long size,
          bool mm_wr_locked)
{
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void untrack_pfn_clear(struct vm_area_struct *vma)
{
}
# 1565 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_zero_pfn(unsigned long pfn)
{
 extern unsigned long zero_pfn;
 return pfn == zero_pfn;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long my_zero_pfn(unsigned long addr)
{
 extern unsigned long zero_pfn;
 return zero_pfn;
}
# 1592 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_trans_huge(pmd_t pmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_write(pmd_t pmd)
{
 do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/pgtable.h", 1599, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 return 0;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_write(pud_t pud)
{
 do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/pgtable.h", 1608, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_devmap(pmd_t pmd)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_devmap(pud_t pud)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pgd_devmap(pgd_t pgd)
{
 return 0;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_trans_huge(pud_t pud)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_trans_unstable(pud_t *pud)
{
# 1649 "../include/linux/pgtable.h"
 return 0;
}
# 1665 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_protnone(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_protnone(pmd_t pmd)
{
 return 0;
}
# 1699 "../include/linux/pgtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void p4d_clear_huge(p4d_t *p4d) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_clear_huge(pud_t *pud)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_clear_huge(pmd_t *pmd)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
 return 0;
}
# 1753 "../include/linux/pgtable.h"
struct file;
int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
   unsigned long size, pgprot_t *vma_prot);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_espfix_bsp(void) { }


extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pgtable_cache_init(void);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_has_pfn_modify_check(void)
{
 return false;
}
# 1814 "../include/linux/pgtable.h"
typedef unsigned int pgtbl_mod_mask;
# 31 "../include/linux/mm.h" 2

# 1 "../include/linux/memremap.h" 1
# 10 "../include/linux/memremap.h"
struct resource;
struct device;
# 21 "../include/linux/memremap.h"
struct vmem_altmap {
 unsigned long base_pfn;
 const unsigned long end_pfn;
 const unsigned long reserve;
 unsigned long free;
 unsigned long align;
 unsigned long alloc;
 bool inaccessible;
};
# 69 "../include/linux/memremap.h"
enum memory_type {

 MEMORY_DEVICE_PRIVATE = 1,
 MEMORY_DEVICE_COHERENT,
 MEMORY_DEVICE_FS_DAX,
 MEMORY_DEVICE_GENERIC,
 MEMORY_DEVICE_PCI_P2PDMA,
};

struct dev_pagemap_ops {





 void (*page_free)(struct page *page);





 vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
# 101 "../include/linux/memremap.h"
 int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
         unsigned long nr_pages, int mf_flags);
};
# 127 "../include/linux/memremap.h"
struct dev_pagemap {
 struct vmem_altmap altmap;
 struct percpu_ref ref;
 struct completion done;
 enum memory_type type;
 unsigned int flags;
 unsigned long vmemmap_shift;
 const struct dev_pagemap_ops *ops;
 void *owner;
 int nr_range;
 union {
  struct range range;
  struct { struct { } __empty_ranges; struct range ranges[]; };
 };
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pgmap_has_memory_failure(struct dev_pagemap *pgmap)
{
 return pgmap->ops && pgmap->ops->memory_failure;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
{
 if (pgmap->flags & (1 << 0))
  return &pgmap->altmap;
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap)
{
 return 1 << pgmap->vmemmap_shift;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_device_private_page(const struct page *page)
{
 return 0 &&
  is_zone_device_page(page) &&
  page->pgmap->type == MEMORY_DEVICE_PRIVATE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_device_private(const struct folio *folio)
{
 return is_device_private_page(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_pci_p2pdma_page(const struct page *page)
{
 return 0 &&
  is_zone_device_page(page) &&
  page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_device_coherent_page(const struct page *page)
{
 return is_zone_device_page(page) &&
  page->pgmap->type == MEMORY_DEVICE_COHERENT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_device_coherent(const struct folio *folio)
{
 return is_device_coherent_page(&folio->page);
}
# 202 "../include/linux/memremap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *devm_memremap_pages(struct device *dev,
  struct dev_pagemap *pgmap)
{





 ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/memremap.h", 210, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return ERR_PTR(-6);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void devm_memunmap_pages(struct device *dev,
  struct dev_pagemap *pgmap)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
  struct dev_pagemap *pgmap)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
{
 return false;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long memremap_compat_align(void)
{
 return (1UL << 14);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_dev_pagemap(struct dev_pagemap *pgmap)
{
 if (pgmap)
  percpu_ref_put(&pgmap->ref);
}
# 33 "../include/linux/mm.h" 2


struct mempolicy;
struct anon_vma;
struct anon_vma_chain;
struct user_struct;
struct pt_regs;
struct folio_batch;

extern int sysctl_page_lock_unfairness;

void mm_core_init(void);
void init_mm_internals(void);


extern unsigned long max_mapnr;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_max_mapnr(unsigned long limit)
{
 max_mapnr = limit;
}




extern atomic_long_t _totalram_pages;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long totalram_pages(void)
{
 return (unsigned long)atomic_long_read(&_totalram_pages);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void totalram_pages_inc(void)
{
 atomic_long_inc(&_totalram_pages);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void totalram_pages_dec(void)
{
 atomic_long_dec(&_totalram_pages);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void totalram_pages_add(long count)
{
 atomic_long_add(count, &_totalram_pages);
}

extern void * high_memory;
extern int page_cluster;
extern const int page_cluster_max;


extern int sysctl_legacy_va_layout;
# 198 "../include/linux/mm.h"
extern int sysctl_max_map_count;

extern unsigned long sysctl_user_reserve_kbytes;
extern unsigned long sysctl_admin_reserve_kbytes;

extern int sysctl_overcommit_memory;
extern int sysctl_overcommit_ratio;
extern unsigned long sysctl_overcommit_kbytes;

int overcommit_ratio_handler(const struct ctl_table *, int, void *, size_t *,
  loff_t *);
int overcommit_kbytes_handler(const struct ctl_table *, int, void *, size_t *,
  loff_t *);
int overcommit_policy_handler(const struct ctl_table *, int, void *, size_t *,
  loff_t *);
# 231 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *lru_to_folio(struct list_head *head)
{
 return ({ void *__mptr = (void *)((head)->prev); _Static_assert(__builtin_types_compatible_p(typeof(*((head)->prev)), typeof(((struct folio *)0)->lru)) || __builtin_types_compatible_p(typeof(*((head)->prev)), typeof(void)), "pointer type mismatch in container_of()"); ((struct folio *)(__mptr - __builtin_offsetof(struct folio, lru))); });
}

void setup_initial_init_mm(void *start_code, void *end_code,
      void *end_data, void *brk);
# 248 "../include/linux/mm.h"
struct vm_area_struct *vm_area_alloc(struct mm_struct *);
struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
void vm_area_free(struct vm_area_struct *);

void __vm_area_free(struct vm_area_struct *vma);
# 504 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fault_flag_allow_retry_first(enum fault_flag flags)
{
 return (flags & FAULT_FLAG_ALLOW_RETRY) &&
     (!(flags & FAULT_FLAG_TRIED));
}
# 533 "../include/linux/mm.h"
struct vm_fault {
 const struct {
  struct vm_area_struct *vma;
  gfp_t gfp_mask;
  unsigned long pgoff;
  unsigned long address;
  unsigned long real_address;
 };
 enum fault_flag flags;

 pmd_t *pmd;

 pud_t *pud;


 union {
  pte_t orig_pte;
  pmd_t orig_pmd;


 };

 struct page *cow_page;
 struct page *page;





 pte_t *pte;



 spinlock_t *ptl;



 pgtable_t prealloc_pte;






};






struct vm_operations_struct {
 void (*open)(struct vm_area_struct * area);




 void (*close)(struct vm_area_struct * area);

 int (*may_split)(struct vm_area_struct *area, unsigned long addr);
 int (*mremap)(struct vm_area_struct *area);





 int (*mprotect)(struct vm_area_struct *vma, unsigned long start,
   unsigned long end, unsigned long newflags);
 vm_fault_t (*fault)(struct vm_fault *vmf);
 vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
 vm_fault_t (*map_pages)(struct vm_fault *vmf,
   unsigned long start_pgoff, unsigned long end_pgoff);
 unsigned long (*pagesize)(struct vm_area_struct * area);



 vm_fault_t (*page_mkwrite)(struct vm_fault *vmf);


 vm_fault_t (*pfn_mkwrite)(struct vm_fault *vmf);





 int (*access)(struct vm_area_struct *vma, unsigned long addr,
        void *buf, int len, int write);




 const char *(*name)(struct vm_area_struct *vma);
# 654 "../include/linux/mm.h"
 struct page *(*find_special_page)(struct vm_area_struct *vma,
       unsigned long addr);
};
# 668 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_numab_state_init(struct vm_area_struct *vma) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_numab_state_free(struct vm_area_struct *vma) {}
# 796 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_start_read(struct vm_area_struct *vma)
  { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_end_read(struct vm_area_struct *vma) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_start_write(struct vm_area_struct *vma) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_assert_write_locked(struct vm_area_struct *vma)
  { mmap_assert_write_locked(vma->vm_mm); }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_mark_detached(struct vm_area_struct *vma,
         bool detached) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
  unsigned long address)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_assert_locked(struct vm_area_struct *vma)
{
 mmap_assert_locked(vma->vm_mm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void release_fault_lock(struct vm_fault *vmf)
{
 mmap_read_unlock(vmf->vma->vm_mm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void assert_fault_locked(struct vm_fault *vmf)
{
 mmap_assert_locked(vmf->vma->vm_mm);
}



extern const struct vm_operations_struct vma_dummy_vm_ops;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
{
 memset(vma, 0, sizeof(*vma));
 vma->vm_mm = mm;
 vma->vm_ops = &vma_dummy_vm_ops;
 INIT_LIST_HEAD(&vma->anon_vma_chain);
 vma_mark_detached(vma, false);
 vma_numab_state_init(vma);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_init(struct vm_area_struct *vma,
     vm_flags_t flags)
{
 ((vma)->__vm_flags) = flags;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_reset(struct vm_area_struct *vma,
      vm_flags_t flags)
{
 vma_assert_write_locked(vma);
 vm_flags_init(vma, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_reset_once(struct vm_area_struct *vma,
           vm_flags_t flags)
{
 vma_assert_write_locked(vma);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_243(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((vma)->__vm_flags)) == sizeof(char) || sizeof(((vma)->__vm_flags)) == sizeof(short) || sizeof(((vma)->__vm_flags)) == sizeof(int) || sizeof(((vma)->__vm_flags)) == sizeof(long)) || sizeof(((vma)->__vm_flags)) == sizeof(long long))) __compiletime_assert_243(); } while (0); do { *(volatile typeof(((vma)->__vm_flags)) *)&(((vma)->__vm_flags)) = (flags); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_set(struct vm_area_struct *vma,
    vm_flags_t flags)
{
 vma_start_write(vma);
 ((vma)->__vm_flags) |= flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_clear(struct vm_area_struct *vma,
      vm_flags_t flags)
{
 vma_start_write(vma);
 ((vma)->__vm_flags) &= ~flags;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __vm_flags_mod(struct vm_area_struct *vma,
      vm_flags_t set, vm_flags_t clear)
{
 vm_flags_init(vma, (vma->vm_flags | set) & ~clear);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vm_flags_mod(struct vm_area_struct *vma,
    vm_flags_t set, vm_flags_t clear)
{
 vma_start_write(vma);
 __vm_flags_mod(vma, set, clear);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_set_anonymous(struct vm_area_struct *vma)
{
 vma->vm_ops = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_anonymous(struct vm_area_struct *vma)
{
 return !vma->vm_ops;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_initial_heap(const struct vm_area_struct *vma)
{
 return vma->vm_start < vma->vm_mm->brk &&
  vma->vm_end > vma->vm_mm->start_brk;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_initial_stack(const struct vm_area_struct *vma)
{





 return vma->vm_start <= vma->vm_mm->start_stack &&
  vma->vm_end >= vma->vm_mm->start_stack;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_temporary_stack(struct vm_area_struct *vma)
{
 int maybe_stack = vma->vm_flags & (0x00000100 | 0x00000000);

 if (!maybe_stack)
  return false;

 if ((vma->vm_flags & (0x00010000 | 0x00008000 | 0)) ==
      (0x00010000 | 0x00008000 | 0))
  return true;

 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_foreign(struct vm_area_struct *vma)
{
 if (!(__current_thread_info->task)->mm)
  return true;

 if ((__current_thread_info->task)->mm != vma->vm_mm)
  return true;

 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_accessible(struct vm_area_struct *vma)
{
 return vma->vm_flags & (0x00000001 | 0x00000002 | 0x00000004);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_shared_maywrite(vm_flags_t vm_flags)
{
 return (vm_flags & (0x00000008 | 0x00000020)) ==
  (0x00000008 | 0x00000020);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_is_shared_maywrite(struct vm_area_struct *vma)
{
 return is_shared_maywrite(vma->vm_flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct vm_area_struct *vma_find(struct vma_iterator *vmi, unsigned long max)
{
 return mas_find(&vmi->mas, max - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *vma_next(struct vma_iterator *vmi)
{




 return mas_find(&vmi->mas, (~0UL));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
{
 return mas_next_range(&vmi->mas, (~0UL));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
{
 return mas_prev(&vmi->mas, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
{
 return mas_prev_range(&vmi->mas, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vma_iter_addr(struct vma_iterator *vmi)
{
 return vmi->mas.index;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vma_iter_end(struct vma_iterator *vmi)
{
 return vmi->mas.last + 1;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vma_iter_bulk_alloc(struct vma_iterator *vmi,
          unsigned long count)
{
 return mas_expected_entries(&vmi->mas, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vma_iter_clear_gfp(struct vma_iterator *vmi,
   unsigned long start, unsigned long end, gfp_t gfp)
{
 __mas_set_range(&vmi->mas, start, end - 1);
 mas_store_gfp(&vmi->mas, ((void *)0), gfp);
 if (__builtin_expect(!!(mas_is_err(&vmi->mas)), 0))
  return -12;

 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_iter_free(struct vma_iterator *vmi)
{
 mas_destroy(&vmi->mas);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vma_iter_bulk_store(struct vma_iterator *vmi,
          struct vm_area_struct *vma)
{
 vmi->mas.index = vma->vm_start;
 vmi->mas.last = vma->vm_end - 1;
 mas_store(&vmi->mas, vma);
 if (__builtin_expect(!!(mas_is_err(&vmi->mas)), 0))
  return -12;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_iter_invalidate(struct vma_iterator *vmi)
{
 mas_pause(&vmi->mas);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
{
 mas_set(&vmi->mas, addr);
}
# 1080 "../include/linux/mm.h"
bool vma_is_shmem(struct vm_area_struct *vma);
bool vma_is_anon_shmem(struct vm_area_struct *vma);





int vma_is_stack_for_current(struct vm_area_struct *vma);




struct mmu_gather;
struct inode;
# 1102 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int compound_order(struct page *page)
{
 struct folio *folio = (struct folio *)page;

 if (!((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&folio->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&folio->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&folio->flags))) ? const_test_bit(PG_head, &folio->flags) : arch_test_bit(PG_head, &folio->flags)))
  return 0;
 return folio->_flags_1 & 0xff;
}
# 1120 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int folio_order(const struct folio *folio)
{
 if (!folio_test_large(folio))
  return 0;
 return folio->_flags_1 & 0xff;
}

# 1 "../include/linux/huge_mm.h" 1
# 11 "../include/linux/huge_mm.h"
vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
    pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
    struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
void huge_pmd_set_accessed(struct vm_fault *vmf);
int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
    pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
    struct vm_area_struct *vma);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
{
}


vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
      pmd_t *pmd, unsigned long addr, unsigned long next);
int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
   unsigned long addr);
int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
   unsigned long addr);
bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
     unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
      pmd_t *pmd, unsigned long addr, pgprot_t newprot,
      unsigned long cp_flags);

vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);

enum transparent_hugepage_flag {
 TRANSPARENT_HUGEPAGE_UNSUPPORTED,
 TRANSPARENT_HUGEPAGE_FLAG,
 TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
 TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
 TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
 TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
 TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
 TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG,
 TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG,
};

struct kobject;
struct kobj_attribute;

ssize_t single_hugepage_flag_store(struct kobject *kobj,
       struct kobj_attribute *attr,
       const char *buf, size_t count,
       enum transparent_hugepage_flag flag);
ssize_t single_hugepage_flag_show(struct kobject *kobj,
      struct kobj_attribute *attr, char *buf,
      enum transparent_hugepage_flag flag);
extern struct kobj_attribute shmem_enabled_attr;
extern struct kobj_attribute thpsize_shmem_enabled_attr;
# 435 "../include/linux/huge_mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_test_pmd_mappable(struct folio *folio)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool thp_vma_suitable_order(struct vm_area_struct *vma,
  unsigned long addr, int order)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
  unsigned long addr, unsigned long orders)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
     unsigned long vm_flags,
     unsigned long tva_flags,
     unsigned long orders)
{
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
         unsigned long len, unsigned long pgoff,
         unsigned long flags, vm_flags_t vm_flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
can_split_folio(struct folio *folio, int *pextra_pins)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
  unsigned int new_order)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int split_huge_page(struct page *page)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void deferred_split_folio(struct folio *folio) {}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
  unsigned long address, bool freeze, struct folio *folio) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void split_huge_pmd_address(struct vm_area_struct *vma,
  unsigned long address, bool freeze, struct folio *folio) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void split_huge_pmd_locked(struct vm_area_struct *vma,
      unsigned long address, pmd_t *pmd,
      bool freeze, struct folio *folio) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
      unsigned long addr, pmd_t *pmdp,
      struct folio *folio)
{
 return false;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int hugepage_madvise(struct vm_area_struct *vma,
       unsigned long *vm_flags, int advice)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int madvise_collapse(struct vm_area_struct *vma,
       struct vm_area_struct **prev,
       unsigned long start, unsigned long end)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_adjust_trans_huge(struct vm_area_struct *vma,
      unsigned long start,
      unsigned long end,
      long adjust_next)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_swap_pmd(pmd_t pmd)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
  struct vm_area_struct *vma)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pud_trans_huge_lock(pud_t *pud,
  struct vm_area_struct *vma)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_huge_zero_folio(const struct folio *folio)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_huge_zero_pmd(pmd_t pmd)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_huge_zero_pud(pud_t pud)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_put_huge_zero_folio(struct mm_struct *mm)
{
 return;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *follow_devmap_pmd(struct vm_area_struct *vma,
 unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool thp_migration_supported(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int highest_order(unsigned long orders)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int next_order(unsigned long *orders, int prev)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int split_folio_to_list_to_order(struct folio *folio,
  struct list_head *list, int new_order)
{
 return split_huge_page_to_list_to_order(&folio->page, list, new_order);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int split_folio_to_order(struct folio *folio, int new_order)
{
 return split_folio_to_list_to_order(folio, ((void *)0), new_order);
}
# 1128 "../include/linux/mm.h" 2
# 1145 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int put_page_testzero(struct page *page)
{
 do { if (__builtin_expect(!!(page_ref_count(page) == 0), 0)) { dump_page(page, "VM_BUG_ON_PAGE(" "page_ref_count(page) == 0"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/mm.h", 1147, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0);
 return page_ref_dec_and_test(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_put_testzero(struct folio *folio)
{
 return put_page_testzero(&folio->page);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool get_page_unless_zero(struct page *page)
{
 return page_ref_add_unless(page, 1, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *folio_get_nontail_page(struct page *page)
{
 if (__builtin_expect(!!(!get_page_unless_zero(page)), 0))
  return ((void *)0);
 return (struct folio *)page;
}

extern int page_is_ram(unsigned long pfn);

enum {
 REGION_INTERSECTS,
 REGION_DISJOINT,
 REGION_MIXED,
};

int region_intersects(resource_size_t offset, size_t size, unsigned long flags,
        unsigned long desc);


struct page *vmalloc_to_page(const void *addr);
unsigned long vmalloc_to_pfn(const void *addr);
# 1196 "../include/linux/mm.h"
extern bool is_vmalloc_addr(const void *x);
extern int is_vmalloc_or_module_addr(const void *x);
# 1214 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_entire_mapcount(const struct folio *folio)
{
 do { if (__builtin_expect(!!(!folio_test_large(folio)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "!folio_test_large(folio)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/mm.h", 1216, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0);
 return atomic_read(&folio->_entire_mapcount) + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_large_mapcount(const struct folio *folio)
{
 ({ int __ret_warn = !!(!folio_test_large(folio)); if (__builtin_expect(!!(__ret_warn), 0)) { dump_page(&folio->page, "VM_WARN_ON_FOLIO(" "!folio_test_large(folio)"")"); ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/mm.h", 1222, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); } __builtin_expect(!!(__ret_warn), 0); });
 return atomic_read(&folio->_large_mapcount) + 1;
}
# 1246 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_mapcount(const struct folio *folio)
{
 int mapcount;

 if (__builtin_expect(!!(!folio_test_large(folio)), 1)) {
  mapcount = atomic_read(&folio->_mapcount) + 1;

  if (mapcount < PAGE_MAPCOUNT_RESERVE + 1)
   mapcount = 0;
  return mapcount;
 }
 return folio_large_mapcount(folio);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_mapped(const struct folio *folio)
{
 return folio_mapcount(folio) >= 1;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_mapped(const struct page *page)
{
 return folio_mapped((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *virt_to_head_page(const void *x)
{
 struct page *page = (mem_map + ((((((unsigned long)(x) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14)));

 return ((typeof(page))_compound_head(page));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *virt_to_folio(const void *x)
{
 struct page *page = (mem_map + ((((((unsigned long)(x) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14)));

 return (_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page)));
}

void __folio_put(struct folio *folio);

void put_pages_list(struct list_head *pages);

void split_page(struct page *page, unsigned int order);
void folio_copy(struct folio *dst, struct folio *src);
int folio_mc_copy(struct folio *dst, struct folio *src);

unsigned long nr_free_buffer_pages(void);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long page_size(struct page *page)
{
 return (1UL << 14) << compound_order(page);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int page_shift(struct page *page)
{
 return 14 + compound_order(page);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int thp_order(struct page *page)
{
 ((void)(sizeof(( long)(PageTail(page)))));
 return compound_order(page);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long thp_size(struct page *page)
{
 return (1UL << 14) << thp_order(page);
}
# 1345 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
 if (__builtin_expect(!!(vma->vm_flags & 0x00000002), 1))
  pte = pte_mkwrite(pte, vma);
 return pte;
}

vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
void set_pte_range(struct vm_fault *vmf, struct folio *folio,
  struct page *page, unsigned int nr, unsigned long addr);

vm_fault_t finish_fault(struct vm_fault *vmf);
# 1432 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool put_devmap_managed_folio_refs(struct folio *folio, int refs)
{
 return false;
}
# 1450 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_get(struct folio *folio)
{
 do { if (__builtin_expect(!!(((unsigned int) folio_ref_count(folio) + 127u <= 127u)), 0)) { dump_page(&folio->page, "VM_BUG_ON_FOLIO(" "((unsigned int) folio_ref_count(folio) + 127u <= 127u)"")"); do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/mm.h", 1452, __func__); }); do { } while (0); panic("BUG!"); } while (0); } } while (0);
 folio_ref_inc(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void get_page(struct page *page)
{
 folio_get((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool try_get_page(struct page *page)
{
 page = ((typeof(page))_compound_head(page));
 if (({ bool __ret_do_once = !!(page_ref_count(page) <= 0); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/mm.h", 1464, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return false;
 page_ref_inc(page);
 return true;
}
# 1483 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_put(struct folio *folio)
{
 if (folio_put_testzero(folio))
  __folio_put(folio);
}
# 1503 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_put_refs(struct folio *folio, int refs)
{
 if (folio_ref_sub_and_test(folio, refs))
  __folio_put(folio);
}

void folios_put_refs(struct folio_batch *folios, unsigned int *refs);
# 1522 "../include/linux/mm.h"
typedef union {
 struct page **pages;
 struct folio **folios;
 struct encoded_page **encoded_pages;
} release_pages_arg __attribute__ ((__transparent_union__));

void release_pages(release_pages_arg, int nr);
# 1543 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folios_put(struct folio_batch *folios)
{
 folios_put_refs(folios, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_page(struct page *page)
{
 struct folio *folio = (_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page)));





 if (put_devmap_managed_folio_refs(folio, 1))
  return;
 folio_put(folio);
}
# 1593 "../include/linux/mm.h"
void unpin_user_page(struct page *page);
void unpin_folio(struct folio *folio);
void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
     bool make_dirty);
void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
          bool make_dirty);
void unpin_user_pages(struct page **pages, unsigned long npages);
void unpin_folios(struct folio **folios, unsigned long nfolios);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_cow_mapping(vm_flags_t flags)
{
 return (flags & (0x00000008 | 0x00000020)) == 0x00000020;
}
# 1634 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_zone_id(struct page *page)
{
 return (page->flags >> ((((((sizeof(unsigned long)*8) - 0) - 0) < ((((sizeof(unsigned long)*8) - 0) - 0) - 1)) ? (((sizeof(unsigned long)*8) - 0) - 0) : ((((sizeof(unsigned long)*8) - 0) - 0) - 1)) * ((0 + 1) != 0))) & ((1UL << (0 + 1)) - 1);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int page_to_nid(const struct page *page)
{
 return (({ ((void)(sizeof(( long)(PagePoisoned(page))))); page; })->flags >> ((((sizeof(unsigned long)*8) - 0) - 0) * (0 != 0))) & ((1UL << 0) - 1);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_nid(const struct folio *folio)
{
 return page_to_nid(&folio->page);
}
# 1749 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
{
 return folio_nid(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_xchg_access_time(struct folio *folio, int time)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int folio_last_cpupid(struct folio *folio)
{
 return folio_nid(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpupid_to_nid(int cpupid)
{
 return -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpupid_to_pid(int cpupid)
{
 return -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpupid_to_cpu(int cpupid)
{
 return -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpu_pid_to_cpupid(int nid, int pid)
{
 return -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpupid_pid_unset(int cpupid)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_cpupid_reset_last(struct page *page)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpupid_match_pid(struct task_struct *task, int cpupid)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vma_set_access_pid_bit(struct vm_area_struct *vma)
{
}
# 1847 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 page_kasan_tag(const struct page *page)
{
 return 0xff;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_kasan_tag_set(struct page *page, u8 tag) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_kasan_tag_reset(struct page *page) { }



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct zone *page_zone(const struct page *page)
{
 return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pg_data_t *page_pgdat(const struct page *page)
{
 return NODE_DATA(page_to_nid(page));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct zone *folio_zone(const struct folio *folio)
{
 return page_zone(&folio->page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pg_data_t *folio_pgdat(const struct folio *folio)
{
 return page_pgdat(&folio->page);
}
# 1899 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long folio_pfn(struct folio *folio)
{
 return ((unsigned long)((&folio->page) - mem_map) + (__phys_offset >> 14));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *pfn_folio(unsigned long pfn)
{
 return (_Generic(((mem_map + ((pfn) - (__phys_offset >> 14)))), const struct page *: (const struct folio *)_compound_head((mem_map + ((pfn) - (__phys_offset >> 14)))), struct page *: (struct folio *)_compound_head((mem_map + ((pfn) - (__phys_offset >> 14))))));
}
# 1934 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_maybe_dma_pinned(struct folio *folio)
{
 if (folio_test_large(folio))
  return atomic_read(&folio->_pincount) > 0;
# 1947 "../include/linux/mm.h"
 return ((unsigned int)folio_ref_count(folio)) >=
  (1U << 10);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_needs_cow_for_dma(struct vm_area_struct *vma,
       struct folio *folio)
{
 do { if (__builtin_expect(!!(!(({ unsigned __seq = _Generic(*(&vma->vm_mm->write_protect_seq), seqcount_t: __seqprop_sequence, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_sequence, seqcount_spinlock_t: __seqprop_spinlock_sequence, seqcount_rwlock_t: __seqprop_rwlock_sequence, seqcount_mutex_t: __seqprop_mutex_sequence)(&vma->vm_mm->write_protect_seq); __asm__ __volatile__("": : :"memory"); kcsan_atomic_next(1000); __seq; }) & 1)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/mm.h", 1960, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 if (!((__builtin_constant_p(27) && __builtin_constant_p((uintptr_t)(&vma->vm_mm->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&vma->vm_mm->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&vma->vm_mm->flags))) ? const_test_bit(27, &vma->vm_mm->flags) : arch_test_bit(27, &vma->vm_mm->flags)))
  return false;

 return folio_maybe_dma_pinned(folio);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_zero_page(const struct page *page)
{
 return is_zero_pfn(((unsigned long)((page) - mem_map) + (__phys_offset >> 14)));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_zero_folio(const struct folio *folio)
{
 return is_zero_page(&folio->page);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_longterm_pinnable(struct folio *folio)
{

 int mt = get_pfnblock_flags_mask(&folio->page, folio_pfn(folio), ((1UL << 3) - 1));

 if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
  return false;


 if (is_zero_folio(folio))
  return true;


 if (folio_is_device_coherent(folio))
  return false;


 return !folio_is_zone_movable(folio);

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_zone(struct page *page, enum zone_type zone)
{
 page->flags &= ~(((1UL << 1) - 1) << (((((sizeof(unsigned long)*8) - 0) - 0) - 1) * (1 != 0)));
 page->flags |= (zone & ((1UL << 1) - 1)) << (((((sizeof(unsigned long)*8) - 0) - 0) - 1) * (1 != 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_node(struct page *page, unsigned long node)
{
 page->flags &= ~(((1UL << 0) - 1) << ((((sizeof(unsigned long)*8) - 0) - 0) * (0 != 0)));
 page->flags |= (node & ((1UL << 0) - 1)) << ((((sizeof(unsigned long)*8) - 0) - 0) * (0 != 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_links(struct page *page, enum zone_type zone,
 unsigned long node, unsigned long pfn)
{
 set_page_zone(page, zone);
 set_page_node(page, node);



}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long folio_nr_pages(const struct folio *folio)
{
 if (!folio_test_large(folio))
  return 1;



 return 1L << (folio->_flags_1 & 0xff);

}
# 2070 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long compound_nr(struct page *page)
{
 struct folio *folio = (struct folio *)page;

 if (!((__builtin_constant_p(PG_head) && __builtin_constant_p((uintptr_t)(&folio->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&folio->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&folio->flags))) ? const_test_bit(PG_head, &folio->flags) : arch_test_bit(PG_head, &folio->flags)))
  return 1;



 return 1L << (folio->_flags_1 & 0xff);

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int thp_nr_pages(struct page *page)
{
 return folio_nr_pages((struct folio *)page);
}
# 2106 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *folio_next(struct folio *folio)
{
 return (struct folio *)((&(folio)->page) + (folio_nr_pages(folio)));
}
# 2123 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int folio_shift(const struct folio *folio)
{
 return 14 + folio_order(folio);
}
# 2136 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t folio_size(const struct folio *folio)
{
 return (1UL << 14) << folio_order(folio);
}
# 2182 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_likely_mapped_shared(struct folio *folio)
{
 int mapcount = folio_mapcount(folio);


 if (!folio_test_large(folio) || __builtin_expect(!!(folio_test_hugetlb(folio)), 0))
  return mapcount > 1;


 if (mapcount <= 1)
  return false;


 if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
  return true;


 return atomic_read(&folio->_mapcount) > 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_make_page_accessible(struct page *page)
{
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_make_folio_accessible(struct folio *folio)
{
 int ret;
 long i, nr = folio_nr_pages(folio);

 for (i = 0; i < nr; i++) {
  ret = arch_make_page_accessible(((&(folio)->page) + (i)));
  if (ret)
   break;
 }

 return ret;
}





# 1 "../include/linux/vmstat.h" 1







# 1 "../include/linux/vm_event_item.h" 1
# 32 "../include/linux/vm_event_item.h"
enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
  PGALLOC_NORMAL, PGALLOC_MOVABLE,
  ALLOCSTALL_NORMAL, ALLOCSTALL_MOVABLE,
  PGSCAN_SKIP_NORMAL, PGSCAN_SKIP_MOVABLE,
  PGFREE, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE,
  PGFAULT, PGMAJFAULT,
  PGLAZYFREED,
  PGREFILL,
  PGREUSE,
  PGSTEAL_KSWAPD,
  PGSTEAL_DIRECT,
  PGSTEAL_KHUGEPAGED,
  PGSCAN_KSWAPD,
  PGSCAN_DIRECT,
  PGSCAN_KHUGEPAGED,
  PGSCAN_DIRECT_THROTTLE,
  PGSCAN_ANON,
  PGSCAN_FILE,
  PGSTEAL_ANON,
  PGSTEAL_FILE,



  PGINODESTEAL, SLABS_SCANNED, KSWAPD_INODESTEAL,
  KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY,
  PAGEOUTRUN, PGROTATED,
  DROP_PAGECACHE, DROP_SLAB,
  OOM_KILL,
# 68 "../include/linux/vm_event_item.h"
  PGMIGRATE_SUCCESS, PGMIGRATE_FAIL,
  THP_MIGRATION_SUCCESS,
  THP_MIGRATION_FAIL,
  THP_MIGRATION_SPLIT,


  COMPACTMIGRATE_SCANNED, COMPACTFREE_SCANNED,
  COMPACTISOLATED,
  COMPACTSTALL, COMPACTFAIL, COMPACTSUCCESS,
  KCOMPACTD_WAKE,
  KCOMPACTD_MIGRATE_SCANNED, KCOMPACTD_FREE_SCANNED,





  CMA_ALLOC_SUCCESS,
  CMA_ALLOC_FAIL,

  UNEVICTABLE_PGCULLED,
  UNEVICTABLE_PGSCANNED,
  UNEVICTABLE_PGRESCUED,
  UNEVICTABLE_PGMLOCKED,
  UNEVICTABLE_PGMUNLOCKED,
  UNEVICTABLE_PGCLEARED,
  UNEVICTABLE_PGSTRANDED,
# 157 "../include/linux/vm_event_item.h"
  NR_VM_EVENT_ITEMS
};
# 9 "../include/linux/vmstat.h" 2

# 1 "../include/linux/static_key.h" 1
# 11 "../include/linux/vmstat.h" 2


extern int sysctl_stat_interval;
# 24 "../include/linux/vmstat.h"
struct reclaim_stat {
 unsigned nr_dirty;
 unsigned nr_unqueued_dirty;
 unsigned nr_congested;
 unsigned nr_writeback;
 unsigned nr_immediate;
 unsigned nr_pageout;
 unsigned nr_activate[2];
 unsigned nr_ref_keep;
 unsigned nr_unmap_fail;
 unsigned nr_lazyfree_fail;
};

enum writeback_stat_item {
 NR_DIRTY_THRESHOLD,
 NR_DIRTY_BG_THRESHOLD,
 NR_VM_WRITEBACK_STAT_ITEMS,
};
# 54 "../include/linux/vmstat.h"
struct vm_event_state {
 unsigned long event[NR_VM_EVENT_ITEMS];
};

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_vm_event_states; extern __attribute__((section(".data" ""))) __typeof__(struct vm_event_state) vm_event_states;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __count_vm_event(enum vm_event_item item)
{
 do { do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(vm_event_states.event[item])) { case 1: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0);break; case 2: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0);break; case 4: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0);break; case 8: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void count_vm_event(enum vm_event_item item)
{
 do { do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(vm_event_states.event[item])) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __count_vm_events(enum vm_event_item item, long delta)
{
 do { do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(vm_event_states.event[item])) { case 1: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0);break; case 2: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0);break; case 4: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0);break; case 8: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void count_vm_events(enum vm_event_item item, long delta)
{
 do { do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(vm_event_states.event[item])) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(vm_event_states.event[item])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(vm_event_states.event[item]))) *)(&(vm_event_states.event[item])); }); }) += delta; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

extern void all_vm_events(unsigned long *);

extern void vm_events_fold_cpu(int cpu);
# 140 "../include/linux/vmstat.h"
extern atomic_long_t vm_zone_stat[NR_VM_ZONE_STAT_ITEMS];
extern atomic_long_t vm_node_stat[NR_VM_NODE_STAT_ITEMS];
extern atomic_long_t vm_numa_event[0];
# 165 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_page_state_add(long x, struct zone *zone,
     enum zone_stat_item item)
{
 atomic_long_add(x, &zone->vm_stat[item]);
 atomic_long_add(x, &vm_zone_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_page_state_add(long x, struct pglist_data *pgdat,
     enum node_stat_item item)
{
 atomic_long_add(x, &pgdat->vm_stat[item]);
 atomic_long_add(x, &vm_node_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long global_zone_page_state(enum zone_stat_item item)
{
 long x = atomic_long_read(&vm_zone_stat[item]);




 return x;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long global_node_page_state_pages(enum node_stat_item item)
{
 long x = atomic_long_read(&vm_node_stat[item]);




 return x;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long global_node_page_state(enum node_stat_item item)
{
 (void)({ bool __ret_do_once = !!(vmstat_item_in_bytes(item)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/vmstat.h", 202, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 return global_node_page_state_pages(item);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long zone_page_state(struct zone *zone,
     enum zone_stat_item item)
{
 long x = atomic_long_read(&zone->vm_stat[item]);




 return x;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long zone_page_state_snapshot(struct zone *zone,
     enum zone_stat_item item)
{
 long x = atomic_long_read(&zone->vm_stat[item]);
# 237 "../include/linux/vmstat.h"
 return x;
}
# 270 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fold_vm_numa_events(void)
{
}
# 319 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mod_zone_page_state(struct zone *zone,
   enum zone_stat_item item, long delta)
{
 zone_page_state_add(delta, zone, item);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mod_node_page_state(struct pglist_data *pgdat,
   enum node_stat_item item, int delta)
{
 if (vmstat_item_in_bytes(item)) {






  (void)({ bool __ret_do_once = !!(delta & ((1UL << 14) - 1)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/vmstat.h", 335, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  delta >>= 14;
 }

 node_page_state_add(delta, pgdat, item);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
{
 atomic_long_inc(&zone->vm_stat[item]);
 atomic_long_inc(&vm_zone_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
{
 atomic_long_inc(&pgdat->vm_stat[item]);
 atomic_long_inc(&vm_node_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
{
 atomic_long_dec(&zone->vm_stat[item]);
 atomic_long_dec(&vm_zone_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
{
 atomic_long_dec(&pgdat->vm_stat[item]);
 atomic_long_dec(&vm_node_stat[item]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inc_zone_page_state(struct page *page,
   enum zone_stat_item item)
{
 __inc_zone_state(page_zone(page), item);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inc_node_page_state(struct page *page,
   enum node_stat_item item)
{
 __inc_node_state(page_pgdat(page), item);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dec_zone_page_state(struct page *page,
   enum zone_stat_item item)
{
 __dec_zone_state(page_zone(page), item);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dec_node_page_state(struct page *page,
   enum node_stat_item item)
{
 __dec_node_state(page_pgdat(page), item);
}
# 410 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refresh_zone_stat_thresholds(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_vm_stats_fold(int cpu) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void quiet_vmstat(void) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void drain_zonestat(struct zone *zone,
   struct per_cpu_zonestat *pzstats) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __zone_stat_mod_folio(struct folio *folio,
  enum zone_stat_item item, long nr)
{
 __mod_zone_page_state(folio_zone(folio), item, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __zone_stat_add_folio(struct folio *folio,
  enum zone_stat_item item)
{
 __mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __zone_stat_sub_folio(struct folio *folio,
  enum zone_stat_item item)
{
 __mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_stat_mod_folio(struct folio *folio,
  enum zone_stat_item item, long nr)
{
 __mod_zone_page_state(folio_zone(folio), item, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_stat_add_folio(struct folio *folio,
  enum zone_stat_item item)
{
 __mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zone_stat_sub_folio(struct folio *folio,
  enum zone_stat_item item)
{
 __mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __node_stat_mod_folio(struct folio *folio,
  enum node_stat_item item, long nr)
{
 __mod_node_page_state(folio_pgdat(folio), item, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __node_stat_add_folio(struct folio *folio,
  enum node_stat_item item)
{
 __mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __node_stat_sub_folio(struct folio *folio,
  enum node_stat_item item)
{
 __mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_stat_mod_folio(struct folio *folio,
  enum node_stat_item item, long nr)
{
 __mod_node_page_state(folio_pgdat(folio), item, nr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_stat_add_folio(struct folio *folio,
  enum node_stat_item item)
{
 __mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_stat_sub_folio(struct folio *folio,
  enum node_stat_item item)
{
 __mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
}

extern const char * const vmstat_text[];

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *zone_stat_name(enum zone_stat_item item)
{
 return vmstat_text[item];
}
# 505 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *node_stat_name(enum node_stat_item item)
{
 return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
      0 +
      item];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *lru_list_name(enum lru_list lru)
{
 return node_stat_name(NR_LRU_BASE + lru) + 3;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *writeback_stat_name(enum writeback_stat_item item)
{
 return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
      0 +
      NR_VM_NODE_STAT_ITEMS +
      item];
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *vm_event_name(enum vm_event_item item)
{
 return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
      0 +
      NR_VM_NODE_STAT_ITEMS +
      NR_VM_WRITEBACK_STAT_ITEMS +
      item];
}
# 572 "../include/linux/vmstat.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mod_lruvec_state(struct lruvec *lruvec,
          enum node_stat_item idx, int val)
{
 __mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mod_lruvec_state(struct lruvec *lruvec,
        enum node_stat_item idx, int val)
{
 __mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __lruvec_stat_mod_folio(struct folio *folio,
      enum node_stat_item idx, int val)
{
 __mod_node_page_state(folio_pgdat(folio), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lruvec_stat_mod_folio(struct folio *folio,
      enum node_stat_item idx, int val)
{
 __mod_node_page_state(folio_pgdat(folio), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mod_lruvec_page_state(struct page *page,
      enum node_stat_item idx, int val)
{
 __mod_node_page_state(page_pgdat(page), idx, val);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __lruvec_stat_add_folio(struct folio *folio,
        enum node_stat_item idx)
{
 __lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __lruvec_stat_sub_folio(struct folio *folio,
        enum node_stat_item idx)
{
 __lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lruvec_stat_add_folio(struct folio *folio,
      enum node_stat_item idx)
{
 lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lruvec_stat_sub_folio(struct folio *folio,
      enum node_stat_item idx)
{
 lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
}

void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) mod_node_early_perpage_metadata(int nid, long delta);
void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) store_early_perpage_metadata(void);
# 2229 "../include/linux/mm.h" 2
# 2252 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void *lowmem_page_address(const struct page *page)
{
 return ((void *)((unsigned long)((((unsigned long)((page) - mem_map) + (__phys_offset >> 14)) << 14)) - __phys_offset + (0xc0000000UL)));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *folio_address(const struct folio *folio)
{
 return lowmem_page_address(&folio->page);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_is_pfmemalloc(const struct page *page)
{





 return (uintptr_t)page->lru.next & ((((1UL))) << (1));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_is_pfmemalloc(const struct folio *folio)
{





 return (uintptr_t)folio->lru.next & ((((1UL))) << (1));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_page_pfmemalloc(struct page *page)
{
 page->lru.next = (void *)((((1UL))) << (1));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_page_pfmemalloc(struct page *page)
{
 page->lru.next = ((void *)0);
}




extern void pagefault_out_of_memory(void);
# 2324 "../include/linux/mm.h"
struct zap_details {
 struct folio *single_folio;
 bool even_cows;
 zap_flags_t zap_flags;
};
# 2350 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_mm_cid_before_execve(struct task_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_mm_cid_after_execve(struct task_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_mm_cid_fork(struct task_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sched_mm_cid_exit_signals(struct task_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int task_mm_cid(struct task_struct *t)
{





 return 0;
}



extern bool can_do_mlock(void);



extern int user_shm_lock(size_t, struct ucounts *);
extern void user_shm_unlock(size_t, struct ucounts *);

struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
        pte_t pte);
struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
        pte_t pte);
struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
      unsigned long addr, pmd_t pmd);
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
    pmd_t pmd);

void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
    unsigned long size);
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
      unsigned long size, struct zap_details *details);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zap_vma_pages(struct vm_area_struct *vma)
{
 zap_page_range_single(vma, vma->vm_start,
         vma->vm_end - vma->vm_start, ((void *)0));
}
void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
  struct vm_area_struct *start_vma, unsigned long start,
  unsigned long end, unsigned long tree_end, bool mm_wr_locked);

struct mmu_notifier_range;

void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
  unsigned long end, unsigned long floor, unsigned long ceiling);
int
copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
int follow_pte(struct vm_area_struct *vma, unsigned long address,
        pte_t **ptepp, spinlock_t **ptlp);
int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
   void *buf, int len, int write);

extern void truncate_pagecache(struct inode *inode, loff_t new);
extern void truncate_setsize(struct inode *inode, loff_t newsize);
void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
int generic_error_remove_folio(struct address_space *mapping,
  struct folio *folio);

struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
  unsigned long address, struct pt_regs *regs);


extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
      unsigned long address, unsigned int flags,
      struct pt_regs *regs);
extern int fixup_user_fault(struct mm_struct *mm,
       unsigned long address, unsigned int fault_flags,
       bool *unlocked);
void unmap_mapping_pages(struct address_space *mapping,
  unsigned long start, unsigned long nr, bool even_cows);
void unmap_mapping_range(struct address_space *mapping,
  loff_t const holebegin, loff_t const holelen, int even_cows);
# 2449 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unmap_shared_mapping_range(struct address_space *mapping,
  loff_t const holebegin, loff_t const holelen)
{
 unmap_mapping_range(mapping, holebegin, holelen, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *vma_lookup(struct mm_struct *mm,
      unsigned long addr);

extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
  void *buf, int len, unsigned int gup_flags);
extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
  void *buf, int len, unsigned int gup_flags);

long get_user_pages_remote(struct mm_struct *mm,
      unsigned long start, unsigned long nr_pages,
      unsigned int gup_flags, struct page **pages,
      int *locked);
long pin_user_pages_remote(struct mm_struct *mm,
      unsigned long start, unsigned long nr_pages,
      unsigned int gup_flags, struct page **pages,
      int *locked);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *get_user_page_vma_remote(struct mm_struct *mm,
          unsigned long addr,
          int gup_flags,
          struct vm_area_struct **vmap)
{
 struct page *page;
 struct vm_area_struct *vma;
 int got;

 if (({ bool __ret_do_once = !!(__builtin_expect(!!(gup_flags & FOLL_NOWAIT), 0)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/mm.h", 2484, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return ERR_PTR(-22);

 got = get_user_pages_remote(mm, addr, 1, gup_flags, &page, ((void *)0));

 if (got < 0)
  return ERR_PTR(got);

 vma = vma_lookup(mm, addr);
 if (({ bool __ret_do_once = !!(!vma); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/mm.h", 2493, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); })) {
  put_page(page);
  return ERR_PTR(-22);
 }

 *vmap = vma;
 return page;
}

long get_user_pages(unsigned long start, unsigned long nr_pages,
      unsigned int gup_flags, struct page **pages);
long pin_user_pages(unsigned long start, unsigned long nr_pages,
      unsigned int gup_flags, struct page **pages);
long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
      struct page **pages, unsigned int gup_flags);
long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
      struct page **pages, unsigned int gup_flags);
long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
        struct folio **folios, unsigned int max_folios,
        unsigned long *offset);

int get_user_pages_fast(unsigned long start, int nr_pages,
   unsigned int gup_flags, struct page **pages);
int pin_user_pages_fast(unsigned long start, int nr_pages,
   unsigned int gup_flags, struct page **pages);
void folio_add_pin(struct folio *folio);

int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
   struct task_struct *task, bool bypass_rlim);

struct kvec;
struct page *get_dump_page(unsigned long addr);

bool folio_mark_dirty(struct folio *folio);
bool set_page_dirty(struct page *page);
int set_page_dirty_lock(struct page *page);

int get_cmdline(struct task_struct *task, char *buffer, int buflen);

extern unsigned long move_page_tables(struct vm_area_struct *vma,
  unsigned long old_addr, struct vm_area_struct *new_vma,
  unsigned long new_addr, unsigned long len,
  bool need_rmap_locks, bool for_stack);
# 2558 "../include/linux/mm.h"
bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
{






 if (vma->vm_flags & 0x00000008)
  return vma_wants_writenotify(vma, vma->vm_page_prot);
 return !!(vma->vm_flags & 0x00000002);

}
bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
        pte_t pte);
extern long change_protection(struct mmu_gather *tlb,
         struct vm_area_struct *vma, unsigned long start,
         unsigned long end, unsigned long cp_flags);
extern int mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
   struct vm_area_struct *vma, struct vm_area_struct **pprev,
   unsigned long start, unsigned long end, unsigned long newflags);




int get_user_pages_fast_only(unsigned long start, int nr_pages,
        unsigned int gup_flags, struct page **pages);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool get_user_page_fast_only(unsigned long addr,
   unsigned int gup_flags, struct page **pagep)
{
 return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
 return percpu_counter_read_positive(&mm->rss_stat[member]);
}

void mm_trace_rss_stat(struct mm_struct *mm, int member);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void add_mm_counter(struct mm_struct *mm, int member, long value)
{
 percpu_counter_add(&mm->rss_stat[member], value);

 mm_trace_rss_stat(mm, member);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inc_mm_counter(struct mm_struct *mm, int member)
{
 percpu_counter_inc(&mm->rss_stat[member]);

 mm_trace_rss_stat(mm, member);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dec_mm_counter(struct mm_struct *mm, int member)
{
 percpu_counter_dec(&mm->rss_stat[member]);

 mm_trace_rss_stat(mm, member);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mm_counter_file(struct folio *folio)
{
 if (folio_test_swapbacked(folio))
  return MM_SHMEMPAGES;
 return MM_FILEPAGES;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mm_counter(struct folio *folio)
{
 if (folio_test_anon(folio))
  return MM_ANONPAGES;
 return mm_counter_file(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_mm_rss(struct mm_struct *mm)
{
 return get_mm_counter(mm, MM_FILEPAGES) +
  get_mm_counter(mm, MM_ANONPAGES) +
  get_mm_counter(mm, MM_SHMEMPAGES);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_mm_hiwater_rss(struct mm_struct *mm)
{
 return __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((mm->hiwater_rss) - (get_mm_rss(mm))) * 0l)) : (int *)8))), ((mm->hiwater_rss) > (get_mm_rss(mm)) ? (mm->hiwater_rss) : (get_mm_rss(mm))), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(mm->hiwater_rss))(-1)) < ( typeof(mm->hiwater_rss))1)) * 0l)) : (int *)8))), (((typeof(mm->hiwater_rss))(-1)) < ( typeof(mm->hiwater_rss))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(get_mm_rss(mm)))(-1)) < ( typeof(get_mm_rss(mm)))1)) * 0l)) : (int *)8))), (((typeof(get_mm_rss(mm)))(-1)) < ( typeof(get_mm_rss(mm)))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((mm->hiwater_rss) + 0))(-1)) < ( typeof((mm->hiwater_rss) + 0))1)) * 0l)) : (int *)8))), (((typeof((mm->hiwater_rss) + 0))(-1)) < ( typeof((mm->hiwater_rss) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((get_mm_rss(mm)) + 0))(-1)) < ( typeof((get_mm_rss(mm)) + 0))1)) * 0l)) : (int *)8))), (((typeof((get_mm_rss(mm)) + 0))(-1)) < ( typeof((get_mm_rss(mm)) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(mm->hiwater_rss) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(mm->hiwater_rss))(-1)) < ( typeof(mm->hiwater_rss))1)) * 0l)) : (int *)8))), (((typeof(mm->hiwater_rss))(-1)) < ( typeof(mm->hiwater_rss))1), 0), mm->hiwater_rss, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(get_mm_rss(mm)) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? 
((void *)((long)((((typeof(get_mm_rss(mm)))(-1)) < ( typeof(get_mm_rss(mm)))1)) * 0l)) : (int *)8))), (((typeof(get_mm_rss(mm)))(-1)) < ( typeof(get_mm_rss(mm)))1), 0), get_mm_rss(mm), -1) >= 0)), "max" "(" "mm->hiwater_rss" ", " "get_mm_rss(mm)" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_244 = (mm->hiwater_rss); __auto_type __UNIQUE_ID_y_245 = (get_mm_rss(mm)); ((__UNIQUE_ID_x_244) > (__UNIQUE_ID_y_245) ? (__UNIQUE_ID_x_244) : (__UNIQUE_ID_y_245)); }); }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_mm_hiwater_vm(struct mm_struct *mm)
{
 return __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((mm->hiwater_vm) - (mm->total_vm)) * 0l)) : (int *)8))), ((mm->hiwater_vm) > (mm->total_vm) ? (mm->hiwater_vm) : (mm->total_vm)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(mm->hiwater_vm))(-1)) < ( typeof(mm->hiwater_vm))1)) * 0l)) : (int *)8))), (((typeof(mm->hiwater_vm))(-1)) < ( typeof(mm->hiwater_vm))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(mm->total_vm))(-1)) < ( typeof(mm->total_vm))1)) * 0l)) : (int *)8))), (((typeof(mm->total_vm))(-1)) < ( typeof(mm->total_vm))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((mm->hiwater_vm) + 0))(-1)) < ( typeof((mm->hiwater_vm) + 0))1)) * 0l)) : (int *)8))), (((typeof((mm->hiwater_vm) + 0))(-1)) < ( typeof((mm->hiwater_vm) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((mm->total_vm) + 0))(-1)) < ( typeof((mm->total_vm) + 0))1)) * 0l)) : (int *)8))), (((typeof((mm->total_vm) + 0))(-1)) < ( typeof((mm->total_vm) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(mm->hiwater_vm) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(mm->hiwater_vm))(-1)) < ( typeof(mm->hiwater_vm))1)) * 0l)) : (int *)8))), (((typeof(mm->hiwater_vm))(-1)) < ( typeof(mm->hiwater_vm))1), 0), mm->hiwater_vm, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(mm->total_vm) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? 
((void *)((long)((((typeof(mm->total_vm))(-1)) < ( typeof(mm->total_vm))1)) * 0l)) : (int *)8))), (((typeof(mm->total_vm))(-1)) < ( typeof(mm->total_vm))1), 0), mm->total_vm, -1) >= 0)), "max" "(" "mm->hiwater_vm" ", " "mm->total_vm" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_246 = (mm->hiwater_vm); __auto_type __UNIQUE_ID_y_247 = (mm->total_vm); ((__UNIQUE_ID_x_246) > (__UNIQUE_ID_y_247) ? (__UNIQUE_ID_x_246) : (__UNIQUE_ID_y_247)); }); }));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_hiwater_rss(struct mm_struct *mm)
{
 unsigned long _rss = get_mm_rss(mm);

 if ((mm)->hiwater_rss < _rss)
  (mm)->hiwater_rss = _rss;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void update_hiwater_vm(struct mm_struct *mm)
{
 if (mm->hiwater_vm < mm->total_vm)
  mm->hiwater_vm = mm->total_vm;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reset_mm_hiwater_rss(struct mm_struct *mm)
{
 mm->hiwater_rss = get_mm_rss(mm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void setmax_mm_hiwater_rss(unsigned long *maxrss,
      struct mm_struct *mm)
{
 unsigned long hiwater_rss = get_mm_hiwater_rss(mm);

 if (*maxrss < hiwater_rss)
  *maxrss = hiwater_rss;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_special(pte_t pte)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t pte_mkspecial(pte_t pte)
{
 return pte;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pte_devmap(pte_t pte)
{
 return 0;
}


extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
          spinlock_t **ptl);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
        spinlock_t **ptl)
{
 pte_t *ptep;
 (ptep = __get_locked_pte(mm, addr, ptl));
 return ptep;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
      unsigned long address)
{
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __pud_alloc(struct mm_struct *mm, p4d_t *p4d,
      unsigned long address)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_inc_nr_puds(struct mm_struct *mm) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_dec_nr_puds(struct mm_struct *mm) {}
# 2751 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __pmd_alloc(struct mm_struct *mm, pud_t *pud,
      unsigned long address)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_inc_nr_pmds(struct mm_struct *mm) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_dec_nr_pmds(struct mm_struct *mm) {}
# 2779 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_pgtables_bytes_init(struct mm_struct *mm)
{
 atomic_long_set(&mm->pgtables_bytes, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long mm_pgtables_bytes(const struct mm_struct *mm)
{
 return atomic_long_read(&mm->pgtables_bytes);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_inc_nr_ptes(struct mm_struct *mm)
{
 atomic_long_add(256 * sizeof(pte_t), &mm->pgtables_bytes);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_dec_nr_ptes(struct mm_struct *mm)
{
 atomic_long_sub(256 * sizeof(pte_t), &mm->pgtables_bytes);
}
# 2810 "../include/linux/mm.h"
int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
int __pte_alloc_kernel(pmd_t *pmd);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
  unsigned long address)
{
 return (__builtin_expect(!!(pgd_none(*pgd)), 0) && __p4d_alloc(mm, pgd, address)) ?
  ((void *)0) : p4d_offset(pgd, address);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pud_t *pud_alloc(struct mm_struct *mm, p4d_t *p4d,
  unsigned long address)
{
 return (__builtin_expect(!!(p4d_none(*p4d)), 0) && __pud_alloc(mm, p4d, address)) ?
  ((void *)0) : pud_offset(p4d, address);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
{
 return (__builtin_expect(!!(pud_none(*pud)), 0) && __pmd_alloc(mm, pud, address))?
  ((void *)0): pmd_offset(pud, address);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ptdesc *virt_to_ptdesc(const void *x)
{
 return (_Generic(((mem_map + ((((((unsigned long)(x) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14)))), const struct page *: (const struct ptdesc *)((mem_map + ((((((unsigned long)(x) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14)))), struct page *: (struct ptdesc *)((mem_map + ((((((unsigned long)(x) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14))))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ptdesc_to_virt(const struct ptdesc *pt)
{
 return ((void *)((unsigned long)((((unsigned long)(((_Generic((pt), const struct ptdesc *: (const struct page *)(pt), struct ptdesc *: (struct page *)(pt)))) - mem_map) + (__phys_offset >> 14)) << 14)) - __phys_offset + (0xc0000000UL)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ptdesc_address(const struct ptdesc *pt)
{
 return folio_address((_Generic((pt), const struct ptdesc *: (const struct folio *)(pt), struct ptdesc *: (struct folio *)(pt))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pagetable_is_reserved(struct ptdesc *pt)
{
 return folio_test_reserved((_Generic((pt), const struct ptdesc *: (const struct folio *)(pt), struct ptdesc *: (struct folio *)(pt))));
}
# 2866 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ptdesc *pagetable_alloc_noprof(gfp_t gfp, unsigned int order)
{
 struct page *page = alloc_pages_noprof(gfp | (( gfp_t)((((1UL))) << (___GFP_COMP_BIT))), order);

 return (_Generic((page), const struct page *: (const struct ptdesc *)(page), struct page *: (struct ptdesc *)(page)));
}
# 2881 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagetable_free(struct ptdesc *pt)
{
 struct page *page = (_Generic((pt), const struct ptdesc *: (const struct page *)(pt), struct ptdesc *: (struct page *)(pt)));

 __free_pages(page, compound_order(page));
}
# 2943 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
 return &mm->page_table_lock;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptlock_cache_init(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ptlock_init(struct ptdesc *ptdesc) { return true; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ptlock_free(struct ptdesc *ptdesc) {}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pagetable_pte_ctor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 if (!ptlock_init(ptdesc))
  return false;
 __folio_set_pgtable(folio);
 lruvec_stat_add_folio(folio, NR_PAGETABLE);
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagetable_pte_dtor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 ptlock_free(ptdesc);
 __folio_clear_pgtable(folio);
 lruvec_stat_sub_folio(folio, NR_PAGETABLE);
}

pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr)
{
 return __pte_offset_map(pmd, addr, ((void *)0));
}

pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
   unsigned long addr, spinlock_t **ptlp);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
   unsigned long addr, spinlock_t **ptlp)
{
 pte_t *pte;

 (pte = __pte_offset_map_lock(mm, pmd, addr, ptlp));
 return pte;
}

pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
   unsigned long addr, spinlock_t **ptlp);
# 3048 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
 return &mm->page_table_lock;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pmd_ptlock_free(struct ptdesc *ptdesc) {}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
{
 spinlock_t *ptl = pmd_lockptr(mm, pmd);
 spin_lock(ptl);
 return ptl;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 if (!pmd_ptlock_init(ptdesc))
  return false;
 __folio_set_pgtable(folio);
 lruvec_stat_add_folio(folio, NR_PAGETABLE);
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagetable_pmd_dtor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 pmd_ptlock_free(ptdesc);
 __folio_clear_pgtable(folio);
 lruvec_stat_sub_folio(folio, NR_PAGETABLE);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
{
 return &mm->page_table_lock;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
{
 spinlock_t *ptl = pud_lockptr(mm, pud);

 spin_lock(ptl);
 return ptl;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagetable_pud_ctor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 __folio_set_pgtable(folio);
 lruvec_stat_add_folio(folio, NR_PAGETABLE);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pagetable_pud_dtor(struct ptdesc *ptdesc)
{
 struct folio *folio = (_Generic((ptdesc), const struct ptdesc *: (const struct folio *)(ptdesc), struct ptdesc *: (struct folio *)(ptdesc)));

 __folio_clear_pgtable(folio);
 lruvec_stat_sub_folio(folio, NR_PAGETABLE);
}

extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) pagecache_init(void);
extern void free_initmem(void);







extern unsigned long free_reserved_area(void *start, void *end,
     int poison, const char *s);

extern void adjust_managed_page_count(struct page *page, long count);

extern void reserve_bootmem_region(phys_addr_t start,
       phys_addr_t end, int nid);


void free_reserved_page(struct page *page);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mark_page_reserved(struct page *page)
{
 SetPageReserved(page);
 adjust_managed_page_count(page, -1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void free_reserved_ptdesc(struct ptdesc *pt)
{
 free_reserved_page((_Generic((pt), const struct ptdesc *: (const struct page *)(pt), struct ptdesc *: (struct page *)(pt))));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long free_initmem_default(int poison)
{
 extern char __init_begin[], __init_end[];

 return free_reserved_area(&__init_begin, &__init_end,
      poison, "unused kernel image (initmem)");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long get_num_physpages(void)
{
 int nid;
 unsigned long phys_pages = 0;

 for ( (nid) = 0; (nid) == 0; (nid) = 1)
  phys_pages += (NODE_DATA(nid)->node_present_pages);

 return phys_pages;
}
# 3195 "../include/linux/mm.h"
void free_area_init(unsigned long *max_zone_pfn);
unsigned long node_map_pfn_alignment(void);
extern unsigned long absent_pages_in_range(unsigned long start_pfn,
      unsigned long end_pfn);
extern void get_pfn_range_for_nid(unsigned int nid,
   unsigned long *start_pfn, unsigned long *end_pfn);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int early_pfn_to_nid(unsigned long pfn)
{
 return 0;
}





extern void mem_init(void);
extern void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) mmap_init(void);

extern void __show_mem(unsigned int flags, nodemask_t *nodemask, int max_zone_idx);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void show_mem(void)
{
 __show_mem(0, ((void *)0), 2 - 1);
}
extern long si_mem_available(void);
extern void si_meminfo(struct sysinfo * val);
extern void si_meminfo_node(struct sysinfo *val, int nid);

extern __attribute__((__format__(printf, 3, 4)))
void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...);

extern void setup_per_cpu_pageset(void);


extern atomic_long_t mmap_pages_allocated;
extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);


void vma_interval_tree_insert(struct vm_area_struct *node,
         struct rb_root_cached *root);
void vma_interval_tree_insert_after(struct vm_area_struct *node,
        struct vm_area_struct *prev,
        struct rb_root_cached *root);
void vma_interval_tree_remove(struct vm_area_struct *node,
         struct rb_root_cached *root);
struct vm_area_struct *vma_interval_tree_iter_first(struct rb_root_cached *root,
    unsigned long start, unsigned long last);
struct vm_area_struct *vma_interval_tree_iter_next(struct vm_area_struct *node,
    unsigned long start, unsigned long last);





void anon_vma_interval_tree_insert(struct anon_vma_chain *node,
       struct rb_root_cached *root);
void anon_vma_interval_tree_remove(struct anon_vma_chain *node,
       struct rb_root_cached *root);
struct anon_vma_chain *
anon_vma_interval_tree_iter_first(struct rb_root_cached *root,
      unsigned long start, unsigned long last);
struct anon_vma_chain *anon_vma_interval_tree_iter_next(
 struct anon_vma_chain *node, unsigned long start, unsigned long last);
# 3268 "../include/linux/mm.h"
extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
extern int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
        unsigned long start, unsigned long end, unsigned long pgoff,
        struct vm_area_struct *next);
extern int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
         unsigned long start, unsigned long end, unsigned long pgoff);
extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
extern void unlink_file_vma(struct vm_area_struct *);
extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
 unsigned long addr, unsigned long len, unsigned long pgoff,
 bool *need_rmap_locks);
extern void exit_mmap(struct mm_struct *);
struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
      struct vm_area_struct *prev,
      struct vm_area_struct *vma,
      unsigned long start, unsigned long end,
      unsigned long vm_flags,
      struct mempolicy *policy,
      struct vm_userfaultfd_ctx uffd_ctx,
      struct anon_vma_name *anon_name);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct
*vma_modify_flags(struct vma_iterator *vmi,
    struct vm_area_struct *prev,
    struct vm_area_struct *vma,
    unsigned long start, unsigned long end,
    unsigned long new_flags)
{
 return vma_modify(vmi, prev, vma, start, end, new_flags,
     ((void *)0), vma->vm_userfaultfd_ctx,
     anon_vma_name(vma));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct
*vma_modify_flags_name(struct vma_iterator *vmi,
         struct vm_area_struct *prev,
         struct vm_area_struct *vma,
         unsigned long start,
         unsigned long end,
         unsigned long new_flags,
         struct anon_vma_name *new_name)
{
 return vma_modify(vmi, prev, vma, start, end, new_flags,
     ((void *)0), vma->vm_userfaultfd_ctx, new_name);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct
*vma_modify_policy(struct vma_iterator *vmi,
     struct vm_area_struct *prev,
     struct vm_area_struct *vma,
     unsigned long start, unsigned long end,
     struct mempolicy *new_pol)
{
 return vma_modify(vmi, prev, vma, start, end, vma->vm_flags,
     new_pol, vma->vm_userfaultfd_ctx, anon_vma_name(vma));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct
*vma_modify_flags_uffd(struct vma_iterator *vmi,
         struct vm_area_struct *prev,
         struct vm_area_struct *vma,
         unsigned long start, unsigned long end,
         unsigned long new_flags,
         struct vm_userfaultfd_ctx new_ctx)
{
 return vma_modify(vmi, prev, vma, start, end, new_flags,
     ((void *)0), new_ctx, anon_vma_name(vma));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int check_data_rlimit(unsigned long rlim,
        unsigned long new,
        unsigned long start,
        unsigned long end_data,
        unsigned long start_data)
{
 if (rlim < (~0UL)) {
  if (((new - start) + (end_data - start_data)) > rlim)
   return -28;
 }

 return 0;
}

extern int mm_take_all_locks(struct mm_struct *mm);
extern void mm_drop_all_locks(struct mm_struct *mm);

extern int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file);
extern int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file);
extern struct file *get_mm_exe_file(struct mm_struct *mm);
extern struct file *get_task_exe_file(struct task_struct *task);

extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long npages);
extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages);

extern bool vma_is_special_mapping(const struct vm_area_struct *vma,
       const struct vm_special_mapping *sm);
extern struct vm_area_struct *_install_special_mapping(struct mm_struct *mm,
       unsigned long addr, unsigned long len,
       unsigned long flags,
       const struct vm_special_mapping *spec);

extern int install_special_mapping(struct mm_struct *mm,
       unsigned long addr, unsigned long len,
       unsigned long flags, struct page **pages);

unsigned long randomize_stack_top(unsigned long stack_top);
unsigned long randomize_page(unsigned long start, unsigned long range);

unsigned long
__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
      unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
    unsigned long pgoff, unsigned long flags)
{
 return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
}

extern unsigned long mmap_region(struct file *file, unsigned long addr,
 unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 struct list_head *uf);
extern unsigned long do_mmap(struct file *file, unsigned long addr,
 unsigned long len, unsigned long prot, unsigned long flags,
 vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
 struct list_head *uf);
extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
    unsigned long start, size_t len, struct list_head *uf,
    bool unlock);
extern int do_munmap(struct mm_struct *, unsigned long, size_t,
       struct list_head *uf);
extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);


extern int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
    unsigned long start, unsigned long end,
    struct list_head *uf, bool unlock);
extern int __mm_populate(unsigned long addr, unsigned long len,
    int ignore_errors);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mm_populate(unsigned long addr, unsigned long len)
{

 (void) __mm_populate(addr, len, 1);
}





extern int __attribute__((__warn_unused_result__)) vm_brk_flags(unsigned long, unsigned long, unsigned long);
extern int vm_munmap(unsigned long, size_t);
extern unsigned long __attribute__((__warn_unused_result__)) vm_mmap(struct file *, unsigned long,
        unsigned long, unsigned long,
        unsigned long, unsigned long);

struct vm_unmapped_area_info {

 unsigned long flags;
 unsigned long length;
 unsigned long low_limit;
 unsigned long high_limit;
 unsigned long align_mask;
 unsigned long align_offset;
 unsigned long start_gap;
};

extern unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info);


extern void truncate_inode_pages(struct address_space *, loff_t);
extern void truncate_inode_pages_range(struct address_space *,
           loff_t lstart, loff_t lend);
extern void truncate_inode_pages_final(struct address_space *);


extern vm_fault_t filemap_fault(struct vm_fault *vmf);
extern vm_fault_t filemap_map_pages(struct vm_fault *vmf,
  unsigned long start_pgoff, unsigned long end_pgoff);
extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);

extern unsigned long stack_guard_gap;

int expand_stack_locked(struct vm_area_struct *vma, unsigned long address);
struct vm_area_struct *expand_stack(struct mm_struct * mm, unsigned long addr);


int expand_downwards(struct vm_area_struct *vma, unsigned long address);


extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr,
          struct vm_area_struct **pprev);





struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
   unsigned long start_addr, unsigned long end_addr);
# 3480 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
{
 return mtree_load(&mm->mm_mt, addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
{
 if (vma->vm_flags & 0x00000100)
  return stack_guard_gap;


 if (vma->vm_flags & 0x00000000)
  return (1UL << 14);

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vm_start_gap(struct vm_area_struct *vma)
{
 unsigned long gap = stack_guard_start_gap(vma);
 unsigned long vm_start = vma->vm_start;

 vm_start -= gap;
 if (vm_start > vma->vm_start)
  vm_start = 0;
 return vm_start;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vm_end_gap(struct vm_area_struct *vma)
{
 unsigned long vm_end = vma->vm_end;

 if (vma->vm_flags & 0x00000000) {
  vm_end += stack_guard_gap;
  if (vm_end < vma->vm_end)
   vm_end = -(1UL << 14);
 }
 return vm_end;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vma_pages(struct vm_area_struct *vma)
{
 return (vma->vm_end - vma->vm_start) >> 14;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
    unsigned long vm_start, unsigned long vm_end)
{
 struct vm_area_struct *vma = vma_lookup(mm, vm_start);

 if (vma && (vma->vm_start != vm_start || vma->vm_end != vm_end))
  vma = ((void *)0);

 return vma;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool range_in_vma(struct vm_area_struct *vma,
    unsigned long start, unsigned long end)
{
 return (vma && vma->vm_start <= start && end <= vma->vm_end);
}


pgprot_t vm_get_page_prot(unsigned long vm_flags);
void vma_set_page_prot(struct vm_area_struct *vma);
# 3558 "../include/linux/mm.h"
void vma_set_file(struct vm_area_struct *vma, struct file *file);






struct vm_area_struct *find_extend_vma_locked(struct mm_struct *,
  unsigned long addr);
int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
   unsigned long pfn, unsigned long size, pgprot_t);
int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
  unsigned long pfn, unsigned long size, pgprot_t prot);
int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
   struct page **pages, unsigned long *num);
int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
    unsigned long num);
int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
    unsigned long num);
vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
   unsigned long pfn);
vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
   unsigned long pfn, pgprot_t pgprot);
vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
   pfn_t pfn);
vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
  unsigned long addr, pfn_t pfn);
int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
    unsigned long addr, struct page *page)
{
 int err = vm_insert_page(vma, addr, page);

 if (err == -12)
  return VM_FAULT_OOM;
 if (err < 0 && err != -16)
  return VM_FAULT_SIGBUS;

 return VM_FAULT_NOPAGE;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int io_remap_pfn_range(struct vm_area_struct *vma,
         unsigned long addr, unsigned long pfn,
         unsigned long size, pgprot_t prot)
{
 return remap_pfn_range(vma, addr, pfn, size, (prot));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vm_fault_t vmf_error(int err)
{
 if (err == -12)
  return VM_FAULT_OOM;
 else if (err == -133)
  return VM_FAULT_HWPOISON;
 return VM_FAULT_SIGBUS;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) vm_fault_t vmf_fs_error(int err)
{
 if (err == 0)
  return VM_FAULT_LOCKED;
 if (err == -14 || err == -11)
  return VM_FAULT_NOPAGE;
 if (err == -12)
  return VM_FAULT_OOM;

 return VM_FAULT_SIGBUS;
}

struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
    unsigned int foll_flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
{
 if (vm_fault & VM_FAULT_OOM)
  return -12;
 if (vm_fault & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
  return (foll_flags & FOLL_HWPOISON) ? -133 : -14;
 if (vm_fault & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV))
  return -14;
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool gup_can_follow_protnone(struct vm_area_struct *vma,
        unsigned int flags)
{




 if (!(flags & FOLL_HONOR_NUMA_FAULT))
  return true;
# 3672 "../include/linux/mm.h"
 return !vma_is_accessible(vma);
}

typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
          unsigned long size, pte_fn_t fn, void *data);
extern int apply_to_existing_page_range(struct mm_struct *mm,
       unsigned long address, unsigned long size,
       pte_fn_t fn, void *data);


extern void __kernel_poison_pages(struct page *page, int numpages);
extern void __kernel_unpoison_pages(struct page *page, int numpages);
extern bool _page_poisoning_enabled_early;
extern struct static_key_false _page_poisoning_enabled;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_poisoning_enabled(void)
{
 return _page_poisoning_enabled_early;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_poisoning_enabled_static(void)
{
 return __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&_page_poisoning_enabled)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&_page_poisoning_enabled)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&_page_poisoning_enabled)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&_page_poisoning_enabled)->key) > 0; })), 0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kernel_poison_pages(struct page *page, int numpages)
{
 if (page_poisoning_enabled_static())
  __kernel_poison_pages(page, numpages);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kernel_unpoison_pages(struct page *page, int numpages)
{
 if (page_poisoning_enabled_static())
  __kernel_unpoison_pages(page, numpages);
}
# 3717 "../include/linux/mm.h"
extern struct static_key_false init_on_alloc;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool want_init_on_alloc(gfp_t flags)
{
 if ((0 ? __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&init_on_alloc)->key) > 0; })), 1) : __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&init_on_alloc)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&init_on_alloc)->key) > 0; })), 0)))

  return true;
 return flags & (( gfp_t)((((1UL))) << (___GFP_ZERO_BIT)));
}

extern struct static_key_true init_on_free;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool want_init_on_free(void)
{
 return (1 ? __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&init_on_free)->key) > 0; })), 1) : __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&init_on_free)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&init_on_free)->key) > 0; })), 0));

}

extern bool _debug_pagealloc_enabled_early;
extern struct static_key_false _debug_pagealloc_enabled;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool debug_pagealloc_enabled(void)
{
 return 1 &&
  _debug_pagealloc_enabled_early;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool debug_pagealloc_enabled_static(void)
{
 if (!1)
  return false;

 return __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&_debug_pagealloc_enabled)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&_debug_pagealloc_enabled)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&_debug_pagealloc_enabled)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&_debug_pagealloc_enabled)->key) > 0; })), 0);
}





extern void __kernel_map_pages(struct page *page, int numpages, int enable);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void debug_pagealloc_map_pages(struct page *page, int numpages)
{
 if (debug_pagealloc_enabled_static())
  __kernel_map_pages(page, numpages, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void debug_pagealloc_unmap_pages(struct page *page, int numpages)
{
 if (debug_pagealloc_enabled_static())
  __kernel_map_pages(page, numpages, 0);
}

extern unsigned int _debug_guardpage_minorder;
extern struct static_key_false _debug_guardpage_enabled;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int debug_guardpage_minorder(void)
{
 return _debug_guardpage_minorder;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool debug_guardpage_enabled(void)
{
 return __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&_debug_guardpage_enabled)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&_debug_guardpage_enabled)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&_debug_guardpage_enabled)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&_debug_guardpage_enabled)->key) > 0; })), 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool page_is_guard(struct page *page)
{
 if (!debug_guardpage_enabled())
  return false;

 return PageGuard(page);
}

bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool set_page_guard(struct zone *zone, struct page *page,
      unsigned int order)
{
 if (!debug_guardpage_enabled())
  return false;
 return __set_page_guard(zone, page, order);
}

void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_page_guard(struct zone *zone, struct page *page,
        unsigned int order)
{
 if (!debug_guardpage_enabled())
  return;
 __clear_page_guard(zone, page, order);
}
# 3828 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int in_gate_area_no_mm(unsigned long addr) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
 return 0;
}


extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);


extern int sysctl_drop_caches;
int drop_caches_sysctl_handler(const struct ctl_table *, int, void *, size_t *,
  loff_t *);


void drop_slab(void);




extern int randomize_va_space;


const char * arch_vma_name(struct vm_area_struct *vma);

void print_vma_addr(char *prefix, unsigned long rip);






void *sparse_buffer_alloc(unsigned long size);
struct page * __populate_section_memmap(unsigned long pfn,
  unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
  struct dev_pagemap *pgmap);
void pmd_init(void *addr);
void pud_init(void *addr);
pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
       struct vmem_altmap *altmap, struct page *reuse);
void *vmemmap_alloc_block(unsigned long size, int node);
struct vmem_altmap;
void *vmemmap_alloc_block_buf(unsigned long size, int node,
         struct vmem_altmap *altmap);
void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
void vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
       unsigned long addr, unsigned long next);
int vmemmap_check_pmd(pmd_t *pmd, int node,
        unsigned long addr, unsigned long next);
int vmemmap_populate_basepages(unsigned long start, unsigned long end,
          int node, struct vmem_altmap *altmap);
int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
          int node, struct vmem_altmap *altmap);
int vmemmap_populate(unsigned long start, unsigned long end, int node,
  struct vmem_altmap *altmap);
void vmemmap_populate_print_last(void);
# 3912 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vmem_altmap_free(struct vmem_altmap *altmap,
        unsigned long nr_pfns)
{
}
# 3950 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vmemmap_can_optimize(struct vmem_altmap *altmap,
        struct dev_pagemap *pgmap)
{
 return false;
}


void register_page_bootmem_memmap(unsigned long section_nr, struct page *map,
      unsigned long nr_pages);

enum mf_flags {
 MF_COUNT_INCREASED = 1 << 0,
 MF_ACTION_REQUIRED = 1 << 1,
 MF_MUST_KILL = 1 << 2,
 MF_SOFT_OFFLINE = 1 << 3,
 MF_UNPOISON = 1 << 4,
 MF_SW_SIMULATED = 1 << 5,
 MF_NO_RETRY = 1 << 6,
 MF_MEM_PRE_REMOVE = 1 << 7,
};
int mf_dax_kill_procs(struct address_space *mapping, unsigned long index,
        unsigned long count, int mf_flags);
extern int memory_failure(unsigned long pfn, int flags);
extern void memory_failure_queue_kick(int cpu);
extern int unpoison_memory(unsigned long pfn);
extern atomic_long_t num_poisoned_pages ;
extern int soft_offline_page(unsigned long pfn, int flags);
# 3988 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memory_failure_queue(unsigned long pfn, int flags)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
     bool *migratable_cleared)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void num_poisoned_pages_inc(unsigned long pfn)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void num_poisoned_pages_sub(unsigned long pfn, long i)
{
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memblk_nr_poison_inc(unsigned long pfn)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memblk_nr_poison_sub(unsigned long pfn, long i)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_memory_failure(unsigned long pfn, int flags)
{
 return -6;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool arch_is_platform_page(u64 paddr)
{
 return false;
}





enum mf_result {
 MF_IGNORED,
 MF_FAILED,
 MF_DELAYED,
 MF_RECOVERED,
};

enum mf_action_page_type {
 MF_MSG_KERNEL,
 MF_MSG_KERNEL_HIGH_ORDER,
 MF_MSG_DIFFERENT_COMPOUND,
 MF_MSG_HUGE,
 MF_MSG_FREE_HUGE,
 MF_MSG_GET_HWPOISON,
 MF_MSG_UNMAP_FAILED,
 MF_MSG_DIRTY_SWAPCACHE,
 MF_MSG_CLEAN_SWAPCACHE,
 MF_MSG_DIRTY_MLOCKED_LRU,
 MF_MSG_CLEAN_MLOCKED_LRU,
 MF_MSG_DIRTY_UNEVICTABLE_LRU,
 MF_MSG_CLEAN_UNEVICTABLE_LRU,
 MF_MSG_DIRTY_LRU,
 MF_MSG_CLEAN_LRU,
 MF_MSG_TRUNCATED_LRU,
 MF_MSG_BUDDY,
 MF_MSG_DAX,
 MF_MSG_UNSPLIT_THP,
 MF_MSG_ALREADY_POISONED,
 MF_MSG_UNKNOWN,
};
# 4098 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void setup_nr_node_ids(void) {}


extern int memcmp_pages(struct page *page1, struct page *page2);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pages_identical(struct page *page1, struct page *page2)
{
 return !memcmp_pages(page1, page2);
}
# 4120 "../include/linux/mm.h"
extern int sysctl_nr_trim_pages;




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_dump_obj(void *object) {}
# 4137 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int seal_check_write(int seals, struct vm_area_struct *vma)
{
 if (seals & (0x0008 | 0x0010)) {




  if ((vma->vm_flags & 0x00000008) && (vma->vm_flags & 0x00000002))
   return -1;
# 4154 "../include/linux/mm.h"
  if (vma->vm_flags & 0x00000008)
   vm_flags_clear(vma, 0x00000020);
 }

 return 0;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
        unsigned long len_in, struct anon_vma_name *anon_name) {
 return 0;
}
# 4180 "../include/linux/mm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool range_contains_unaccepted_memory(phys_addr_t start,
          phys_addr_t end)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void accept_memory(phys_addr_t start, phys_addr_t end)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pfn_is_unaccepted_memory(unsigned long pfn)
{
 phys_addr_t paddr = pfn << 14;

 return range_contains_unaccepted_memory(paddr, paddr + (1UL << 14));
}

void vma_pgtable_walk_begin(struct vm_area_struct *vma);
void vma_pgtable_walk_end(struct vm_area_struct *vma);

int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *size);
# 11 "../include/linux/highmem.h" 2

# 1 "../include/linux/hardirq.h" 1




# 1 "../include/linux/context_tracking_state.h" 1





# 1 "../include/linux/static_key.h" 1
# 7 "../include/linux/context_tracking_state.h" 2





enum ctx_state {
 CONTEXT_DISABLED = -1,
 CONTEXT_KERNEL = 0,
 CONTEXT_IDLE = 1,
 CONTEXT_USER = 2,
 CONTEXT_GUEST = 3,
 CONTEXT_MAX = 4,
};







struct context_tracking {
# 45 "../include/linux/context_tracking_state.h"
};
# 143 "../include/linux/context_tracking_state.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool context_tracking_enabled(void) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool context_tracking_enabled_cpu(int cpu) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) bool context_tracking_enabled_this_cpu(void) { return false; }
# 6 "../include/linux/hardirq.h" 2


# 1 "../include/linux/ftrace_irq.h" 1
# 15 "../include/linux/ftrace_irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ftrace_nmi_enter(void)
{
# 25 "../include/linux/ftrace_irq.h"
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ftrace_nmi_exit(void)
{
# 37 "../include/linux/ftrace_irq.h"
}
# 9 "../include/linux/hardirq.h" 2

# 1 "../include/linux/vtime.h" 1
# 23 "../include/linux/vtime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_user_enter(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_user_exit(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_guest_enter(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_guest_exit(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_init_idle(struct task_struct *tsk, int cpu) { }
# 36 "../include/linux/vtime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_account_irq(struct task_struct *tsk, unsigned int offset) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_account_softirq(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_account_hardirq(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_flush(struct task_struct *tsk) { }
# 111 "../include/linux/vtime.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool vtime_accounting_enabled_this_cpu(void) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vtime_task_switch(struct task_struct *prev) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void vtime_account_guest_enter(void)
{
 (__current_thread_info->task)->flags |= 0x00000001;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void vtime_account_guest_exit(void)
{
 (__current_thread_info->task)->flags &= ~0x00000001;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqtime_account_irq(struct task_struct *tsk, unsigned int offset) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void account_softirq_enter(struct task_struct *tsk)
{
 vtime_account_irq(tsk, (1UL << (0 + 8)));
 irqtime_account_irq(tsk, (1UL << (0 + 8)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void account_softirq_exit(struct task_struct *tsk)
{
 vtime_account_softirq(tsk);
 irqtime_account_irq(tsk, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void account_hardirq_enter(struct task_struct *tsk)
{
 vtime_account_irq(tsk, (1UL << ((0 + 8) + 8)));
 irqtime_account_irq(tsk, (1UL << ((0 + 8) + 8)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void account_hardirq_exit(struct task_struct *tsk)
{
 vtime_account_hardirq(tsk);
 irqtime_account_irq(tsk, 0);
}
# 11 "../include/linux/hardirq.h" 2
# 1 "./arch/hexagon/include/generated/asm/hardirq.h" 1
# 1 "../include/asm-generic/hardirq.h" 1







typedef struct {
 unsigned int __softirq_pending;



} __attribute__((__aligned__((1 << (5))))) irq_cpustat_t;

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_irq_stat; extern __attribute__((section(".data" "..shared_aligned"))) __typeof__(irq_cpustat_t) irq_stat __attribute__((__aligned__((1 << (5)))));

# 1 "../include/linux/irq.h" 1
# 16 "../include/linux/irq.h"
# 1 "../include/linux/irqhandler.h" 1
# 10 "../include/linux/irqhandler.h"
struct irq_desc;

typedef void (*irq_flow_handler_t)(struct irq_desc *desc);
# 17 "../include/linux/irq.h" 2
# 1 "../include/linux/irqreturn.h" 1
# 11 "../include/linux/irqreturn.h"
enum irqreturn {
 IRQ_NONE = (0 << 0),
 IRQ_HANDLED = (1 << 0),
 IRQ_WAKE_THREAD = (1 << 1),
};

typedef enum irqreturn irqreturn_t;
# 18 "../include/linux/irq.h" 2


# 1 "../include/linux/io.h" 1
# 14 "../include/linux/io.h"
# 1 "../arch/hexagon/include/asm/io.h" 1
# 14 "../arch/hexagon/include/asm/io.h"
# 1 "./arch/hexagon/include/generated/asm/iomap.h" 1
# 1 "../include/asm-generic/iomap.h" 1
# 29 "../include/asm-generic/iomap.h"
extern unsigned int ioread8(const void *);
extern unsigned int ioread16(const void *);
extern unsigned int ioread16be(const void *);
extern unsigned int ioread32(const void *);
extern unsigned int ioread32be(const void *);
# 50 "../include/asm-generic/iomap.h"
extern void iowrite8(u8, void *);
extern void iowrite16(u16, void *);
extern void iowrite16be(u16, void *);
extern void iowrite32(u32, void *);
extern void iowrite32be(u32, void *);
# 82 "../include/asm-generic/iomap.h"
extern void ioread8_rep(const void *port, void *buf, unsigned long count);
extern void ioread16_rep(const void *port, void *buf, unsigned long count);
extern void ioread32_rep(const void *port, void *buf, unsigned long count);

extern void iowrite8_rep(void *port, const void *buf, unsigned long count);
extern void iowrite16_rep(void *port, const void *buf, unsigned long count);
extern void iowrite32_rep(void *port, const void *buf, unsigned long count);



extern void *ioport_map(unsigned long port, unsigned int nr);
extern void ioport_unmap(void *);
# 107 "../include/asm-generic/iomap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ioremap_np(phys_addr_t offset, size_t size)
{
 return ((void *)0);
}


# 1 "../include/asm-generic/pci_iomap.h" 1
# 10 "../include/asm-generic/pci_iomap.h"
struct pci_dev;
# 35 "../include/asm-generic/pci_iomap.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *pci_iomap(struct pci_dev *dev, int bar, unsigned long max)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long max)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *pci_iomap_range(struct pci_dev *dev, int bar,
         unsigned long offset,
         unsigned long maxlen)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *pci_iomap_wc_range(struct pci_dev *dev, int bar,
            unsigned long offset,
            unsigned long maxlen)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pci_iounmap(struct pci_dev *dev, void *addr)
{ }
# 114 "../include/asm-generic/iomap.h" 2
# 2 "./arch/hexagon/include/generated/asm/iomap.h" 2
# 15 "../arch/hexagon/include/asm/io.h" 2
# 27 "../arch/hexagon/include/asm/io.h"
extern int remap_area_pages(unsigned long start, unsigned long phys_addr,
    unsigned long end, unsigned long flags);


extern void __raw_readsw(const void *addr, void *data, int wordlen);
extern void __raw_writesw(void *addr, const void *data, int wordlen);

extern void __raw_readsl(const void *addr, void *data, int wordlen);
extern void __raw_writesl(void *addr, const void *data, int wordlen);
# 47 "../arch/hexagon/include/asm/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long virt_to_phys(volatile void *address)
{
 return ((unsigned long)(address) - (0xc0000000UL) + __phys_offset);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *phys_to_virt(unsigned long address)
{
 return ((void *)((unsigned long)(address) - __phys_offset + (0xc0000000UL)));
}
# 75 "../arch/hexagon/include/asm/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 readb(const volatile void *addr)
{
 u8 val;
 asm volatile(
  "%0 = memb(%1);"
  : "=&r" (val)
  : "r" (addr)
 );
 return val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 readw(const volatile void *addr)
{
 u16 val;
 asm volatile(
  "%0 = memh(%1);"
  : "=&r" (val)
  : "r" (addr)
 );
 return val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 readl(const volatile void *addr)
{
 u32 val;
 asm volatile(
  "%0 = memw(%1);"
  : "=&r" (val)
  : "r" (addr)
 );
 return val;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void writeb(u8 data, volatile void *addr)
{
 asm volatile(
  "memb(%0) = %1;"
  :
  : "r" (addr), "r" (data)
  : "memory"
 );
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void writew(u16 data, volatile void *addr)
{
 asm volatile(
  "memh(%0) = %1;"
  :
  : "r" (addr), "r" (data)
  : "memory"
 );

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void writel(u32 data, volatile void *addr)
{
 asm volatile(
  "memw(%0) = %1;"
  :
  : "r" (addr), "r" (data)
  : "memory"
 );
}
# 173 "../arch/hexagon/include/asm/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_fromio(void *dst, const volatile void *src,
 int count)
{
 memcpy(dst, (void *) src, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_toio(volatile void *dst, const void *src,
 int count)
{
 memcpy((void *) dst, src, count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memset_io(volatile void *addr, int value,
        size_t size)
{
 memset((void *)addr, value, size);
}
# 199 "../arch/hexagon/include/asm/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 inb(unsigned long port)
{
 return readb(((void *)0xfe000000) + (port & 0xffff));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 inw(unsigned long port)
{
 return readw(((void *)0xfe000000) + (port & 0xffff));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 inl(unsigned long port)
{
 return readl(((void *)0xfe000000) + (port & 0xffff));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outb(u8 data, unsigned long port)
{
 writeb(data, ((void *)0xfe000000) + (port & 0xffff));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outw(u16 data, unsigned long port)
{
 writew(data, ((void *)0xfe000000) + (port & 0xffff));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outl(u32 data, unsigned long port)
{
 writel(data, ((void *)0xfe000000) + (port & 0xffff));
}
# 242 "../arch/hexagon/include/asm/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insb(unsigned long port, void *buffer, int count)
{
 if (count) {
  u8 *buf = buffer;
  do {
   u8 x = inb(port);
   *buf++ = x;
  } while (--count);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insw(unsigned long port, void *buffer, int count)
{
 if (count) {
  u16 *buf = buffer;
  do {
   u16 x = inw(port);
   *buf++ = x;
  } while (--count);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insl(unsigned long port, void *buffer, int count)
{
 if (count) {
  u32 *buf = buffer;
  do {
   u32 x = inl(port);
   *buf++ = x;
  } while (--count);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsb(unsigned long port, const void *buffer, int count)
{
 if (count) {
  const u8 *buf = buffer;
  do {
   outb(*buf++, port);
  } while (--count);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsw(unsigned long port, const void *buffer, int count)
{
 if (count) {
  const u16 *buf = buffer;
  do {
   outw(*buf++, port);
  } while (--count);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsl(unsigned long port, const void *buffer, int count)
{
 if (count) {
  const u32 *buf = buffer;
  do {
   outl(*buf++, port);
  } while (--count);
 }
}
# 328 "../arch/hexagon/include/asm/io.h"
# 1 "../include/asm-generic/io.h" 1
# 20 "../include/asm-generic/io.h"
# 1 "./arch/hexagon/include/generated/asm/mmiowb.h" 1
# 21 "../include/asm-generic/io.h" 2
# 94 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void log_write_mmio(u64 val, u8 width, volatile void *addr,
      unsigned long caller_addr, unsigned long caller_addr0) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void log_post_write_mmio(u64 val, u8 width, volatile void *addr,
           unsigned long caller_addr, unsigned long caller_addr0) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void log_read_mmio(u8 width, const volatile void *addr,
     unsigned long caller_addr, unsigned long caller_addr0) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void log_post_read_mmio(u64 val, u8 width, const volatile void *addr,
          unsigned long caller_addr, unsigned long caller_addr0) {}
# 401 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void readsb(const volatile void *addr, void *buffer,
     unsigned int count)
{
 if (count) {
  u8 *buf = buffer;

  do {
   u8 x = readb(addr);
   *buf++ = x;
  } while (--count);
 }
}
# 467 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void writesb(volatile void *addr, const void *buffer,
      unsigned int count)
{
 if (count) {
  const u8 *buf = buffer;

  do {
   writeb(*buf++, addr);
  } while (--count);
 }
}
# 543 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 _inb(unsigned long addr)
{
 u8 val;

 __asm__ __volatile__("": : :"memory");
 val = readb(((void *)0) + addr);
 __asm__ __volatile__("": : :"memory");
 return val;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 _inw(unsigned long addr)
{
 u16 val;

 __asm__ __volatile__("": : :"memory");
 val = (( __u16)(__le16)((__le16 )readw(((void *)0) + addr)));
 __asm__ __volatile__("": : :"memory");
 return val;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 _inl(unsigned long addr)
{
 u32 val;

 __asm__ __volatile__("": : :"memory");
 val = (( __u32)(__le32)((__le32 )readl(((void *)0) + addr)));
 __asm__ __volatile__("": : :"memory");
 return val;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void _outb(u8 value, unsigned long addr)
{
 __asm__ __volatile__("": : :"memory");
 writeb(value, ((void *)0) + addr);
 do { } while (0);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void _outw(u16 value, unsigned long addr)
{
 __asm__ __volatile__("": : :"memory");
 writew((u16 )(( __le16)(__u16)(value)), ((void *)0) + addr);
 do { } while (0);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void _outl(u32 value, unsigned long addr)
{
 __asm__ __volatile__("": : :"memory");
 writel((u32 )(( __le32)(__u32)(value)), ((void *)0) + addr);
 do { } while (0);
}


# 1 "../include/linux/logic_pio.h" 1
# 11 "../include/linux/logic_pio.h"
# 1 "../include/linux/fwnode.h" 1
# 17 "../include/linux/fwnode.h"
enum dev_dma_attr {
 DEV_DMA_NOT_SUPPORTED,
 DEV_DMA_NON_COHERENT,
 DEV_DMA_COHERENT,
};

struct fwnode_operations;
struct device;
# 47 "../include/linux/fwnode.h"
struct fwnode_handle {
 struct fwnode_handle *secondary;
 const struct fwnode_operations *ops;


 struct device *dev;
 struct list_head suppliers;
 struct list_head consumers;
 u8 flags;
};
# 67 "../include/linux/fwnode.h"
struct fwnode_link {
 struct fwnode_handle *supplier;
 struct list_head s_hook;
 struct fwnode_handle *consumer;
 struct list_head c_hook;
 u8 flags;
};







struct fwnode_endpoint {
 unsigned int port;
 unsigned int id;
 const struct fwnode_handle *local_fwnode;
};
# 102 "../include/linux/fwnode.h"
struct fwnode_reference_args {
 struct fwnode_handle *fwnode;
 unsigned int nargs;
 u64 args[8];
};
# 133 "../include/linux/fwnode.h"
struct fwnode_operations {
 struct fwnode_handle *(*get)(struct fwnode_handle *fwnode);
 void (*put)(struct fwnode_handle *fwnode);
 bool (*device_is_available)(const struct fwnode_handle *fwnode);
 const void *(*device_get_match_data)(const struct fwnode_handle *fwnode,
          const struct device *dev);
 bool (*device_dma_supported)(const struct fwnode_handle *fwnode);
 enum dev_dma_attr
 (*device_get_dma_attr)(const struct fwnode_handle *fwnode);
 bool (*property_present)(const struct fwnode_handle *fwnode,
     const char *propname);
 int (*property_read_int_array)(const struct fwnode_handle *fwnode,
           const char *propname,
           unsigned int elem_size, void *val,
           size_t nval);
 int
 (*property_read_string_array)(const struct fwnode_handle *fwnode_handle,
          const char *propname, const char **val,
          size_t nval);
 const char *(*get_name)(const struct fwnode_handle *fwnode);
 const char *(*get_name_prefix)(const struct fwnode_handle *fwnode);
 struct fwnode_handle *(*get_parent)(const struct fwnode_handle *fwnode);
 struct fwnode_handle *
 (*get_next_child_node)(const struct fwnode_handle *fwnode,
          struct fwnode_handle *child);
 struct fwnode_handle *
 (*get_named_child_node)(const struct fwnode_handle *fwnode,
    const char *name);
 int (*get_reference_args)(const struct fwnode_handle *fwnode,
      const char *prop, const char *nargs_prop,
      unsigned int nargs, unsigned int index,
      struct fwnode_reference_args *args);
 struct fwnode_handle *
 (*graph_get_next_endpoint)(const struct fwnode_handle *fwnode,
       struct fwnode_handle *prev);
 struct fwnode_handle *
 (*graph_get_remote_endpoint)(const struct fwnode_handle *fwnode);
 struct fwnode_handle *
 (*graph_get_port_parent)(struct fwnode_handle *fwnode);
 int (*graph_parse_endpoint)(const struct fwnode_handle *fwnode,
        struct fwnode_endpoint *endpoint);
 void *(*iomap)(struct fwnode_handle *fwnode, int index);
 int (*irq_get)(const struct fwnode_handle *fwnode, unsigned int index);
 int (*add_links)(struct fwnode_handle *fwnode);
};
# 199 "../include/linux/fwnode.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fwnode_init(struct fwnode_handle *fwnode,
          const struct fwnode_operations *ops)
{
 fwnode->ops = ops;
 INIT_LIST_HEAD(&fwnode->consumers);
 INIT_LIST_HEAD(&fwnode->suppliers);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fwnode_dev_initialized(struct fwnode_handle *fwnode,
       bool initialized)
{
 if (IS_ERR_OR_NULL(fwnode))
  return;

 if (initialized)
  fwnode->flags |= ((((1UL))) << (2));
 else
  fwnode->flags &= ~((((1UL))) << (2));
}

int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup,
      u8 flags);
void fwnode_links_purge(struct fwnode_handle *fwnode);
void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode);
bool fw_devlink_is_strict(void);
# 12 "../include/linux/logic_pio.h" 2

enum {
 LOGIC_PIO_INDIRECT,
 LOGIC_PIO_CPU_MMIO,
};

struct logic_pio_hwaddr {
 struct list_head list;
 struct fwnode_handle *fwnode;
 resource_size_t hw_start;
 resource_size_t io_start;
 resource_size_t size;
 unsigned long flags;

 void *hostdata;
 const struct logic_pio_host_ops *ops;
};

struct logic_pio_host_ops {
 u32 (*in)(void *hostdata, unsigned long addr, size_t dwidth);
 void (*out)(void *hostdata, unsigned long addr, u32 val,
      size_t dwidth);
 u32 (*ins)(void *hostdata, unsigned long addr, void *buffer,
     size_t dwidth, unsigned int count);
 void (*outs)(void *hostdata, unsigned long addr, const void *buffer,
       size_t dwidth, unsigned int count);
};
# 113 "../include/linux/logic_pio.h"
struct logic_pio_hwaddr *find_io_range_by_fwnode(struct fwnode_handle *fwnode);
unsigned long logic_pio_trans_hwaddr(struct fwnode_handle *fwnode,
   resource_size_t hw_addr, resource_size_t size);
int logic_pio_register_range(struct logic_pio_hwaddr *newrange);
void logic_pio_unregister_range(struct logic_pio_hwaddr *range);
resource_size_t logic_pio_to_hwaddr(unsigned long pio);
unsigned long logic_pio_trans_cpuaddr(resource_size_t hw_addr);
# 611 "../include/asm-generic/io.h" 2
# 742 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insb_p(unsigned long addr, void *buffer, unsigned int count)
{
 insb(addr, buffer, count);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insw_p(unsigned long addr, void *buffer, unsigned int count)
{
 insw(addr, buffer, count);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void insl_p(unsigned long addr, void *buffer, unsigned int count)
{
 insl(addr, buffer, count);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsb_p(unsigned long addr, const void *buffer,
      unsigned int count)
{
 outsb(addr, buffer, count);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsw_p(unsigned long addr, const void *buffer,
      unsigned int count)
{
 outsw(addr, buffer, count);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void outsl_p(unsigned long addr, const void *buffer,
      unsigned int count)
{
 outsl(addr, buffer, count);
}
# 1050 "../include/asm-generic/io.h"
void *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
       pgprot_t prot);

void *ioremap_prot(phys_addr_t phys_addr, size_t size,
      unsigned long prot);
void iounmap(volatile void *addr);
void generic_iounmap(volatile void *addr);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ioremap(phys_addr_t addr, size_t size)
{

 return ioremap_prot(addr, size, ((1<<0) | (1<<9) | (1<<10) | (0x4 << 6)));
}
# 1085 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ioremap_uc(phys_addr_t offset, size_t size)
{
 return ((void *)0);
}
# 1127 "../include/asm-generic/io.h"
extern void *ioport_map(unsigned long port, unsigned int nr);
extern void ioport_unmap(void *p);
# 1140 "../include/asm-generic/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *xlate_dev_mem_ptr(phys_addr_t addr)
{
 return ((void *)((unsigned long)(addr) - __phys_offset + (0xc0000000UL)));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
{
}
# 1205 "../include/asm-generic/io.h"
extern int devmem_is_allowed(unsigned long pfn);
# 329 "../arch/hexagon/include/asm/io.h" 2
# 15 "../include/linux/io.h" 2


struct device;
struct resource;


void __iowrite32_copy(void *to, const void *from, size_t count);


void __ioread32_copy(void *to, const void *from, size_t count);


void __iowrite64_copy(void *to, const void *from, size_t count);



int ioremap_page_range(unsigned long addr, unsigned long end,
         phys_addr_t phys_addr, pgprot_t prot);
int vmap_page_range(unsigned long addr, unsigned long end,
      phys_addr_t phys_addr, pgprot_t prot);
# 52 "../include/linux/io.h"
void * devm_ioport_map(struct device *dev, unsigned long port,
          unsigned int nr);
void devm_ioport_unmap(struct device *dev, void *addr);
# 70 "../include/linux/io.h"
void *devm_ioremap(struct device *dev, resource_size_t offset,
      resource_size_t size);
void *devm_ioremap_uc(struct device *dev, resource_size_t offset,
       resource_size_t size);
void *devm_ioremap_wc(struct device *dev, resource_size_t offset,
       resource_size_t size);
void devm_iounmap(struct device *dev, void *addr);
int check_signature(const volatile void *io_addr,
   const unsigned char *signature, int length);
void devm_ioremap_release(struct device *dev, void *res);

void *devm_memremap(struct device *dev, resource_size_t offset,
  size_t size, unsigned long flags);
void devm_memunmap(struct device *dev, void *addr);


pgprot_t __attribute__((__section__(".init.text"))) __attribute__((__cold__)) early_memremap_pgprot_adjust(resource_size_t phys_addr,
     unsigned long size, pgprot_t prot);
# 132 "../include/linux/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) arch_phys_wc_add(unsigned long base,
      unsigned long size)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_phys_wc_del(int handle)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_phys_wc_index(int handle)
{
 return -1;
}




int devm_arch_phys_wc_add(struct device *dev, unsigned long base, unsigned long size);

enum {

 MEMREMAP_WB = 1 << 0,
 MEMREMAP_WT = 1 << 1,
 MEMREMAP_WC = 1 << 2,
 MEMREMAP_ENC = 1 << 3,
 MEMREMAP_DEC = 1 << 4,
};

void *memremap(resource_size_t offset, size_t size, unsigned long flags);
void memunmap(void *addr);
# 176 "../include/linux/io.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int arch_io_reserve_memtype_wc(resource_size_t base,
          resource_size_t size)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_io_free_memtype_wc(resource_size_t base,
        resource_size_t size)
{
}


int devm_arch_io_reserve_memtype_wc(struct device *dev, resource_size_t start,
        resource_size_t size);
# 21 "../include/linux/irq.h" 2


# 1 "../arch/hexagon/include/asm/irq.h" 1
# 21 "../arch/hexagon/include/asm/irq.h"
# 1 "../include/asm-generic/irq.h" 1
# 14 "../include/asm-generic/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_canonicalize(int irq)
{
 return irq;
}
# 22 "../arch/hexagon/include/asm/irq.h" 2

struct pt_regs;
void arch_do_IRQ(struct pt_regs *);
# 24 "../include/linux/irq.h" 2

# 1 "./arch/hexagon/include/generated/asm/irq_regs.h" 1
# 1 "../include/asm-generic/irq_regs.h" 1
# 17 "../include/asm-generic/irq_regs.h"
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope___irq_regs; extern __attribute__((section(".data" ""))) __typeof__(struct pt_regs *) __irq_regs;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pt_regs *get_irq_regs(void)
{
 return ({ __this_cpu_preempt_check("read"); ({ typeof(__irq_regs) pscr_ret__; do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(__irq_regs)) { case 1: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 2: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 4: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 8: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; }); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct pt_regs *set_irq_regs(struct pt_regs *new_regs)
{
 struct pt_regs *old_regs;

 old_regs = ({ __this_cpu_preempt_check("read"); ({ typeof(__irq_regs) pscr_ret__; do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(__irq_regs)) { case 1: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 2: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 4: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; case 8: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }); }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; }); });
 ({ __this_cpu_preempt_check("write"); do { do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(__irq_regs)) { case 1: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }) = new_regs; } while (0);break; case 2: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }) = new_regs; } while (0);break; case 4: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }) = new_regs; } while (0);break; case 8: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(__irq_regs)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(__irq_regs))) *)(&(__irq_regs)); }); }) = new_regs; } while (0);break; default: __bad_size_call_parameter();break; } } while (0); });
 return old_regs;
}
# 2 "./arch/hexagon/include/generated/asm/irq_regs.h" 2
# 26 "../include/linux/irq.h" 2

struct seq_file;
struct module;
struct msi_msg;
struct irq_affinity_desc;
enum irqchip_irq_state;
# 77 "../include/linux/irq.h"
enum {
 IRQ_TYPE_NONE = 0x00000000,
 IRQ_TYPE_EDGE_RISING = 0x00000001,
 IRQ_TYPE_EDGE_FALLING = 0x00000002,
 IRQ_TYPE_EDGE_BOTH = (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING),
 IRQ_TYPE_LEVEL_HIGH = 0x00000004,
 IRQ_TYPE_LEVEL_LOW = 0x00000008,
 IRQ_TYPE_LEVEL_MASK = (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH),
 IRQ_TYPE_SENSE_MASK = 0x0000000f,
 IRQ_TYPE_DEFAULT = IRQ_TYPE_SENSE_MASK,

 IRQ_TYPE_PROBE = 0x00000010,

 IRQ_LEVEL = (1 << 8),
 IRQ_PER_CPU = (1 << 9),
 IRQ_NOPROBE = (1 << 10),
 IRQ_NOREQUEST = (1 << 11),
 IRQ_NOAUTOEN = (1 << 12),
 IRQ_NO_BALANCING = (1 << 13),
 IRQ_MOVE_PCNTXT = (1 << 14),
 IRQ_NESTED_THREAD = (1 << 15),
 IRQ_NOTHREAD = (1 << 16),
 IRQ_PER_CPU_DEVID = (1 << 17),
 IRQ_IS_POLLED = (1 << 18),
 IRQ_DISABLE_UNLAZY = (1 << 19),
 IRQ_HIDDEN = (1 << 20),
 IRQ_NO_DEBUG = (1 << 21),
};
# 123 "../include/linux/irq.h"
enum {
 IRQ_SET_MASK_OK = 0,
 IRQ_SET_MASK_OK_NOCOPY,
 IRQ_SET_MASK_OK_DONE,
};

struct msi_desc;
struct irq_domain;
# 147 "../include/linux/irq.h"
struct irq_common_data {
 unsigned int state_use_accessors;



 void *handler_data;
 struct msi_desc *msi_desc;
# 163 "../include/linux/irq.h"
};
# 179 "../include/linux/irq.h"
struct irq_data {
 u32 mask;
 unsigned int irq;
 irq_hw_number_t hwirq;
 struct irq_common_data *common;
 struct irq_chip *chip;
 struct irq_domain *domain;

 struct irq_data *parent_data;

 void *chip_data;
};
# 227 "../include/linux/irq.h"
enum {
 IRQD_TRIGGER_MASK = 0xf,
 IRQD_SETAFFINITY_PENDING = ((((1UL))) << (8)),
 IRQD_ACTIVATED = ((((1UL))) << (9)),
 IRQD_NO_BALANCING = ((((1UL))) << (10)),
 IRQD_PER_CPU = ((((1UL))) << (11)),
 IRQD_AFFINITY_SET = ((((1UL))) << (12)),
 IRQD_LEVEL = ((((1UL))) << (13)),
 IRQD_WAKEUP_STATE = ((((1UL))) << (14)),
 IRQD_MOVE_PCNTXT = ((((1UL))) << (15)),
 IRQD_IRQ_DISABLED = ((((1UL))) << (16)),
 IRQD_IRQ_MASKED = ((((1UL))) << (17)),
 IRQD_IRQ_INPROGRESS = ((((1UL))) << (18)),
 IRQD_WAKEUP_ARMED = ((((1UL))) << (19)),
 IRQD_FORWARDED_TO_VCPU = ((((1UL))) << (20)),
 IRQD_AFFINITY_MANAGED = ((((1UL))) << (21)),
 IRQD_IRQ_STARTED = ((((1UL))) << (22)),
 IRQD_MANAGED_SHUTDOWN = ((((1UL))) << (23)),
 IRQD_SINGLE_TARGET = ((((1UL))) << (24)),
 IRQD_DEFAULT_TRIGGER_SET = ((((1UL))) << (25)),
 IRQD_CAN_RESERVE = ((((1UL))) << (26)),
 IRQD_HANDLE_ENFORCE_IRQCTX = ((((1UL))) << (27)),
 IRQD_AFFINITY_ON_ACTIVATE = ((((1UL))) << (28)),
 IRQD_IRQ_ENABLED_ON_SUSPEND = ((((1UL))) << (29)),
 IRQD_RESEND_WHEN_IN_PROGRESS = ((((1UL))) << (30)),
};



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_setaffinity_pending(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_SETAFFINITY_PENDING;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_per_cpu(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_PER_CPU;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_can_balance(struct irq_data *d)
{
 return !((((d)->common)->state_use_accessors) & (IRQD_PER_CPU | IRQD_NO_BALANCING));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_affinity_was_set(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_AFFINITY_SET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_mark_affinity_was_set(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_AFFINITY_SET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_trigger_type_was_set(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_DEFAULT_TRIGGER_SET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 irqd_get_trigger_type(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_TRIGGER_MASK;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_trigger_type(struct irq_data *d, u32 type)
{
 (((d)->common)->state_use_accessors) &= ~IRQD_TRIGGER_MASK;
 (((d)->common)->state_use_accessors) |= type & IRQD_TRIGGER_MASK;
 (((d)->common)->state_use_accessors) |= IRQD_DEFAULT_TRIGGER_SET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_level_type(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_LEVEL;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_single_target(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_SINGLE_TARGET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_single_target(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_SINGLE_TARGET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_handle_enforce_irqctx(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_HANDLE_ENFORCE_IRQCTX;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_handle_enforce_irqctx(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_HANDLE_ENFORCE_IRQCTX;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_enabled_on_suspend(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_IRQ_ENABLED_ON_SUSPEND;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_wakeup_set(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_WAKEUP_STATE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_can_move_in_process_context(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_MOVE_PCNTXT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_irq_disabled(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_IRQ_DISABLED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_irq_masked(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_IRQ_MASKED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_irq_inprogress(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_IRQ_INPROGRESS;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_wakeup_armed(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_WAKEUP_ARMED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_forwarded_to_vcpu(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_FORWARDED_TO_VCPU;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_forwarded_to_vcpu(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_FORWARDED_TO_VCPU;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_clr_forwarded_to_vcpu(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) &= ~IRQD_FORWARDED_TO_VCPU;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_affinity_is_managed(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_AFFINITY_MANAGED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_activated(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_ACTIVATED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_activated(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_ACTIVATED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_clr_activated(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) &= ~IRQD_ACTIVATED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_started(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_IRQ_STARTED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_is_managed_and_shutdown(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_MANAGED_SHUTDOWN;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_can_reserve(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_CAN_RESERVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_clr_can_reserve(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) &= ~IRQD_CAN_RESERVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_can_reserve(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_CAN_RESERVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_affinity_on_activate(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_AFFINITY_ON_ACTIVATE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_affinity_on_activate(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_AFFINITY_ON_ACTIVATE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irqd_set_resend_when_in_progress(struct irq_data *d)
{
 (((d)->common)->state_use_accessors) |= IRQD_RESEND_WHEN_IN_PROGRESS;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irqd_needs_resend_when_in_progress(struct irq_data *d)
{
 return (((d)->common)->state_use_accessors) & IRQD_RESEND_WHEN_IN_PROGRESS;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
{
 return d->hwirq;
}
# 501 "../include/linux/irq.h"
struct irq_chip {
 const char *name;
 unsigned int (*irq_startup)(struct irq_data *data);
 void (*irq_shutdown)(struct irq_data *data);
 void (*irq_enable)(struct irq_data *data);
 void (*irq_disable)(struct irq_data *data);

 void (*irq_ack)(struct irq_data *data);
 void (*irq_mask)(struct irq_data *data);
 void (*irq_mask_ack)(struct irq_data *data);
 void (*irq_unmask)(struct irq_data *data);
 void (*irq_eoi)(struct irq_data *data);

 int (*irq_set_affinity)(struct irq_data *data, const struct cpumask *dest, bool force);
 int (*irq_retrigger)(struct irq_data *data);
 int (*irq_set_type)(struct irq_data *data, unsigned int flow_type);
 int (*irq_set_wake)(struct irq_data *data, unsigned int on);

 void (*irq_bus_lock)(struct irq_data *data);
 void (*irq_bus_sync_unlock)(struct irq_data *data);





 void (*irq_suspend)(struct irq_data *data);
 void (*irq_resume)(struct irq_data *data);
 void (*irq_pm_shutdown)(struct irq_data *data);

 void (*irq_calc_mask)(struct irq_data *data);

 void (*irq_print_chip)(struct irq_data *data, struct seq_file *p);
 int (*irq_request_resources)(struct irq_data *data);
 void (*irq_release_resources)(struct irq_data *data);

 void (*irq_compose_msi_msg)(struct irq_data *data, struct msi_msg *msg);
 void (*irq_write_msi_msg)(struct irq_data *data, struct msi_msg *msg);

 int (*irq_get_irqchip_state)(struct irq_data *data, enum irqchip_irq_state which, bool *state);
 int (*irq_set_irqchip_state)(struct irq_data *data, enum irqchip_irq_state which, bool state);

 int (*irq_set_vcpu_affinity)(struct irq_data *data, void *vcpu_info);

 void (*ipi_send_single)(struct irq_data *data, unsigned int cpu);
 void (*ipi_send_mask)(struct irq_data *data, const struct cpumask *dest);

 int (*irq_nmi_setup)(struct irq_data *data);
 void (*irq_nmi_teardown)(struct irq_data *data);

 unsigned long flags;
};
# 571 "../include/linux/irq.h"
enum {
 IRQCHIP_SET_TYPE_MASKED = (1 << 0),
 IRQCHIP_EOI_IF_HANDLED = (1 << 1),
 IRQCHIP_MASK_ON_SUSPEND = (1 << 2),
 IRQCHIP_ONOFFLINE_ENABLED = (1 << 3),
 IRQCHIP_SKIP_SET_WAKE = (1 << 4),
 IRQCHIP_ONESHOT_SAFE = (1 << 5),
 IRQCHIP_EOI_THREADED = (1 << 6),
 IRQCHIP_SUPPORTS_LEVEL_MSI = (1 << 7),
 IRQCHIP_SUPPORTS_NMI = (1 << 8),
 IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND = (1 << 9),
 IRQCHIP_AFFINITY_PRE_STARTUP = (1 << 10),
 IRQCHIP_IMMUTABLE = (1 << 11),
};

# 1 "../include/linux/irqdesc.h" 1
# 13 "../include/linux/irqdesc.h"
struct irq_affinity_notify;
struct proc_dir_entry;
struct module;
struct irq_desc;
struct irq_domain;
struct pt_regs;






struct irqstat {
 unsigned int cnt;



};
# 67 "../include/linux/irqdesc.h"
struct irq_desc {
 struct irq_common_data irq_common_data;
 struct irq_data irq_data;
 struct irqstat *kstat_irqs;
 irq_flow_handler_t handle_irq;
 struct irqaction *action;
 unsigned int status_use_accessors;
 unsigned int core_internal_state__do_not_mess_with_it;
 unsigned int depth;
 unsigned int wake_depth;
 unsigned int tot_count;
 unsigned int irq_count;
 unsigned long last_unhandled;
 unsigned int irqs_unhandled;
 atomic_t threads_handled;
 int threads_handled_last;
 raw_spinlock_t lock;
 struct cpumask *percpu_enabled;
 const struct cpumask *percpu_affinity;







 unsigned long threads_oneshot;
 atomic_t threads_active;
 wait_queue_head_t wait_for_threads;







 struct proc_dir_entry *dir;
# 113 "../include/linux/irqdesc.h"
 struct mutex request_mutex;
 int parent_irq;
 struct module *owner;
 const char *name;



} ;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_lock_sparse(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_unlock_sparse(void) { }
extern struct irq_desc irq_desc[512];


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int irq_desc_kstat_cpu(struct irq_desc *desc,
           unsigned int cpu)
{
 return desc->kstat_irqs ? (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(desc->kstat_irqs->cnt)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(desc->kstat_irqs->cnt))) *)(&(desc->kstat_irqs->cnt)); }); })) : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_desc *irq_data_to_desc(struct irq_data *data)
{
 return ({ void *__mptr = (void *)(data->common); _Static_assert(__builtin_types_compatible_p(typeof(*(data->common)), typeof(((struct irq_desc *)0)->irq_common_data)) || __builtin_types_compatible_p(typeof(*(data->common)), typeof(void)), "pointer type mismatch in container_of()"); ((struct irq_desc *)(__mptr - __builtin_offsetof(struct irq_desc, irq_common_data))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int irq_desc_get_irq(struct irq_desc *desc)
{
 return desc->irq_data.irq;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_data *irq_desc_get_irq_data(struct irq_desc *desc)
{
 return &desc->irq_data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_chip *irq_desc_get_chip(struct irq_desc *desc)
{
 return desc->irq_data.chip;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_desc_get_chip_data(struct irq_desc *desc)
{
 return desc->irq_data.chip_data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_desc_get_handler_data(struct irq_desc *desc)
{
 return desc->irq_common_data.handler_data;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void generic_handle_irq_desc(struct irq_desc *desc)
{
 desc->handle_irq(desc);
}

int handle_irq_desc(struct irq_desc *desc);
int generic_handle_irq(unsigned int irq);
int generic_handle_irq_safe(unsigned int irq);







int generic_handle_domain_irq(struct irq_domain *domain, unsigned int hwirq);
int generic_handle_domain_irq_safe(struct irq_domain *domain, unsigned int hwirq);
int generic_handle_domain_nmi(struct irq_domain *domain, unsigned int hwirq);



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_desc_has_action(struct irq_desc *desc)
{
 return desc && desc->action != ((void *)0);
}
# 207 "../include/linux/irqdesc.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_handler_locked(struct irq_data *data,
       irq_flow_handler_t handler)
{
 struct irq_desc *desc = irq_data_to_desc(data);

 desc->handle_irq = handler;
}
# 227 "../include/linux/irqdesc.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
irq_set_chip_handler_name_locked(struct irq_data *data,
     const struct irq_chip *chip,
     irq_flow_handler_t handler, const char *name)
{
 struct irq_desc *desc = irq_data_to_desc(data);

 desc->handle_irq = handler;
 desc->name = name;
 data->chip = (struct irq_chip *)chip;
}

bool irq_check_status_bit(unsigned int irq, unsigned int bitmask);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irq_balancing_disabled(unsigned int irq)
{
 return irq_check_status_bit(irq, (IRQ_PER_CPU | IRQ_NO_BALANCING));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irq_is_percpu(unsigned int irq)
{
 return irq_check_status_bit(irq, IRQ_PER_CPU);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool irq_is_percpu_devid(unsigned int irq)
{
 return irq_check_status_bit(irq, IRQ_PER_CPU_DEVID);
}

void __irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
        struct lock_class_key *request_class);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
        struct lock_class_key *request_class)
{
 if (1)
  __irq_set_lockdep_class(irq, lock_class, request_class);
}
# 587 "../include/linux/irq.h" 2




# 1 "./arch/hexagon/include/generated/asm/hw_irq.h" 1
# 1 "../include/asm-generic/hw_irq.h" 1
# 2 "./arch/hexagon/include/generated/asm/hw_irq.h" 2
# 592 "../include/linux/irq.h" 2
# 603 "../include/linux/irq.h"
struct irqaction;
extern int setup_percpu_irq(unsigned int irq, struct irqaction *new);
extern void remove_percpu_irq(unsigned int irq, struct irqaction *act);





extern int irq_set_affinity_locked(struct irq_data *data,
       const struct cpumask *cpumask, bool force);
extern int irq_set_vcpu_affinity(unsigned int irq, void *vcpu_info);
# 632 "../include/linux/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_move_irq(struct irq_data *data) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_move_masked_irq(struct irq_data *data) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_force_complete_move(struct irq_desc *desc) { }


extern int no_irq_affinity;




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_set_parent(int irq, int parent_irq)
{
 return 0;
}






extern void handle_level_irq(struct irq_desc *desc);
extern void handle_fasteoi_irq(struct irq_desc *desc);
extern void handle_edge_irq(struct irq_desc *desc);
extern void handle_edge_eoi_irq(struct irq_desc *desc);
extern void handle_simple_irq(struct irq_desc *desc);
extern void handle_untracked_irq(struct irq_desc *desc);
extern void handle_percpu_irq(struct irq_desc *desc);
extern void handle_percpu_devid_irq(struct irq_desc *desc);
extern void handle_bad_irq(struct irq_desc *desc);
extern void handle_nested_irq(unsigned int irq);

extern void handle_fasteoi_nmi(struct irq_desc *desc);
extern void handle_percpu_devid_fasteoi_nmi(struct irq_desc *desc);

extern int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg);
extern int irq_chip_pm_get(struct irq_data *data);
extern int irq_chip_pm_put(struct irq_data *data);

extern void handle_fasteoi_ack_irq(struct irq_desc *desc);
extern void handle_fasteoi_mask_irq(struct irq_desc *desc);
extern int irq_chip_set_parent_state(struct irq_data *data,
         enum irqchip_irq_state which,
         bool val);
extern int irq_chip_get_parent_state(struct irq_data *data,
         enum irqchip_irq_state which,
         bool *state);
extern void irq_chip_enable_parent(struct irq_data *data);
extern void irq_chip_disable_parent(struct irq_data *data);
extern void irq_chip_ack_parent(struct irq_data *data);
extern int irq_chip_retrigger_hierarchy(struct irq_data *data);
extern void irq_chip_mask_parent(struct irq_data *data);
extern void irq_chip_mask_ack_parent(struct irq_data *data);
extern void irq_chip_unmask_parent(struct irq_data *data);
extern void irq_chip_eoi_parent(struct irq_data *data);
extern int irq_chip_set_affinity_parent(struct irq_data *data,
     const struct cpumask *dest,
     bool force);
extern int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on);
extern int irq_chip_set_vcpu_affinity_parent(struct irq_data *data,
          void *vcpu_info);
extern int irq_chip_set_type_parent(struct irq_data *data, unsigned int type);
extern int irq_chip_request_resources_parent(struct irq_data *data);
extern void irq_chip_release_resources_parent(struct irq_data *data);



extern void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret);



extern int noirqdebug_setup(char *str);


extern int can_request_irq(unsigned int irq, unsigned long irqflags);


extern struct irq_chip no_irq_chip;
extern struct irq_chip dummy_irq_chip;

extern void
irq_set_chip_and_handler_name(unsigned int irq, const struct irq_chip *chip,
         irq_flow_handler_t handle, const char *name);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_chip_and_handler(unsigned int irq,
         const struct irq_chip *chip,
         irq_flow_handler_t handle)
{
 irq_set_chip_and_handler_name(irq, chip, handle, ((void *)0));
}

extern int irq_set_percpu_devid(unsigned int irq);
extern int irq_set_percpu_devid_partition(unsigned int irq,
       const struct cpumask *affinity);
extern int irq_get_percpu_devid_partition(unsigned int irq,
       struct cpumask *affinity);

extern void
__irq_set_handler(unsigned int irq, irq_flow_handler_t handle, int is_chained,
    const char *name);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
irq_set_handler(unsigned int irq, irq_flow_handler_t handle)
{
 __irq_set_handler(irq, handle, 0, ((void *)0));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
irq_set_chained_handler(unsigned int irq, irq_flow_handler_t handle)
{
 __irq_set_handler(irq, handle, 1, ((void *)0));
}






void
irq_set_chained_handler_and_data(unsigned int irq, irq_flow_handler_t handle,
     void *data);

void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_status_flags(unsigned int irq, unsigned long set)
{
 irq_modify_status(irq, 0, set);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_clear_status_flags(unsigned int irq, unsigned long clr)
{
 irq_modify_status(irq, clr, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_noprobe(unsigned int irq)
{
 irq_modify_status(irq, 0, IRQ_NOPROBE);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_probe(unsigned int irq)
{
 irq_modify_status(irq, IRQ_NOPROBE, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_nothread(unsigned int irq)
{
 irq_modify_status(irq, 0, IRQ_NOTHREAD);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_thread(unsigned int irq)
{
 irq_modify_status(irq, IRQ_NOTHREAD, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_nested_thread(unsigned int irq, bool nest)
{
 if (nest)
  irq_set_status_flags(irq, IRQ_NESTED_THREAD);
 else
  irq_clear_status_flags(irq, IRQ_NESTED_THREAD);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_set_percpu_devid_flags(unsigned int irq)
{
 irq_set_status_flags(irq,
        IRQ_NOAUTOEN | IRQ_PER_CPU | IRQ_NOTHREAD |
        IRQ_NOPROBE | IRQ_PER_CPU_DEVID);
}


extern int irq_set_chip(unsigned int irq, const struct irq_chip *chip);
extern int irq_set_handler_data(unsigned int irq, void *data);
extern int irq_set_chip_data(unsigned int irq, void *data);
extern int irq_set_irq_type(unsigned int irq, unsigned int type);
extern int irq_set_msi_desc(unsigned int irq, struct msi_desc *entry);
extern int irq_set_msi_desc_off(unsigned int irq_base, unsigned int irq_offset,
    struct msi_desc *entry);
extern struct irq_data *irq_get_irq_data(unsigned int irq);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_chip *irq_get_chip(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);
 return d ? d->chip : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_chip *irq_data_get_irq_chip(struct irq_data *d)
{
 return d->chip;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_get_chip_data(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);
 return d ? d->chip_data : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_data_get_irq_chip_data(struct irq_data *d)
{
 return d->chip_data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_get_handler_data(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);
 return d ? d->common->handler_data : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *irq_data_get_irq_handler_data(struct irq_data *d)
{
 return d->common->handler_data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct msi_desc *irq_get_msi_desc(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);
 return d ? d->common->msi_desc : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct msi_desc *irq_data_get_msi_desc(struct irq_data *d)
{
 return d->common->msi_desc;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 irq_get_trigger_type(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);
 return d ? irqd_get_trigger_type(d) : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_common_data_get_node(struct irq_common_data *d)
{
 /* !CONFIG_NUMA on this config: all memory is node 0 */
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_data_get_node(struct irq_data *d)
{
 return irq_common_data_get_node(d->common);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
const struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
{
 /* !CONFIG_SMP on this config: the only CPU is CPU 0 */
 return get_cpu_mask(0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_data_update_affinity(struct irq_data *d,
         const struct cpumask *m)
{
 /* !CONFIG_SMP: affinity is fixed at CPU 0, nothing to update */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cpumask *irq_get_affinity_mask(int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);

 return d ? irq_data_get_affinity_mask(d) : ((void *)0);
}
# 916 "../include/linux/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_data_update_effective_affinity(struct irq_data *d,
            const struct cpumask *m)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
const struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
 return irq_data_get_affinity_mask(d);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
const struct cpumask *irq_get_effective_affinity_mask(unsigned int irq)
{
 struct irq_data *d = irq_get_irq_data(irq);

 return d ? irq_data_get_effective_affinity_mask(d) : ((void *)0);
}

unsigned int arch_dynirq_lower_bound(unsigned int from);

int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
        struct module *owner,
        const struct irq_affinity_desc *affinity);

int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
      unsigned int cnt, int node, struct module *owner,
      const struct irq_affinity_desc *affinity);
# 976 "../include/linux/irq.h"
void irq_free_descs(unsigned int irq, unsigned int cnt);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_free_desc(unsigned int irq)
{
 irq_free_descs(irq, 1);
}
# 996 "../include/linux/irq.h"
struct irq_chip_regs {
 unsigned long enable;
 unsigned long disable;
 unsigned long mask;
 unsigned long ack;
 unsigned long eoi;
 unsigned long type;
 unsigned long polarity;
};
# 1019 "../include/linux/irq.h"
struct irq_chip_type {
 struct irq_chip chip;
 struct irq_chip_regs regs;
 irq_flow_handler_t handler;
 u32 type;
 u32 mask_cache_priv;
 u32 *mask_cache;
};
# 1061 "../include/linux/irq.h"
struct irq_chip_generic {
 raw_spinlock_t lock;
 void *reg_base;
 u32 (*reg_readl)(void *addr);
 void (*reg_writel)(u32 val, void *addr);
 void (*suspend)(struct irq_chip_generic *gc);
 void (*resume)(struct irq_chip_generic *gc);
 unsigned int irq_base;
 unsigned int irq_cnt;
 u32 mask_cache;
 u32 type_cache;
 u32 polarity_cache;
 u32 wake_enabled;
 u32 wake_active;
 unsigned int num_ct;
 void *private;
 unsigned long installed;
 unsigned long unused;
 struct irq_domain *domain;
 struct list_head list;
 struct irq_chip_type chip_types[];
};
# 1094 "../include/linux/irq.h"
enum irq_gc_flags {
 IRQ_GC_INIT_MASK_CACHE = 1 << 0,
 IRQ_GC_INIT_NESTED_LOCK = 1 << 1,
 IRQ_GC_MASK_CACHE_PER_TYPE = 1 << 2,
 IRQ_GC_NO_MASK = 1 << 3,
 IRQ_GC_BE_IO = 1 << 4,
};
# 1112 "../include/linux/irq.h"
struct irq_domain_chip_generic {
 unsigned int irqs_per_chip;
 unsigned int num_chips;
 unsigned int irq_flags_to_clear;
 unsigned int irq_flags_to_set;
 enum irq_gc_flags gc_flags;
 void (*exit)(struct irq_chip_generic *gc);
 struct irq_chip_generic *gc[];
};
# 1137 "../include/linux/irq.h"
struct irq_domain_chip_generic_info {
 const char *name;
 irq_flow_handler_t handler;
 unsigned int irqs_per_chip;
 unsigned int num_ct;
 unsigned int irq_flags_to_clear;
 unsigned int irq_flags_to_set;
 enum irq_gc_flags gc_flags;
 int (*init)(struct irq_chip_generic *gc);
 void (*exit)(struct irq_chip_generic *gc);
};


void irq_gc_noop(struct irq_data *d);
void irq_gc_mask_disable_reg(struct irq_data *d);
void irq_gc_mask_set_bit(struct irq_data *d);
void irq_gc_mask_clr_bit(struct irq_data *d);
void irq_gc_unmask_enable_reg(struct irq_data *d);
void irq_gc_ack_set_bit(struct irq_data *d);
void irq_gc_ack_clr_bit(struct irq_data *d);
void irq_gc_mask_disable_and_ack_set(struct irq_data *d);
void irq_gc_eoi(struct irq_data *d);
int irq_gc_set_wake(struct irq_data *d, unsigned int on);


int irq_map_generic_chip(struct irq_domain *d, unsigned int virq,
    irq_hw_number_t hw_irq);
void irq_unmap_generic_chip(struct irq_domain *d, unsigned int virq);
struct irq_chip_generic *
irq_alloc_generic_chip(const char *name, int nr_ct, unsigned int irq_base,
         void *reg_base, irq_flow_handler_t handler);
void irq_setup_generic_chip(struct irq_chip_generic *gc, u32 msk,
       enum irq_gc_flags flags, unsigned int clr,
       unsigned int set);
int irq_setup_alt_chip(struct irq_data *d, unsigned int type);
void irq_remove_generic_chip(struct irq_chip_generic *gc, u32 msk,
        unsigned int clr, unsigned int set);

struct irq_chip_generic *
devm_irq_alloc_generic_chip(struct device *dev, const char *name, int num_ct,
       unsigned int irq_base, void *reg_base,
       irq_flow_handler_t handler);
int devm_irq_setup_generic_chip(struct device *dev, struct irq_chip_generic *gc,
    u32 msk, enum irq_gc_flags flags,
    unsigned int clr, unsigned int set);

struct irq_chip_generic *irq_get_domain_generic_chip(struct irq_domain *d, unsigned int hw_irq);


int irq_domain_alloc_generic_chips(struct irq_domain *d,
       const struct irq_domain_chip_generic_info *info);
void irq_domain_remove_generic_chips(struct irq_domain *d);
# 1199 "../include/linux/irq.h"
int __irq_alloc_domain_generic_chips(struct irq_domain *d, int irqs_per_chip,
         int num_ct, const char *name,
         irq_flow_handler_t handler,
         unsigned int clr, unsigned int set,
         enum irq_gc_flags flags);
# 1213 "../include/linux/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_free_generic_chip(struct irq_chip_generic *gc)
{
 kfree(gc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_destroy_generic_chip(struct irq_chip_generic *gc,
         u32 msk, unsigned int clr,
         unsigned int set)
{
 irq_remove_generic_chip(gc, msk, clr, set);
 irq_free_generic_chip(gc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_chip_type *irq_data_get_chip_type(struct irq_data *d)
{
 return ({ void *__mptr = (void *)(d->chip); _Static_assert(__builtin_types_compatible_p(typeof(*(d->chip)), typeof(((struct irq_chip_type *)0)->chip)) || __builtin_types_compatible_p(typeof(*(d->chip)), typeof(void)), "pointer type mismatch in container_of()"); ((struct irq_chip_type *)(__mptr - __builtin_offsetof(struct irq_chip_type, chip))); });
}
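/* The irq_data_get_chip_type() helper above is an expanded container_of():
 * it recovers the enclosing struct irq_chip_type from a pointer to its
 * embedded struct irq_chip.  A minimal standalone sketch of the same
 * pointer arithmetic, with hypothetical inner/outer structs and without
 * the kernel's _Static_assert type check: */

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for irq_chip embedded in irq_chip_type. */
struct inner { int x; };
struct outer {
	struct inner member;
	int extra;
};

/* Core of container_of(): subtract the member's offset from the member
 * pointer to get back to the enclosing struct.  The kernel macro wraps
 * this same arithmetic in a _Static_assert type-compatibility check. */
#define container_of_sketch(ptr, type, field) \
	((type *)((char *)(ptr) - offsetof(type, field)))

static struct outer *outer_of(struct inner *p)
{
	return container_of_sketch(p, struct outer, member);
}
```
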
# 1244 "../include/linux/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_gc_lock(struct irq_chip_generic *gc) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_gc_unlock(struct irq_chip_generic *gc) { }
# 1258 "../include/linux/irq.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void irq_reg_writel(struct irq_chip_generic *gc,
      u32 val, int reg_offset)
{
 if (gc->reg_writel)
  gc->reg_writel(val, gc->reg_base + reg_offset);
 else
  writel(val, gc->reg_base + reg_offset);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 irq_reg_readl(struct irq_chip_generic *gc,
    int reg_offset)
{
 if (gc->reg_readl)
  return gc->reg_readl(gc->reg_base + reg_offset);
 else
  return readl(gc->reg_base + reg_offset);
}
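/* irq_reg_writel()/irq_reg_readl() above dispatch through the generic
 * chip's optional accessor and fall back to plain writel()/readl().
 * The same dispatch pattern, sketched against an in-memory register
 * file -- all names here are hypothetical, and the real helpers add
 * reg_offset as a byte offset to an MMIO base: */

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for struct irq_chip_generic's accessor hook. */
struct fake_gc {
	uint32_t *reg_base;                    /* stands in for MMIO base */
	uint32_t (*reg_readl)(uint32_t *addr); /* optional override */
};

/* Mirror of irq_reg_readl(): use the override when one is installed,
 * otherwise the default access (readl() in the kernel). */
static uint32_t fake_reg_readl(struct fake_gc *gc, int reg_offset)
{
	uint32_t *addr = gc->reg_base + reg_offset;
	return gc->reg_readl ? gc->reg_readl(addr) : *addr;
}

/* Example override, as a chip with unusual register access might install. */
static uint32_t doubled_readl(uint32_t *addr)
{
	return *addr * 2;
}
```
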

struct irq_matrix;
struct irq_matrix *irq_alloc_matrix(unsigned int matrix_bits,
        unsigned int alloc_start,
        unsigned int alloc_end);
void irq_matrix_online(struct irq_matrix *m);
void irq_matrix_offline(struct irq_matrix *m);
void irq_matrix_assign_system(struct irq_matrix *m, unsigned int bit, bool replace);
int irq_matrix_reserve_managed(struct irq_matrix *m, const struct cpumask *msk);
void irq_matrix_remove_managed(struct irq_matrix *m, const struct cpumask *msk);
int irq_matrix_alloc_managed(struct irq_matrix *m, const struct cpumask *msk,
    unsigned int *mapped_cpu);
void irq_matrix_reserve(struct irq_matrix *m);
void irq_matrix_remove_reserved(struct irq_matrix *m);
int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk,
       bool reserved, unsigned int *mapped_cpu);
void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
       unsigned int bit, bool managed);
void irq_matrix_assign(struct irq_matrix *m, unsigned int bit);
unsigned int irq_matrix_available(struct irq_matrix *m, bool cpudown);
unsigned int irq_matrix_allocated(struct irq_matrix *m);
unsigned int irq_matrix_reserved(struct irq_matrix *m);
void irq_matrix_debug_show(struct seq_file *sf, struct irq_matrix *m, int ind);



irq_hw_number_t ipi_get_hwirq(unsigned int irq, unsigned int cpu);
int __ipi_send_single(struct irq_desc *desc, unsigned int cpu);
int __ipi_send_mask(struct irq_desc *desc, const struct cpumask *dest);
int ipi_send_single(unsigned int virq, unsigned int cpu);
int ipi_send_mask(unsigned int virq, const struct cpumask *dest);

void ipi_mux_process(void);
int ipi_mux_create(unsigned int nr_ipi, void (*mux_send)(unsigned int cpu));
# 18 "../include/asm-generic/hardirq.h" 2


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ack_bad_irq(unsigned int irq)
{
 ({ do {} while (0); _printk("\001" "2" "unexpected IRQ trap at vector %02x\n", irq); });
}
# 2 "./arch/hexagon/include/generated/asm/hardirq.h" 2
# 12 "../include/linux/hardirq.h" 2

extern void synchronize_irq(unsigned int irq);
extern bool synchronize_hardirq(unsigned int irq);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __rcu_irq_enter_check_tick(void) { }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void rcu_irq_enter_check_tick(void)
{
 if (context_tracking_enabled())
  __rcu_irq_enter_check_tick();
}
# 55 "../include/linux/hardirq.h"
void irq_enter(void);



void irq_enter_rcu(void);
# 83 "../include/linux/hardirq.h"
void irq_exit(void);
void irq_exit_rcu(void);
# 13 "../include/linux/highmem.h" 2

# 1 "../include/linux/highmem-internal.h" 1
# 20 "../include/linux/highmem-internal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmap_local_fork(struct task_struct *tsk) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmap_assert_nomap(void) { }
# 157 "../include/linux/highmem-internal.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *kmap_to_page(void *addr)
{
 return (mem_map + ((((((unsigned long)(addr) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap(struct page *page)
{
 do { do { } while (0); } while (0);
 return lowmem_page_address(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kunmap_high(struct page *page) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmap_flush_unused(void) { }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kunmap(struct page *page)
{
 /* !CONFIG_HIGHMEM: kmap() returned the page's lowmem address, nothing to unmap */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_page(struct page *page)
{
 return lowmem_page_address(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_folio(struct folio *folio, size_t offset)
{
 return lowmem_page_address(&folio->page) + offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_page_prot(struct page *page, pgprot_t prot)
{
 return kmap_local_page(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_pfn(unsigned long pfn)
{
 return kmap_local_page((mem_map + ((pfn) - (__phys_offset >> 14))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kunmap_local(const void *addr)
{
 /* !CONFIG_HIGHMEM: local kmaps are plain lowmem addresses, nothing to undo */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_atomic(struct page *page)
{
 if (0)
  migrate_disable();
 else
  __asm__ __volatile__("": : :"memory");
 pagefault_disable();
 return lowmem_page_address(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
 return kmap_atomic(page);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_atomic_pfn(unsigned long pfn)
{
 return kmap_atomic((mem_map + ((pfn) - (__phys_offset >> 14))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __kunmap_atomic(const void *addr)
{
 pagefault_enable();
 if (0)
  migrate_enable();
 else
  __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long nr_free_highpages(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long totalhigh_pages(void) { return 0; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_kmap_addr(const void *x)
{
 return false;
}
# 15 "../include/linux/highmem.h" 2
# 37 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap(struct page *page);
# 46 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kunmap(struct page *page);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *kmap_to_page(void *addr);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kmap_flush_unused(void);
# 96 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_page(struct page *page);
# 132 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_local_folio(struct folio *folio, size_t offset);
# 179 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kmap_atomic(struct page *page);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long nr_free_highpages(void);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long totalhigh_pages(void);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flush_kernel_vmap_range(void *vaddr, int size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void invalidate_kernel_vmap_range(void *vaddr, int size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_user_highpage(struct page *page, unsigned long vaddr)
{
 void *addr = kmap_local_page(page);
 clear_page(addr);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_248(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((addr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((addr)), typeof(struct page *))))) __compiletime_assert_248(); } while (0); __kunmap_local(addr); } while (0);
}
# 223 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
       unsigned long vaddr)
{
 struct folio *folio;

 folio = ({ ; ({ struct alloc_tag * __attribute__((__unused__)) _old = ((void *)0); typeof(folio_alloc_noprof(((((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))) | (( gfp_t)((((1UL))) << (___GFP_HARDWALL_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_HIGHMEM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT))) | (( gfp_t)0)), 0)) _res = folio_alloc_noprof(((((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))) | (( gfp_t)((((1UL))) << (___GFP_HARDWALL_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_HIGHMEM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_MOVABLE_BIT))) | (( gfp_t)0)), 0); do {} while (0); _res; }); });
 if (folio)
  clear_user_highpage(&folio->page, vaddr);

 return folio;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_highpage(struct page *page)
{
 void *kaddr = kmap_local_page(page);
 clear_page(kaddr);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_249(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((kaddr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((kaddr)), typeof(struct page *))))) __compiletime_assert_249(); } while (0); __kunmap_local(kaddr); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void clear_highpage_kasan_tagged(struct page *page)
{
 void *kaddr = kmap_local_page(page);

 clear_page(kasan_reset_tag(kaddr));
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_250(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((kaddr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((kaddr)), typeof(struct page *))))) __compiletime_assert_250(); } while (0); __kunmap_local(kaddr); } while (0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tag_clear_highpage(struct page *page)
{
}
# 268 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zero_user_segments(struct page *page,
  unsigned start1, unsigned end1,
  unsigned start2, unsigned end2)
{
 void *kaddr = kmap_local_page(page);
 unsigned int i;

 do { if (__builtin_expect(!!(end1 > page_size(page) || end2 > page_size(page)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 275, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 if (end1 > start1)
  memset(kaddr + start1, 0, end1 - start1);

 if (end2 > start2)
  memset(kaddr + start2, 0, end2 - start2);

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_251(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((kaddr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((kaddr)), typeof(struct page *))))) __compiletime_assert_251(); } while (0); __kunmap_local(kaddr); } while (0);
 for (i = 0; i < compound_nr(page); i++)
  flush_dcache_page(page + i);
}
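/* zero_user_segments() above zeroes up to two independent byte ranges
 * of a mapped page, skipping empty ranges and BUG()-ing when an end
 * exceeds the page size.  The same logic against a plain buffer --
 * the buffer and BUF_SIZE are illustrative stand-ins for the kmapped
 * page and page_size(): */

```c
#include <assert.h>
#include <string.h>

#define BUF_SIZE 16 /* illustrative stand-in for page_size(page) */

/* Mirror of zero_user_segments(): zero [start1,end1) and [start2,end2),
 * skipping any range where end <= start. */
static void zero_segments(char *buf,
			  unsigned start1, unsigned end1,
			  unsigned start2, unsigned end2)
{
	assert(end1 <= BUF_SIZE && end2 <= BUF_SIZE); /* kernel BUG()s here */
	if (end1 > start1)
		memset(buf + start1, 0, end1 - start1);
	if (end2 > start2)
		memset(buf + start2, 0, end2 - start2);
}
```
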


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zero_user_segment(struct page *page,
 unsigned start, unsigned end)
{
 zero_user_segments(page, start, end, 0, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zero_user(struct page *page,
 unsigned start, unsigned size)
{
 zero_user_segments(page, start, start + size, 0, 0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_user_highpage(struct page *to, struct page *from,
 unsigned long vaddr, struct vm_area_struct *vma)
{
 char *vfrom, *vto;

 vfrom = kmap_local_page(from);
 vto = kmap_local_page(to);
 memcpy((vto), (vfrom), (1UL << 14));
 kmsan_unpoison_memory(lowmem_page_address(to), (1UL << 14));
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_252(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((vto), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((vto)), typeof(struct page *))))) __compiletime_assert_252(); } while (0); __kunmap_local(vto); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_253(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((vfrom), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((vfrom)), typeof(struct page *))))) __compiletime_assert_253(); } while (0); __kunmap_local(vfrom); } while (0);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_highpage(struct page *to, struct page *from)
{
 char *vfrom, *vto;

 vfrom = kmap_local_page(from);
 vto = kmap_local_page(to);
 memcpy((vto), (vfrom), (1UL << 14));
 kmsan_copy_page_meta(to, from);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_254(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((vto), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((vto)), typeof(struct page *))))) __compiletime_assert_254(); } while (0); __kunmap_local(vto); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_255(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((vfrom), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((vfrom)), typeof(struct page *))))) __compiletime_assert_255(); } while (0); __kunmap_local(vfrom); } while (0);
}
# 380 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_mc_user_highpage(struct page *to, struct page *from,
     unsigned long vaddr, struct vm_area_struct *vma)
{
 copy_user_highpage(to, from, vaddr, vma);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_mc_highpage(struct page *to, struct page *from)
{
 copy_highpage(to, from);
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_page(struct page *dst_page, size_t dst_off,
          struct page *src_page, size_t src_off,
          size_t len)
{
 char *dst = kmap_local_page(dst_page);
 char *src = kmap_local_page(src_page);

 do { if (__builtin_expect(!!(dst_off + len > (1UL << 14) || src_off + len > (1UL << 14)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 401, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 memcpy(dst + dst_off, src + src_off, len);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_256(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((src), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((src)), typeof(struct page *))))) __compiletime_assert_256(); } while (0); __kunmap_local(src); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_257(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((dst), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((dst)), typeof(struct page *))))) __compiletime_assert_257(); } while (0); __kunmap_local(dst); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memset_page(struct page *page, size_t offset, int val,
          size_t len)
{
 char *addr = kmap_local_page(page);

 do { if (__builtin_expect(!!(offset + len > (1UL << 14)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 412, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 memset(addr + offset, val, len);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_258(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((addr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((addr)), typeof(struct page *))))) __compiletime_assert_258(); } while (0); __kunmap_local(addr); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_from_page(char *to, struct page *page,
        size_t offset, size_t len)
{
 char *from = kmap_local_page(page);

 do { if (__builtin_expect(!!(offset + len > (1UL << 14)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 422, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 memcpy(to, from + offset, len);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_259(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((from), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((from)), typeof(struct page *))))) __compiletime_assert_259(); } while (0); __kunmap_local(from); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_to_page(struct page *page, size_t offset,
      const char *from, size_t len)
{
 char *to = kmap_local_page(page);

 do { if (__builtin_expect(!!(offset + len > (1UL << 14)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 432, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 memcpy(to + offset, from, len);
 flush_dcache_page(page);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_260(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((to), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((to)), typeof(struct page *))))) __compiletime_assert_260(); } while (0); __kunmap_local(to); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memzero_page(struct page *page, size_t offset, size_t len)
{
 char *addr = kmap_local_page(page);

 do { if (__builtin_expect(!!(offset + len > (1UL << 14)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 442, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 memset(addr + offset, 0, len);
 flush_dcache_page(page);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_261(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((addr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((addr)), typeof(struct page *))))) __compiletime_assert_261(); } while (0); __kunmap_local(addr); } while (0);
}
# 455 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_from_folio(char *to, struct folio *folio,
  size_t offset, size_t len)
{
 do { if (__builtin_expect(!!(offset + len > folio_size(folio)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 458, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 do {
  const char *from = kmap_local_folio(folio, offset);
  size_t chunk = len;

  if (folio_test_highmem(folio) &&
      chunk > (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1))))
   chunk = (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1)));
  memcpy(to, from, chunk);
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_262(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((from), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((from)), typeof(struct page *))))) __compiletime_assert_262(); } while (0); __kunmap_local(from); } while (0);

  to += chunk;
  offset += chunk;
  len -= chunk;
 } while (len > 0);
}
# 483 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_to_folio(struct folio *folio, size_t offset,
  const char *from, size_t len)
{
 do { if (__builtin_expect(!!(offset + len > folio_size(folio)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 486, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 do {
  char *to = kmap_local_folio(folio, offset);
  size_t chunk = len;

  if (folio_test_highmem(folio) &&
      chunk > (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1))))
   chunk = (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1)));
  memcpy(to, from, chunk);
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_263(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((to), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((to)), typeof(struct page *))))) __compiletime_assert_263(); } while (0); __kunmap_local(to); } while (0);

  from += chunk;
  offset += chunk;
  len -= chunk;
 } while (len > 0);

 flush_dcache_folio(folio);
}
# 520 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) void *folio_zero_tail(struct folio *folio,
  size_t offset, void *kaddr)
{
 size_t len = folio_size(folio) - offset;

 if (folio_test_highmem(folio)) {
  size_t max = (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1)));

  while (len > max) {
   memset(kaddr, 0, max);
   do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_264(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((kaddr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((kaddr)), typeof(struct page *))))) __compiletime_assert_264(); } while (0); __kunmap_local(kaddr); } while (0);
   len -= max;
   offset += max;
   max = (1UL << 14);
   kaddr = kmap_local_folio(folio, offset);
  }
 }

 memset(kaddr, 0, len);
 flush_dcache_folio(folio);

 return kaddr;
}
# 556 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_fill_tail(struct folio *folio, size_t offset,
  const char *from, size_t len)
{
 char *to = kmap_local_folio(folio, offset);

 do { if (__builtin_expect(!!(offset + len > folio_size(folio)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/highmem.h", 561, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);

 if (folio_test_highmem(folio)) {
  size_t max = (1UL << 14) - ((unsigned long)(offset) & ~(~((1 << 14) - 1)));

  while (len > max) {
   memcpy(to, from, max);
   do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_265(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((to), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((to)), typeof(struct page *))))) __compiletime_assert_265(); } while (0); __kunmap_local(to); } while (0);
   len -= max;
   from += max;
   offset += max;
   max = (1UL << 14);
   to = kmap_local_folio(folio, offset);
  }
 }

 memcpy(to, from, len);
 to = folio_zero_tail(folio, offset + len, to + len);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_266(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((to), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((to)), typeof(struct page *))))) __compiletime_assert_266(); } while (0); __kunmap_local(to); } while (0);
}
# 594 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t memcpy_from_file_folio(char *to, struct folio *folio,
  loff_t pos, size_t len)
{
 size_t offset = ((unsigned long)(pos) & (folio_size(folio) - 1));
 char *from = kmap_local_folio(folio, offset);

 if (folio_test_highmem(folio)) {
  offset = ((unsigned long)(offset) & ~(~((1 << 14) - 1)));
  len = ({ size_t __UNIQUE_ID_x_267 = (len); size_t __UNIQUE_ID_y_268 = ((1UL << 14) - offset); ((__UNIQUE_ID_x_267) < (__UNIQUE_ID_y_268) ? (__UNIQUE_ID_x_267) : (__UNIQUE_ID_y_268)); });
 } else
  len = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((len) - (folio_size(folio) - offset)) * 0l)) : (int *)8))), ((len) < (folio_size(folio) - offset) ? (len) : (folio_size(folio) - offset)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(len))(-1)) < ( typeof(len))1)) * 0l)) : (int *)8))), (((typeof(len))(-1)) < ( typeof(len))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(folio_size(folio) - offset))(-1)) < ( typeof(folio_size(folio) - offset))1)) * 0l)) : (int *)8))), (((typeof(folio_size(folio) - offset))(-1)) < ( typeof(folio_size(folio) - offset))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((len) + 0))(-1)) < ( typeof((len) + 0))1)) * 0l)) : (int *)8))), (((typeof((len) + 0))(-1)) < ( typeof((len) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((folio_size(folio) - offset) + 0))(-1)) < ( typeof((folio_size(folio) - offset) + 0))1)) * 0l)) : (int *)8))), (((typeof((folio_size(folio) - offset) + 0))(-1)) < ( typeof((folio_size(folio) - offset) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(len) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(len))(-1)) < ( typeof(len))1)) * 0l)) : (int *)8))), (((typeof(len))(-1)) < ( typeof(len))1), 0), len, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(folio_size(folio) - offset) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? 
((void *)((long)((((typeof(folio_size(folio) - offset))(-1)) < ( typeof(folio_size(folio) - offset))1)) * 0l)) : (int *)8))), (((typeof(folio_size(folio) - offset))(-1)) < ( typeof(folio_size(folio) - offset))1), 0), folio_size(folio) - offset, -1) >= 0)), "min" "(" "len" ", " "folio_size(folio) - offset" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_269 = (len); __auto_type __UNIQUE_ID_y_270 = (folio_size(folio) - offset); ((__UNIQUE_ID_x_269) < (__UNIQUE_ID_y_270) ? (__UNIQUE_ID_x_269) : (__UNIQUE_ID_y_270)); }); }));

 memcpy(to, from, len);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_271(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((from), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((from)), typeof(struct page *))))) __compiletime_assert_271(); } while (0); __kunmap_local(from); } while (0);

 return len;
}
# 620 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_zero_segments(struct folio *folio,
  size_t start1, size_t xend1, size_t start2, size_t xend2)
{
 zero_user_segments(&folio->page, start1, xend1, start2, xend2);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_zero_segment(struct folio *folio,
  size_t start, size_t xend)
{
 zero_user_segments(&folio->page, start, xend, 0, 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_zero_range(struct folio *folio,
  size_t start, size_t length)
{
 zero_user_segments(&folio->page, start, start + length, 0, 0);
}
# 659 "../include/linux/highmem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_release_kmap(struct folio *folio, void *addr)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_272(void) __attribute__((__error__("BUILD_BUG_ON failed: " "__same_type((addr), struct page *)"))); if (!(!(__builtin_types_compatible_p(typeof((addr)), typeof(struct page *))))) __compiletime_assert_272(); } while (0); __kunmap_local(addr); } while (0);
 folio_put(folio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unmap_and_put_page(struct page *page, void *addr)
{
 folio_release_kmap((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))), addr);
}
# 11 "../include/linux/bvec.h" 2






struct page;
# 31 "../include/linux/bvec.h"
struct bio_vec {
 struct page *bv_page;
 unsigned int bv_len;
 unsigned int bv_offset;
};
# 44 "../include/linux/bvec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bvec_set_page(struct bio_vec *bv, struct page *page,
  unsigned int len, unsigned int offset)
{
 bv->bv_page = page;
 bv->bv_len = len;
 bv->bv_offset = offset;
}
# 59 "../include/linux/bvec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bvec_set_folio(struct bio_vec *bv, struct folio *folio,
  unsigned int len, unsigned int offset)
{
 bvec_set_page(bv, &folio->page, len, offset);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bvec_set_virt(struct bio_vec *bv, void *vaddr,
  unsigned int len)
{
 bvec_set_page(bv, (mem_map + ((((((unsigned long)(vaddr) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14))), len, ((unsigned long)(vaddr) & ~(~((1 << 14) - 1))));
}

struct bvec_iter {
 sector_t bi_sector;

 unsigned int bi_size;

 unsigned int bi_idx;

 unsigned int bi_bvec_done;

} __attribute__((__packed__)) __attribute__((__aligned__(4)));

struct bvec_iter_all {
 struct bio_vec bv;
 int idx;
 unsigned done;
};
# 140 "../include/linux/bvec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bvec_iter_advance(const struct bio_vec *bv,
  struct bvec_iter *iter, unsigned bytes)
{
 unsigned int idx = iter->bi_idx;

 if (({ bool __ret_do_once = !!(bytes > iter->bi_size); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bvec.h", 146, 9, "Attempted to advance past end of bvec iter\n"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); })) {

  iter->bi_size = 0;
  return false;
 }

 iter->bi_size -= bytes;
 bytes += iter->bi_bvec_done;

 while (bytes && bytes >= bv[idx].bv_len) {
  bytes -= bv[idx].bv_len;
  idx++;
 }

 iter->bi_idx = idx;
 iter->bi_bvec_done = bytes;
 return true;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bvec_iter_advance_single(const struct bio_vec *bv,
    struct bvec_iter *iter, unsigned int bytes)
{
 unsigned int done = iter->bi_bvec_done + bytes;

 if (done == bv[iter->bi_idx].bv_len) {
  done = 0;
  iter->bi_idx++;
 }
 iter->bi_bvec_done = done;
 iter->bi_size -= bytes;
}
# 196 "../include/linux/bvec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct bio_vec *bvec_init_iter_all(struct bvec_iter_all *iter_all)
{
 iter_all->done = 0;
 iter_all->idx = 0;

 return &iter_all->bv;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bvec_advance(const struct bio_vec *bvec,
    struct bvec_iter_all *iter_all)
{
 struct bio_vec *bv = &iter_all->bv;

 if (iter_all->done) {
  bv->bv_page++;
  bv->bv_offset = 0;
 } else {
  bv->bv_page = bvec->bv_page + (bvec->bv_offset >> 14);
  bv->bv_offset = bvec->bv_offset & ~(~((1 << 14) - 1));
 }
 bv->bv_len = ({ unsigned int __UNIQUE_ID_x_273 = ((1UL << 14) - bv->bv_offset); unsigned int __UNIQUE_ID_y_274 = (bvec->bv_len - iter_all->done); ((__UNIQUE_ID_x_273) < (__UNIQUE_ID_y_274) ? (__UNIQUE_ID_x_273) : (__UNIQUE_ID_y_274)); });

 iter_all->done += bv->bv_len;

 if (iter_all->done == bvec->bv_len) {
  iter_all->idx++;
  iter_all->done = 0;
 }
}
# 233 "../include/linux/bvec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *bvec_kmap_local(struct bio_vec *bvec)
{
 return kmap_local_page(bvec->bv_page) + bvec->bv_offset;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_from_bvec(char *to, struct bio_vec *bvec)
{
 memcpy_from_page(to, bvec->bv_page, bvec->bv_offset, bvec->bv_len);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcpy_to_bvec(struct bio_vec *bvec, const char *from)
{
 memcpy_to_page(bvec->bv_page, bvec->bv_offset, from, bvec->bv_len);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memzero_bvec(struct bio_vec *bvec)
{
 memzero_page(bvec->bv_page, bvec->bv_offset, bvec->bv_len);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *bvec_virt(struct bio_vec *bvec)
{
 ({ bool __ret_do_once = !!(PageHighMem(bvec->bv_page)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bvec.h", 279, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return lowmem_page_address(bvec->bv_page) + bvec->bv_offset;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) phys_addr_t bvec_phys(const struct bio_vec *bvec)
{





 return ((phys_addr_t)(((unsigned long)((bvec->bv_page) - mem_map) + (__phys_offset >> 14))) << 14) + bvec->bv_offset;
}
# 18 "../include/linux/skbuff.h" 2






# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 25 "../include/linux/skbuff.h" 2

# 1 "../include/net/checksum.h" 1
# 19 "../include/net/checksum.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 20 "../include/net/checksum.h" 2

# 1 "../arch/hexagon/include/asm/checksum.h" 1
# 10 "../arch/hexagon/include/asm/checksum.h"
unsigned int do_csum(const void *voidptr, int len);






__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
     __u32 len, __u8 proto, __wsum sum);


__sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
     __u32 len, __u8 proto, __wsum sum);

# 1 "../include/asm-generic/checksum.h" 1
# 19 "../include/asm-generic/checksum.h"
extern __wsum csum_partial(const void *buff, int len, __wsum sum);






extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __sum16 csum_fold(__wsum csum)
{
 u32 sum = ( u32)csum;
 return ( __sum16)((~sum - ror32(sum, 16)) >> 16);
}
# 63 "../include/asm-generic/checksum.h"
extern __sum16 ip_compute_csum(const void *buff, int len);
# 25 "../arch/hexagon/include/asm/checksum.h" 2
# 22 "../include/net/checksum.h" 2





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
__wsum csum_and_copy_from_user (const void *src, void *dst,
          int len)
{
 if (copy_from_user(dst, src, len))
  return 0;
 return csum_partial(dst, len, ~0U);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum csum_and_copy_to_user
(const void *src, void *dst, int len)
{
 __wsum sum = csum_partial(src, len, ~0U);

 if (copy_to_user(dst, src, len) == 0)
  return sum;
 return 0;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum
csum_partial_copy_nocheck(const void *src, void *dst, int len)
{
 memcpy(dst, src, len);
 return csum_partial(dst, len, 0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum csum_add(__wsum csum, __wsum addend)
{
 u32 res = ( u32)csum;
 res += ( u32)addend;
 return ( __wsum)(res + (res < ( u32)addend));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum csum_sub(__wsum csum, __wsum addend)
{
 return csum_add(csum, ~addend);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __sum16 csum16_add(__sum16 csum, __be16 addend)
{
 u16 res = ( u16)csum;

 res += ( u16)addend;
 return ( __sum16)(res + (res < ( u16)addend));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __sum16 csum16_sub(__sum16 csum, __be16 addend)
{
 return csum16_add(csum, ~addend);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum csum_shift(__wsum sum, int offset)
{

 if (offset & 1)
  return ( __wsum)ror32(( u32)sum, 8);
 return sum;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum
csum_block_add(__wsum csum, __wsum csum2, int offset)
{
 return csum_add(csum, csum_shift(csum2, offset));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum
csum_block_add_ext(__wsum csum, __wsum csum2, int offset, int len)
{
 return csum_block_add(csum, csum2, offset);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum
csum_block_sub(__wsum csum, __wsum csum2, int offset)
{
 return csum_block_add(csum, ~csum2, offset);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum csum_unfold(__sum16 n)
{
 return ( __wsum)n;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
__wsum csum_partial_ext(const void *buff, int len, __wsum sum)
{
 return csum_partial(buff, len, sum);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void csum_replace_by_diff(__sum16 *sum, __wsum diff)
{
 *sum = csum_fold(csum_add(diff, ~csum_unfold(*sum)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
{
 __wsum tmp = csum_sub(~csum_unfold(*sum), ( __wsum)from);

 *sum = csum_fold(csum_add(tmp, ( __wsum)to));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
{
 *sum = ~csum16_add(csum16_sub(~(*sum), old), new);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void csum_replace(__wsum *csum, __wsum old, __wsum new)
{
 *csum = csum_add(csum_sub(*csum, old), new);
}

struct sk_buff;
void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb,
         __be32 from, __be32 to, bool pseudohdr);
void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
          const __be32 *from, const __be32 *to,
          bool pseudohdr);
void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
         __wsum diff, bool pseudohdr);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
         __be16 from, __be16 to, bool pseudohdr)
{
 inet_proto_csum_replace4(sum, skb, ( __be32)from,
     ( __be32)to, pseudohdr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum remcsum_adjust(void *ptr, __wsum csum,
          int start, int offset)
{
 __sum16 *psum = (__sum16 *)(ptr + offset);
 __wsum delta;


 csum = csum_sub(csum, csum_partial(ptr, start, 0));


 delta = csum_sub(( __wsum)csum_fold(csum),
    ( __wsum)*psum);
 *psum = csum_fold(csum);

 return delta;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void remcsum_unadjust(__sum16 *psum, __wsum delta)
{
 *psum = csum_fold(csum_sub(delta, ( __wsum)*psum));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __wsum wsum_negate(__wsum val)
{
 return ( __wsum)-(( u32)val);
}
# 27 "../include/linux/skbuff.h" 2

# 1 "../include/linux/dma-mapping.h" 1
# 11 "../include/linux/dma-mapping.h"
# 1 "../include/linux/scatterlist.h" 1
# 11 "../include/linux/scatterlist.h"
struct scatterlist {
 unsigned long page_link;
 unsigned int offset;
 unsigned int length;
 dma_addr_t dma_address;

 unsigned int dma_length;




};
# 39 "../include/linux/scatterlist.h"
struct sg_table {
 struct scatterlist *sgl;
 unsigned int nents;
 unsigned int orig_nents;
};

struct sg_append_table {
 struct sg_table sgt;
 struct scatterlist *prv;
 unsigned int total_nents;
};
# 77 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __sg_flags(struct scatterlist *sg)
{
 return sg->page_link & (0x01UL | 0x02UL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct scatterlist *sg_chain_ptr(struct scatterlist *sg)
{
 return (struct scatterlist *)(sg->page_link & ~(0x01UL | 0x02UL));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sg_is_chain(struct scatterlist *sg)
{
 return __sg_flags(sg) & 0x01UL;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sg_is_last(struct scatterlist *sg)
{
 return __sg_flags(sg) & 0x02UL;
}
# 107 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_assign_page(struct scatterlist *sg, struct page *page)
{
 unsigned long page_link = sg->page_link & (0x01UL | 0x02UL);





 do { if (__builtin_expect(!!((unsigned long)page & (0x01UL | 0x02UL)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/scatterlist.h", 115, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);



 sg->page_link = page_link | (unsigned long) page;
}
# 136 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_set_page(struct scatterlist *sg, struct page *page,
          unsigned int len, unsigned int offset)
{
 sg_assign_page(sg, page);
 sg->offset = offset;
 sg->length = len;
}
# 158 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_set_folio(struct scatterlist *sg, struct folio *folio,
          size_t len, size_t offset)
{
 ({ bool __ret_do_once = !!(len > (~0U)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/scatterlist.h", 161, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 ({ bool __ret_do_once = !!(offset > (~0U)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/scatterlist.h", 162, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 sg_assign_page(sg, &folio->page);
 sg->offset = offset;
 sg->length = len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *sg_page(struct scatterlist *sg)
{



 return (struct page *)((sg)->page_link & ~(0x01UL | 0x02UL));
}
# 183 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_set_buf(struct scatterlist *sg, const void *buf,
         unsigned int buflen)
{



 sg_set_page(sg, (mem_map + ((((((unsigned long)(buf) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14))), buflen, ((unsigned long)(buf) & ~(~((1 << 14) - 1))));
}
# 212 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sg_chain(struct scatterlist *chain_sg,
         struct scatterlist *sgl)
{



 chain_sg->offset = 0;
 chain_sg->length = 0;





 chain_sg->page_link = ((unsigned long) sgl | 0x01UL) & ~0x02UL;
}
# 238 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
       struct scatterlist *sgl)
{
 __sg_chain(&prv[prv_nents - 1], sgl);
}
# 253 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_mark_end(struct scatterlist *sg)
{



 sg->page_link |= 0x02UL;
 sg->page_link &= ~0x01UL;
}
# 270 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_unmark_end(struct scatterlist *sg)
{
 sg->page_link &= ~0x02UL;
}
# 357 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sg_dma_is_bus_address(struct scatterlist *sg)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_dma_mark_bus_address(struct scatterlist *sg)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_dma_unmark_bus_address(struct scatterlist *sg)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sg_dma_is_swiotlb(struct scatterlist *sg)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_dma_mark_swiotlb(struct scatterlist *sg)
{
}
# 387 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dma_addr_t sg_phys(struct scatterlist *sg)
{
 return (((unsigned long)((sg_page(sg)) - mem_map) + (__phys_offset >> 14)) << 14) + sg->offset;
}
# 402 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *sg_virt(struct scatterlist *sg)
{
 return lowmem_page_address(sg_page(sg)) + sg->offset;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sg_init_marker(struct scatterlist *sgl,
      unsigned int nents)
{
 sg_mark_end(&sgl[nents - 1]);
}

int sg_nents(struct scatterlist *sg);
int sg_nents_for_len(struct scatterlist *sg, u64 len);
struct scatterlist *sg_next(struct scatterlist *);
struct scatterlist *sg_last(struct scatterlist *s, unsigned int);
void sg_init_table(struct scatterlist *, unsigned int);
void sg_init_one(struct scatterlist *, const void *, unsigned int);
int sg_split(struct scatterlist *in, const int in_mapped_nents,
      const off_t skip, const int nb_splits,
      const size_t *split_sizes,
      struct scatterlist **out, int *out_mapped_nents,
      gfp_t gfp_mask);

typedef struct scatterlist *(sg_alloc_fn)(unsigned int, gfp_t);
typedef void (sg_free_fn)(struct scatterlist *, unsigned int);

void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
       sg_free_fn *, unsigned int);
void sg_free_table(struct sg_table *);
void sg_free_append_table(struct sg_append_table *sgt);
int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
       struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
int sg_alloc_append_table_from_pages(struct sg_append_table *sgt,
         struct page **pages, unsigned int n_pages,
         unsigned int offset, unsigned long size,
         unsigned int max_segment,
         unsigned int left_pages, gfp_t gfp_mask);
int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
          unsigned int n_pages, unsigned int offset,
          unsigned long size,
          unsigned int max_segment, gfp_t gfp_mask);
# 471 "../include/linux/scatterlist.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sg_alloc_table_from_pages(struct sg_table *sgt,
         struct page **pages,
         unsigned int n_pages,
         unsigned int offset,
         unsigned long size, gfp_t gfp_mask)
{
 return sg_alloc_table_from_pages_segment(sgt, pages, n_pages, offset,
       size, (~0U), gfp_mask);
}


struct scatterlist *sgl_alloc_order(unsigned long long length,
        unsigned int order, bool chainable,
        gfp_t gfp, unsigned int *nent_p);
struct scatterlist *sgl_alloc(unsigned long long length, gfp_t gfp,
         unsigned int *nent_p);
void sgl_free_n_order(struct scatterlist *sgl, int nents, int order);
void sgl_free_order(struct scatterlist *sgl, int order);
void sgl_free(struct scatterlist *sgl);


size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
        size_t buflen, off_t skip, bool to_buffer);

size_t sg_copy_from_buffer(struct scatterlist *sgl, unsigned int nents,
      const void *buf, size_t buflen);
size_t sg_copy_to_buffer(struct scatterlist *sgl, unsigned int nents,
    void *buf, size_t buflen);

size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents,
       const void *buf, size_t buflen, off_t skip);
size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
     void *buf, size_t buflen, off_t skip);
size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
         size_t buflen, off_t skip);
# 550 "../include/linux/scatterlist.h"
struct sg_page_iter {
 struct scatterlist *sg;
 unsigned int sg_pgoffset;


 unsigned int __nents;
 int __pg_advance;

};
# 567 "../include/linux/scatterlist.h"
struct sg_dma_page_iter {
 struct sg_page_iter base;
};

bool __sg_page_iter_next(struct sg_page_iter *piter);
bool __sg_page_iter_dma_next(struct sg_dma_page_iter *dma_iter);
void __sg_page_iter_start(struct sg_page_iter *piter,
     struct scatterlist *sglist, unsigned int nents,
     unsigned long pgoffset);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *sg_page_iter_page(struct sg_page_iter *piter)
{
 return ((sg_page(piter->sg)) + (piter->sg_pgoffset));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dma_addr_t
sg_page_iter_dma_address(struct sg_dma_page_iter *dma_iter)
{
 return ((dma_iter->base.sg)->dma_address) +
        (dma_iter->base.sg_pgoffset << 14);
}
# 675 "../include/linux/scatterlist.h"
struct sg_mapping_iter {

 struct page *page;
 void *addr;
 size_t length;
 size_t consumed;
 struct sg_page_iter piter;


 unsigned int __offset;
 unsigned int __remaining;
 unsigned int __flags;
};

void sg_miter_start(struct sg_mapping_iter *miter, struct scatterlist *sgl,
      unsigned int nents, unsigned int flags);
bool sg_miter_skip(struct sg_mapping_iter *miter, off_t offset);
bool sg_miter_next(struct sg_mapping_iter *miter);
void sg_miter_stop(struct sg_mapping_iter *miter);
# 12 "../include/linux/dma-mapping.h" 2

# 1 "../include/linux/mem_encrypt.h" 1
# 14 "../include/linux/dma-mapping.h" 2
# 84 "../include/linux/dma-mapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void debug_dma_mapping_error(struct device *dev,
  dma_addr_t dma_addr)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void debug_dma_map_single(struct device *dev, const void *addr,
  unsigned long len)
{
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
 debug_dma_mapping_error(dev, dma_addr);

 if (__builtin_expect(!!(dma_addr == (~(dma_addr_t)0)), 0))
  return -12;
 return 0;
}

dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
  size_t offset, size_t size, enum dma_data_direction dir,
  unsigned long attrs);
void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
  enum dma_data_direction dir, unsigned long attrs);
unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
  int nents, enum dma_data_direction dir, unsigned long attrs);
void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
          int nents, enum dma_data_direction dir,
          unsigned long attrs);
int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
  enum dma_data_direction dir, unsigned long attrs);
dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr,
  size_t size, enum dma_data_direction dir, unsigned long attrs);
void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
  enum dma_data_direction dir, unsigned long attrs);
void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
  gfp_t flag, unsigned long attrs);
void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
  dma_addr_t dma_handle, unsigned long attrs);
void *dmam_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
  gfp_t gfp, unsigned long attrs);
void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
  dma_addr_t dma_handle);
int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
  void *cpu_addr, dma_addr_t dma_addr, size_t size,
  unsigned long attrs);
int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
  void *cpu_addr, dma_addr_t dma_addr, size_t size,
  unsigned long attrs);
bool dma_can_mmap(struct device *dev);
bool dma_pci_p2pdma_supported(struct device *dev);
int dma_set_mask(struct device *dev, u64 mask);
int dma_set_coherent_mask(struct device *dev, u64 mask);
u64 dma_get_required_mask(struct device *dev);
bool dma_addressing_limited(struct device *dev);
size_t dma_max_mapping_size(struct device *dev);
size_t dma_opt_mapping_size(struct device *dev);
unsigned long dma_get_merge_boundary(struct device *dev);
struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
  enum dma_data_direction dir, gfp_t gfp, unsigned long attrs);
void dma_free_noncontiguous(struct device *dev, size_t size,
  struct sg_table *sgt, enum dma_data_direction dir);
void *dma_vmap_noncontiguous(struct device *dev, size_t size,
  struct sg_table *sgt);
void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
  size_t size, struct sg_table *sgt);
# 285 "../include/linux/dma-mapping.h"
void __dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
  enum dma_data_direction dir);
void __dma_sync_single_for_device(struct device *dev, dma_addr_t addr,
  size_t size, enum dma_data_direction dir);
void __dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
  int nelems, enum dma_data_direction dir);
void __dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
  int nelems, enum dma_data_direction dir);
bool __dma_need_sync(struct device *dev, dma_addr_t dma_addr);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dma_dev_need_sync(const struct device *dev)
{

 return !dev->dma_skip_sync || 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
  size_t size, enum dma_data_direction dir)
{
 if (dma_dev_need_sync(dev))
  __dma_sync_single_for_cpu(dev, addr, size, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_single_for_device(struct device *dev,
  dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
 if (dma_dev_need_sync(dev))
  __dma_sync_single_for_device(dev, addr, size, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_sg_for_cpu(struct device *dev,
  struct scatterlist *sg, int nelems, enum dma_data_direction dir)
{
 if (dma_dev_need_sync(dev))
  __dma_sync_sg_for_cpu(dev, sg, nelems, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_sg_for_device(struct device *dev,
  struct scatterlist *sg, int nelems, enum dma_data_direction dir)
{
 if (dma_dev_need_sync(dev))
  __dma_sync_sg_for_device(dev, sg, nelems, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
 return dma_dev_need_sync(dev) ? __dma_need_sync(dev, dma_addr) : false;
}
# 360 "../include/linux/dma-mapping.h"
struct page *dma_alloc_pages(struct device *dev, size_t size,
  dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
void dma_free_pages(struct device *dev, size_t size, struct page *page,
  dma_addr_t dma_handle, enum dma_data_direction dir);
int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
  size_t size, struct page *page);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dma_alloc_noncoherent(struct device *dev, size_t size,
  dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
{
 struct page *page = dma_alloc_pages(dev, size, dma_handle, dir, gfp);
 return page ? lowmem_page_address(page) : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_free_noncoherent(struct device *dev, size_t size,
  void *vaddr, dma_addr_t dma_handle, enum dma_data_direction dir)
{
 dma_free_pages(dev, size, (mem_map + ((((((unsigned long)(vaddr) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14))), dma_handle, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
  size_t size, enum dma_data_direction dir, unsigned long attrs)
{

 if (({ bool __ret_do_once = !!(is_vmalloc_addr(ptr)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/dma-mapping.h", 385, 9, "%s %s: " "rejecting DMA map of vmalloc memory\n", dev_driver_string(dev), dev_name(dev)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))

  return (~(dma_addr_t)0);
 debug_dma_map_single(dev, ptr, size);
 return dma_map_page_attrs(dev, (mem_map + ((((((unsigned long)(ptr) - (0xc0000000UL) + __phys_offset)) >> 14)) - (__phys_offset >> 14))), ((unsigned long)(ptr) & ~(~((1 << 14) - 1))),
   size, dir, attrs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
  size_t size, enum dma_data_direction dir, unsigned long attrs)
{
 return dma_unmap_page_attrs(dev, addr, size, dir, attrs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_single_range_for_cpu(struct device *dev,
  dma_addr_t addr, unsigned long offset, size_t size,
  enum dma_data_direction dir)
{
 return dma_sync_single_for_cpu(dev, addr + offset, size, dir);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_single_range_for_device(struct device *dev,
  dma_addr_t addr, unsigned long offset, size_t size,
  enum dma_data_direction dir)
{
 return dma_sync_single_for_device(dev, addr + offset, size, dir);
}
# 423 "../include/linux/dma-mapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_unmap_sgtable(struct device *dev, struct sg_table *sgt,
  enum dma_data_direction dir, unsigned long attrs)
{
 dma_unmap_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
}
# 441 "../include/linux/dma-mapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_sgtable_for_cpu(struct device *dev,
  struct sg_table *sgt, enum dma_data_direction dir)
{
 dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, dir);
}
# 458 "../include/linux/dma-mapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_sync_sgtable_for_device(struct device *dev,
  struct sg_table *sgt, enum dma_data_direction dir)
{
 dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents, dir);
}
# 473 "../include/linux/dma-mapping.h"
bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dma_alloc_coherent(struct device *dev, size_t size,
  dma_addr_t *dma_handle, gfp_t gfp)
{
 return dma_alloc_attrs(dev, size, dma_handle, gfp,
   (gfp & (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)))) ? (1UL << 8) : 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_free_coherent(struct device *dev, size_t size,
  void *cpu_addr, dma_addr_t dma_handle)
{
 return dma_free_attrs(dev, size, cpu_addr, dma_handle, 0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 dma_get_mask(struct device *dev)
{
 if (dev->dma_mask && *dev->dma_mask)
  return *dev->dma_mask;
 return (((32) == 64) ? ~0ULL : ((1ULL<<(32))-1));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_set_mask_and_coherent(struct device *dev, u64 mask)
{
 int rc = dma_set_mask(dev, mask);
 if (rc == 0)
  dma_set_coherent_mask(dev, mask);
 return rc;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
{
 dev->dma_mask = &dev->coherent_dma_mask;
 return dma_set_mask_and_coherent(dev, mask);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int dma_get_max_seg_size(struct device *dev)
{
 if (dev->dma_parms && dev->dma_parms->max_segment_size)
  return dev->dma_parms->max_segment_size;
 return 0x00010000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_set_max_seg_size(struct device *dev, unsigned int size)
{
 if (dev->dma_parms) {
  dev->dma_parms->max_segment_size = size;
  return 0;
 }
 return -5;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long dma_get_seg_boundary(struct device *dev)
{
 if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
  return dev->dma_parms->segment_boundary_mask;
 return (~0UL);
}
# 554 "../include/linux/dma-mapping.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
  unsigned int page_shift)
{
 if (!dev)
  return (((u32)~0U) >> page_shift) + 1;
 return (dma_get_seg_boundary(dev) >> page_shift) + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_set_seg_boundary(struct device *dev, unsigned long mask)
{
 if (dev->dma_parms) {
  dev->dma_parms->segment_boundary_mask = mask;
  return 0;
 }
 return -5;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int dma_get_min_align_mask(struct device *dev)
{
 if (dev->dma_parms)
  return dev->dma_parms->min_align_mask;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_set_min_align_mask(struct device *dev,
  unsigned int min_align_mask)
{
 if (({ bool __ret_do_once = !!(!dev->dma_parms); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/dma-mapping.h", 581, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return -5;
 dev->dma_parms->min_align_mask = min_align_mask;
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_get_cache_alignment(void)
{

 return (1 << (5));

 return 1;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dmam_alloc_coherent(struct device *dev, size_t size,
  dma_addr_t *dma_handle, gfp_t gfp)
{
 return dmam_alloc_attrs(dev, size, dma_handle, gfp,
   (gfp & (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)))) ? (1UL << 8) : 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dma_alloc_wc(struct device *dev, size_t size,
     dma_addr_t *dma_addr, gfp_t gfp)
{
 unsigned long attrs = (1UL << 2);

 if (gfp & (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT))))
  attrs |= (1UL << 8);

 return dma_alloc_attrs(dev, size, dma_addr, gfp, attrs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dma_free_wc(struct device *dev, size_t size,
          void *cpu_addr, dma_addr_t dma_addr)
{
 return dma_free_attrs(dev, size, cpu_addr, dma_addr,
         (1UL << 2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dma_mmap_wc(struct device *dev,
         struct vm_area_struct *vma,
         void *cpu_addr, dma_addr_t dma_addr,
         size_t size)
{
 return dma_mmap_attrs(dev, vma, cpu_addr, dma_addr, size,
         (1UL << 2));
}
# 29 "../include/linux/skbuff.h" 2
# 1 "../include/linux/netdev_features.h" 1
# 12 "../include/linux/netdev_features.h"
typedef u64 netdev_features_t;

enum {
 NETIF_F_SG_BIT,
 NETIF_F_IP_CSUM_BIT,
 __UNUSED_NETIF_F_1,
 NETIF_F_HW_CSUM_BIT,
 NETIF_F_IPV6_CSUM_BIT,
 NETIF_F_HIGHDMA_BIT,
 NETIF_F_FRAGLIST_BIT,
 NETIF_F_HW_VLAN_CTAG_TX_BIT,
 NETIF_F_HW_VLAN_CTAG_RX_BIT,
 NETIF_F_HW_VLAN_CTAG_FILTER_BIT,
 NETIF_F_VLAN_CHALLENGED_BIT,
 NETIF_F_GSO_BIT,
 NETIF_F_LLTX_BIT,

 NETIF_F_NETNS_LOCAL_BIT,
 NETIF_F_GRO_BIT,
 NETIF_F_LRO_BIT,

     NETIF_F_GSO_SHIFT,
 NETIF_F_TSO_BIT
  = NETIF_F_GSO_SHIFT,
 NETIF_F_GSO_ROBUST_BIT,
 NETIF_F_TSO_ECN_BIT,
 NETIF_F_TSO_MANGLEID_BIT,
 NETIF_F_TSO6_BIT,
 NETIF_F_FSO_BIT,
 NETIF_F_GSO_GRE_BIT,
 NETIF_F_GSO_GRE_CSUM_BIT,
 NETIF_F_GSO_IPXIP4_BIT,
 NETIF_F_GSO_IPXIP6_BIT,
 NETIF_F_GSO_UDP_TUNNEL_BIT,
 NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT,
 NETIF_F_GSO_PARTIAL_BIT,



 NETIF_F_GSO_TUNNEL_REMCSUM_BIT,
 NETIF_F_GSO_SCTP_BIT,
 NETIF_F_GSO_ESP_BIT,
 NETIF_F_GSO_UDP_BIT,
 NETIF_F_GSO_UDP_L4_BIT,
 NETIF_F_GSO_FRAGLIST_BIT,
     NETIF_F_GSO_LAST =
  NETIF_F_GSO_FRAGLIST_BIT,

 NETIF_F_FCOE_CRC_BIT,
 NETIF_F_SCTP_CRC_BIT,
 NETIF_F_FCOE_MTU_BIT,
 NETIF_F_NTUPLE_BIT,
 NETIF_F_RXHASH_BIT,
 NETIF_F_RXCSUM_BIT,
 NETIF_F_NOCACHE_COPY_BIT,
 NETIF_F_LOOPBACK_BIT,
 NETIF_F_RXFCS_BIT,
 NETIF_F_RXALL_BIT,
 NETIF_F_HW_VLAN_STAG_TX_BIT,
 NETIF_F_HW_VLAN_STAG_RX_BIT,
 NETIF_F_HW_VLAN_STAG_FILTER_BIT,
 NETIF_F_HW_L2FW_DOFFLOAD_BIT,

 NETIF_F_HW_TC_BIT,
 NETIF_F_HW_ESP_BIT,
 NETIF_F_HW_ESP_TX_CSUM_BIT,
 NETIF_F_RX_UDP_TUNNEL_PORT_BIT,
 NETIF_F_HW_TLS_TX_BIT,
 NETIF_F_HW_TLS_RX_BIT,

 NETIF_F_GRO_HW_BIT,
 NETIF_F_HW_TLS_RECORD_BIT,
 NETIF_F_GRO_FRAGLIST_BIT,

 NETIF_F_HW_MACSEC_BIT,
 NETIF_F_GRO_UDP_FWD_BIT,

 NETIF_F_HW_HSR_TAG_INS_BIT,
 NETIF_F_HW_HSR_TAG_RM_BIT,
 NETIF_F_HW_HSR_FWD_BIT,
 NETIF_F_HW_HSR_DUP_BIT,
# 101 "../include/linux/netdev_features.h"
     NETDEV_FEATURE_COUNT
};
# 174 "../include/linux/netdev_features.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int find_next_netdev_feature(u64 feature, unsigned long start)
{



 feature &= ~0ULL >> (-start & ((sizeof(feature) * 8) - 1));

 return fls64(feature) - 1;
}
# 30 "../include/linux/skbuff.h" 2
# 1 "../include/net/flow_dissector.h" 1





# 1 "../include/linux/in6.h" 1
# 19 "../include/linux/in6.h"
# 1 "../include/uapi/linux/in6.h" 1
# 33 "../include/uapi/linux/in6.h"
struct in6_addr {
 union {
  __u8 u6_addr8[16];

  __be16 u6_addr16[8];
  __be32 u6_addr32[4];

 } in6_u;





};



struct sockaddr_in6 {
 unsigned short int sin6_family;
 __be16 sin6_port;
 __be32 sin6_flowinfo;
 struct in6_addr sin6_addr;
 __u32 sin6_scope_id;
};



struct ipv6_mreq {

 struct in6_addr ipv6mr_multiaddr;


 int ipv6mr_ifindex;
};




struct in6_flowlabel_req {
 struct in6_addr flr_dst;
 __be32 flr_label;
 __u8 flr_action;
 __u8 flr_share;
 __u16 flr_flags;
 __u16 flr_expires;
 __u16 flr_linger;
 __u32 __flr_pad;

};
# 20 "../include/linux/in6.h" 2





extern const struct in6_addr in6addr_any;

extern const struct in6_addr in6addr_loopback;

extern const struct in6_addr in6addr_linklocal_allnodes;


extern const struct in6_addr in6addr_linklocal_allrouters;


extern const struct in6_addr in6addr_interfacelocal_allnodes;


extern const struct in6_addr in6addr_interfacelocal_allrouters;


extern const struct in6_addr in6addr_sitelocal_allrouters;
# 7 "../include/net/flow_dissector.h" 2
# 1 "../include/linux/siphash.h" 1
# 19 "../include/linux/siphash.h"
typedef struct {
 u64 key[2];
} siphash_key_t;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool siphash_key_is_zero(const siphash_key_t *key)
{
 return !(key->key[0] | key->key[1]);
}

u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);

u64 siphash_1u64(const u64 a, const siphash_key_t *key);
u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key);
u64 siphash_3u64(const u64 a, const u64 b, const u64 c,
   const siphash_key_t *key);
u64 siphash_4u64(const u64 a, const u64 b, const u64 c, const u64 d,
   const siphash_key_t *key);
u64 siphash_1u32(const u32 a, const siphash_key_t *key);
u64 siphash_3u32(const u32 a, const u32 b, const u32 c,
   const siphash_key_t *key);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 siphash_2u32(const u32 a, const u32 b,
          const siphash_key_t *key)
{
 return siphash_1u64((u64)b << 32 | a, key);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 siphash_4u32(const u32 a, const u32 b, const u32 c,
          const u32 d, const siphash_key_t *key)
{
 return siphash_2u64((u64)b << 32 | a, (u64)d << 32 | c, key);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ___siphash_aligned(const __le64 *data, size_t len,
         const siphash_key_t *key)
{
 if (__builtin_constant_p(len) && len == 4)
  return siphash_1u32(__le32_to_cpup((const __le32 *)data), key);
 if (__builtin_constant_p(len) && len == 8)
  return siphash_1u64((( __u64)(__le64)(data[0])), key);
 if (__builtin_constant_p(len) && len == 16)
  return siphash_2u64((( __u64)(__le64)(data[0])), (( __u64)(__le64)(data[1])),
        key);
 if (__builtin_constant_p(len) && len == 24)
  return siphash_3u64((( __u64)(__le64)(data[0])), (( __u64)(__le64)(data[1])),
        (( __u64)(__le64)(data[2])), key);
 if (__builtin_constant_p(len) && len == 32)
  return siphash_4u64((( __u64)(__le64)(data[0])), (( __u64)(__le64)(data[1])),
        (( __u64)(__le64)(data[2])), (( __u64)(__le64)(data[3])),
        key);
 return __siphash_aligned(data, len, key);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 siphash(const void *data, size_t len,
     const siphash_key_t *key)
{
 if (0 ||
     !((((unsigned long)data) & ((typeof((unsigned long)data))(__alignof__(u64)) - 1)) == 0))
  return __siphash_unaligned(data, len, key);
 return ___siphash_aligned(data, len, key);
}


typedef struct {
 unsigned long key[2];
} hsiphash_key_t;

u32 __hsiphash_aligned(const void *data, size_t len,
         const hsiphash_key_t *key);
u32 __hsiphash_unaligned(const void *data, size_t len,
    const hsiphash_key_t *key);

u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
    const hsiphash_key_t *key);
u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
    const hsiphash_key_t *key);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ___hsiphash_aligned(const __le32 *data, size_t len,
          const hsiphash_key_t *key)
{
 if (__builtin_constant_p(len) && len == 4)
  return hsiphash_1u32((( __u32)(__le32)(data[0])), key);
 if (__builtin_constant_p(len) && len == 8)
  return hsiphash_2u32((( __u32)(__le32)(data[0])), (( __u32)(__le32)(data[1])),
         key);
 if (__builtin_constant_p(len) && len == 12)
  return hsiphash_3u32((( __u32)(__le32)(data[0])), (( __u32)(__le32)(data[1])),
         (( __u32)(__le32)(data[2])), key);
 if (__builtin_constant_p(len) && len == 16)
  return hsiphash_4u32((( __u32)(__le32)(data[0])), (( __u32)(__le32)(data[1])),
         (( __u32)(__le32)(data[2])), (( __u32)(__le32)(data[3])),
         key);
 return __hsiphash_aligned(data, len, key);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 hsiphash(const void *data, size_t len,
      const hsiphash_key_t *key)
{
 if (0 ||
     !((((unsigned long)data) & ((typeof((unsigned long)data))(__alignof__(unsigned long)) - 1)) == 0))
  return __hsiphash_unaligned(data, len, key);
 return ___hsiphash_aligned(data, len, key);
}
# 8 "../include/net/flow_dissector.h" 2

# 1 "../include/uapi/linux/if_ether.h" 1
# 173 "../include/uapi/linux/if_ether.h"
struct ethhdr {
 unsigned char h_dest[6];
 unsigned char h_source[6];
 __be16 h_proto;
} __attribute__((packed));
# 10 "../include/net/flow_dissector.h" 2
# 1 "../include/uapi/linux/pkt_cls.h" 1





# 1 "../include/uapi/linux/pkt_sched.h" 1
# 34 "../include/uapi/linux/pkt_sched.h"
struct tc_stats {
 __u64 bytes;
 __u32 packets;
 __u32 drops;
 __u32 overlimits;

 __u32 bps;
 __u32 pps;
 __u32 qlen;
 __u32 backlog;
};

struct tc_estimator {
 signed char interval;
 unsigned char ewma_log;
};
# 84 "../include/uapi/linux/pkt_sched.h"
enum tc_link_layer {
 TC_LINKLAYER_UNAWARE,
 TC_LINKLAYER_ETHERNET,
 TC_LINKLAYER_ATM,
};


struct tc_ratespec {
 unsigned char cell_log;
 __u8 linklayer;
 unsigned short overhead;
 short cell_align;
 unsigned short mpu;
 __u32 rate;
};



struct tc_sizespec {
 unsigned char cell_log;
 unsigned char size_log;
 short cell_align;
 int overhead;
 unsigned int linklayer;
 unsigned int mpu;
 unsigned int mtu;
 unsigned int tsize;
};

enum {
 TCA_STAB_UNSPEC,
 TCA_STAB_BASE,
 TCA_STAB_DATA,
 __TCA_STAB_MAX
};





struct tc_fifo_qopt {
 __u32 limit;
};
# 139 "../include/uapi/linux/pkt_sched.h"
struct tc_skbprio_qopt {
 __u32 limit;
};






struct tc_prio_qopt {
 int bands;
 __u8 priomap[15 +1];
};



struct tc_multiq_qopt {
 __u16 bands;
 __u16 max_bands;
};
# 167 "../include/uapi/linux/pkt_sched.h"
struct tc_plug_qopt {
# 177 "../include/uapi/linux/pkt_sched.h"
 int action;
 __u32 limit;
};



struct tc_tbf_qopt {
 struct tc_ratespec rate;
 struct tc_ratespec peakrate;
 __u32 limit;
 __u32 buffer;
 __u32 mtu;
};

enum {
 TCA_TBF_UNSPEC,
 TCA_TBF_PARMS,
 TCA_TBF_RTAB,
 TCA_TBF_PTAB,
 TCA_TBF_RATE64,
 TCA_TBF_PRATE64,
 TCA_TBF_BURST,
 TCA_TBF_PBURST,
 TCA_TBF_PAD,
 __TCA_TBF_MAX,
};
# 213 "../include/uapi/linux/pkt_sched.h"
struct tc_sfq_qopt {
 unsigned quantum;
 int perturb_period;
 __u32 limit;
 unsigned divisor;
 unsigned flows;
};

struct tc_sfqred_stats {
 __u32 prob_drop;
 __u32 forced_drop;
 __u32 prob_mark;
 __u32 forced_mark;
 __u32 prob_mark_head;
 __u32 forced_mark_head;
};

struct tc_sfq_qopt_v1 {
 struct tc_sfq_qopt v0;
 unsigned int depth;
 unsigned int headdrop;

 __u32 limit;
 __u32 qth_min;
 __u32 qth_max;
 unsigned char Wlog;
 unsigned char Plog;
 unsigned char Scell_log;
 unsigned char flags;
 __u32 max_P;

 struct tc_sfqred_stats stats;
};


struct tc_sfq_xstats {
 __s32 allot;
};



enum {
 TCA_RED_UNSPEC,
 TCA_RED_PARMS,
 TCA_RED_STAB,
 TCA_RED_MAX_P,
 TCA_RED_FLAGS,
 TCA_RED_EARLY_DROP_BLOCK,
 TCA_RED_MARK_BLOCK,
 __TCA_RED_MAX,
};



struct tc_red_qopt {
 __u32 limit;
 __u32 qth_min;
 __u32 qth_max;
 unsigned char Wlog;
 unsigned char Plog;
 unsigned char Scell_log;
# 287 "../include/uapi/linux/pkt_sched.h"
 unsigned char flags;




};



struct tc_red_xstats {
 __u32 early;
 __u32 pdrop;
 __u32 other;
 __u32 marked;
};





enum {
 TCA_GRED_UNSPEC,
 TCA_GRED_PARMS,
 TCA_GRED_STAB,
 TCA_GRED_DPS,
 TCA_GRED_MAX_P,
 TCA_GRED_LIMIT,
 TCA_GRED_VQ_LIST,
 __TCA_GRED_MAX,
};



enum {
 TCA_GRED_VQ_ENTRY_UNSPEC,
 TCA_GRED_VQ_ENTRY,
 __TCA_GRED_VQ_ENTRY_MAX,
};


enum {
 TCA_GRED_VQ_UNSPEC,
 TCA_GRED_VQ_PAD,
 TCA_GRED_VQ_DP,
 TCA_GRED_VQ_STAT_BYTES,
 TCA_GRED_VQ_STAT_PACKETS,
 TCA_GRED_VQ_STAT_BACKLOG,
 TCA_GRED_VQ_STAT_PROB_DROP,
 TCA_GRED_VQ_STAT_PROB_MARK,
 TCA_GRED_VQ_STAT_FORCED_DROP,
 TCA_GRED_VQ_STAT_FORCED_MARK,
 TCA_GRED_VQ_STAT_PDROP,
 TCA_GRED_VQ_STAT_OTHER,
 TCA_GRED_VQ_FLAGS,
 __TCA_GRED_VQ_MAX
};



struct tc_gred_qopt {
 __u32 limit;
 __u32 qth_min;
 __u32 qth_max;
 __u32 DP;
 __u32 backlog;
 __u32 qave;
 __u32 forced;
 __u32 early;
 __u32 other;
 __u32 pdrop;
 __u8 Wlog;
 __u8 Plog;
 __u8 Scell_log;
 __u8 prio;
 __u32 packets;
 __u32 bytesin;
};


struct tc_gred_sopt {
 __u32 DPs;
 __u32 def_DP;
 __u8 grio;
 __u8 flags;
 __u16 pad1;
};



enum {
 TCA_CHOKE_UNSPEC,
 TCA_CHOKE_PARMS,
 TCA_CHOKE_STAB,
 TCA_CHOKE_MAX_P,
 __TCA_CHOKE_MAX,
};



struct tc_choke_qopt {
 __u32 limit;
 __u32 qth_min;
 __u32 qth_max;
 unsigned char Wlog;
 unsigned char Plog;
 unsigned char Scell_log;
 unsigned char flags;
};

struct tc_choke_xstats {
 __u32 early;
 __u32 pdrop;
 __u32 other;
 __u32 marked;
 __u32 matched;
};






struct tc_htb_opt {
 struct tc_ratespec rate;
 struct tc_ratespec ceil;
 __u32 buffer;
 __u32 cbuffer;
 __u32 quantum;
 __u32 level;
 __u32 prio;
};
struct tc_htb_glob {
 __u32 version;
 __u32 rate2quantum;
 __u32 defcls;
 __u32 debug;


 __u32 direct_pkts;
};
enum {
 TCA_HTB_UNSPEC,
 TCA_HTB_PARMS,
 TCA_HTB_INIT,
 TCA_HTB_CTAB,
 TCA_HTB_RTAB,
 TCA_HTB_DIRECT_QLEN,
 TCA_HTB_RATE64,
 TCA_HTB_CEIL64,
 TCA_HTB_PAD,
 TCA_HTB_OFFLOAD,
 __TCA_HTB_MAX,
};



struct tc_htb_xstats {
 __u32 lends;
 __u32 borrows;
 __u32 giants;
 __s32 tokens;
 __s32 ctokens;
};



struct tc_hfsc_qopt {
 __u16 defcls;
};

struct tc_service_curve {
 __u32 m1;
 __u32 d;
 __u32 m2;
};

struct tc_hfsc_stats {
 __u64 work;
 __u64 rtwork;
 __u32 period;
 __u32 level;
};

enum {
 TCA_HFSC_UNSPEC,
 TCA_HFSC_RSC,
 TCA_HFSC_FSC,
 TCA_HFSC_USC,
 __TCA_HFSC_MAX,
};





enum {
 TCA_NETEM_UNSPEC,
 TCA_NETEM_CORR,
 TCA_NETEM_DELAY_DIST,
 TCA_NETEM_REORDER,
 TCA_NETEM_CORRUPT,
 TCA_NETEM_LOSS,
 TCA_NETEM_RATE,
 TCA_NETEM_ECN,
 TCA_NETEM_RATE64,
 TCA_NETEM_PAD,
 TCA_NETEM_LATENCY64,
 TCA_NETEM_JITTER64,
 TCA_NETEM_SLOT,
 TCA_NETEM_SLOT_DIST,
 TCA_NETEM_PRNG_SEED,
 __TCA_NETEM_MAX,
};



struct tc_netem_qopt {
 __u32 latency;
 __u32 limit;
 __u32 loss;
 __u32 gap;
 __u32 duplicate;
 __u32 jitter;
};

struct tc_netem_corr {
 __u32 delay_corr;
 __u32 loss_corr;
 __u32 dup_corr;
};

struct tc_netem_reorder {
 __u32 probability;
 __u32 correlation;
};

struct tc_netem_corrupt {
 __u32 probability;
 __u32 correlation;
};

struct tc_netem_rate {
 __u32 rate;
 __s32 packet_overhead;
 __u32 cell_size;
 __s32 cell_overhead;
};

struct tc_netem_slot {
 __s64 min_delay;
 __s64 max_delay;
 __s32 max_packets;
 __s32 max_bytes;
 __s64 dist_delay;
 __s64 dist_jitter;
};

enum {
 NETEM_LOSS_UNSPEC,
 NETEM_LOSS_GI,
 NETEM_LOSS_GE,
 __NETEM_LOSS_MAX
};



struct tc_netem_gimodel {
 __u32 p13;
 __u32 p31;
 __u32 p32;
 __u32 p14;
 __u32 p23;
};


struct tc_netem_gemodel {
 __u32 p;
 __u32 r;
 __u32 h;
 __u32 k1;
};






enum {
 TCA_DRR_UNSPEC,
 TCA_DRR_QUANTUM,
 __TCA_DRR_MAX
};



struct tc_drr_stats {
 __u32 deficit;
};





enum {
 TC_MQPRIO_HW_OFFLOAD_NONE,
 TC_MQPRIO_HW_OFFLOAD_TCS,
 __TC_MQPRIO_HW_OFFLOAD_MAX
};



enum {
 TC_MQPRIO_MODE_DCB,
 TC_MQPRIO_MODE_CHANNEL,
 __TC_MQPRIO_MODE_MAX
};



enum {
 TC_MQPRIO_SHAPER_DCB,
 TC_MQPRIO_SHAPER_BW_RATE,
 __TC_MQPRIO_SHAPER_MAX
};



enum {
 TC_FP_EXPRESS = 1,
 TC_FP_PREEMPTIBLE = 2,
};

struct tc_mqprio_qopt {
 __u8 num_tc;
 __u8 prio_tc_map[15 + 1];
 __u8 hw;
 __u16 count[16];
 __u16 offset[16];
};






enum {
 TCA_MQPRIO_TC_ENTRY_UNSPEC,
 TCA_MQPRIO_TC_ENTRY_INDEX,
 TCA_MQPRIO_TC_ENTRY_FP,


 __TCA_MQPRIO_TC_ENTRY_CNT,
 TCA_MQPRIO_TC_ENTRY_MAX = (__TCA_MQPRIO_TC_ENTRY_CNT - 1)
};

enum {
 TCA_MQPRIO_UNSPEC,
 TCA_MQPRIO_MODE,
 TCA_MQPRIO_SHAPER,
 TCA_MQPRIO_MIN_RATE64,
 TCA_MQPRIO_MAX_RATE64,
 TCA_MQPRIO_TC_ENTRY,
 __TCA_MQPRIO_MAX,
};





enum {
 TCA_SFB_UNSPEC,
 TCA_SFB_PARMS,
 __TCA_SFB_MAX,
};






struct tc_sfb_qopt {
 __u32 rehash_interval;
 __u32 warmup_time;
 __u32 max;
 __u32 bin_size;
 __u32 increment;
 __u32 decrement;
 __u32 limit;
 __u32 penalty_rate;
 __u32 penalty_burst;
};

struct tc_sfb_xstats {
 __u32 earlydrop;
 __u32 penaltydrop;
 __u32 bucketdrop;
 __u32 queuedrop;
 __u32 childdrop;
 __u32 marked;
 __u32 maxqlen;
 __u32 maxprob;
 __u32 avgprob;
};




enum {
 TCA_QFQ_UNSPEC,
 TCA_QFQ_WEIGHT,
 TCA_QFQ_LMAX,
 __TCA_QFQ_MAX
};



struct tc_qfq_stats {
 __u32 weight;
 __u32 lmax;
};



enum {
 TCA_CODEL_UNSPEC,
 TCA_CODEL_TARGET,
 TCA_CODEL_LIMIT,
 TCA_CODEL_INTERVAL,
 TCA_CODEL_ECN,
 TCA_CODEL_CE_THRESHOLD,
 __TCA_CODEL_MAX
};



struct tc_codel_xstats {
 __u32 maxpacket;
 __u32 count;


 __u32 lastcount;
 __u32 ldelay;
 __s32 drop_next;
 __u32 drop_overlimit;
 __u32 ecn_mark;
 __u32 dropping;
 __u32 ce_mark;
};





enum {
 TCA_FQ_CODEL_UNSPEC,
 TCA_FQ_CODEL_TARGET,
 TCA_FQ_CODEL_LIMIT,
 TCA_FQ_CODEL_INTERVAL,
 TCA_FQ_CODEL_ECN,
 TCA_FQ_CODEL_FLOWS,
 TCA_FQ_CODEL_QUANTUM,
 TCA_FQ_CODEL_CE_THRESHOLD,
 TCA_FQ_CODEL_DROP_BATCH_SIZE,
 TCA_FQ_CODEL_MEMORY_LIMIT,
 TCA_FQ_CODEL_CE_THRESHOLD_SELECTOR,
 TCA_FQ_CODEL_CE_THRESHOLD_MASK,
 __TCA_FQ_CODEL_MAX
};



enum {
 TCA_FQ_CODEL_XSTATS_QDISC,
 TCA_FQ_CODEL_XSTATS_CLASS,
};

struct tc_fq_codel_qd_stats {
 __u32 maxpacket;
 __u32 drop_overlimit;


 __u32 ecn_mark;


 __u32 new_flow_count;


 __u32 new_flows_len;
 __u32 old_flows_len;
 __u32 ce_mark;
 __u32 memory_usage;
 __u32 drop_overmemory;
};

struct tc_fq_codel_cl_stats {
 __s32 deficit;
 __u32 ldelay;


 __u32 count;
 __u32 lastcount;
 __u32 dropping;
 __s32 drop_next;
};

struct tc_fq_codel_xstats {
 __u32 type;
 union {
  struct tc_fq_codel_qd_stats qdisc_stats;
  struct tc_fq_codel_cl_stats class_stats;
 };
};



enum {
 TCA_FQ_UNSPEC,

 TCA_FQ_PLIMIT,

 TCA_FQ_FLOW_PLIMIT,

 TCA_FQ_QUANTUM,

 TCA_FQ_INITIAL_QUANTUM,

 TCA_FQ_RATE_ENABLE,

 TCA_FQ_FLOW_DEFAULT_RATE,

 TCA_FQ_FLOW_MAX_RATE,

 TCA_FQ_BUCKETS_LOG,

 TCA_FQ_FLOW_REFILL_DELAY,

 TCA_FQ_ORPHAN_MASK,

 TCA_FQ_LOW_RATE_THRESHOLD,

 TCA_FQ_CE_THRESHOLD,

 TCA_FQ_TIMER_SLACK,

 TCA_FQ_HORIZON,

 TCA_FQ_HORIZON_DROP,

 TCA_FQ_PRIOMAP,

 TCA_FQ_WEIGHTS,

 __TCA_FQ_MAX
};






struct tc_fq_qd_stats {
 __u64 gc_flows;
 __u64 highprio_packets;
 __u64 tcp_retrans;
 __u64 throttled;
 __u64 flows_plimit;
 __u64 pkts_too_long;
 __u64 allocation_errors;
 __s64 time_next_delayed_flow;
 __u32 flows;
 __u32 inactive_flows;
 __u32 throttled_flows;
 __u32 unthrottle_latency_ns;
 __u64 ce_mark;
 __u64 horizon_drops;
 __u64 horizon_caps;
 __u64 fastpath_packets;
 __u64 band_drops[3];
 __u32 band_pkt_count[3];
 __u32 pad;
};



enum {
 TCA_HHF_UNSPEC,
 TCA_HHF_BACKLOG_LIMIT,
 TCA_HHF_QUANTUM,
 TCA_HHF_HH_FLOWS_LIMIT,
 TCA_HHF_RESET_TIMEOUT,
 TCA_HHF_ADMIT_BYTES,
 TCA_HHF_EVICT_TIMEOUT,
 TCA_HHF_NON_HH_WEIGHT,
 __TCA_HHF_MAX
};



struct tc_hhf_xstats {
 __u32 drop_overlimit;


 __u32 hh_overlimit;
 __u32 hh_tot_count;
 __u32 hh_cur_count;
};


enum {
 TCA_PIE_UNSPEC,
 TCA_PIE_TARGET,
 TCA_PIE_LIMIT,
 TCA_PIE_TUPDATE,
 TCA_PIE_ALPHA,
 TCA_PIE_BETA,
 TCA_PIE_ECN,
 TCA_PIE_BYTEMODE,
 TCA_PIE_DQ_RATE_ESTIMATOR,
 __TCA_PIE_MAX
};


struct tc_pie_xstats {
 __u64 prob;
 __u32 delay;
 __u32 avg_dq_rate;


 __u32 dq_rate_estimating;
 __u32 packets_in;
 __u32 dropped;
 __u32 overlimit;


 __u32 maxq;
 __u32 ecn_mark;
};


enum {
 TCA_FQ_PIE_UNSPEC,
 TCA_FQ_PIE_LIMIT,
 TCA_FQ_PIE_FLOWS,
 TCA_FQ_PIE_TARGET,
 TCA_FQ_PIE_TUPDATE,
 TCA_FQ_PIE_ALPHA,
 TCA_FQ_PIE_BETA,
 TCA_FQ_PIE_QUANTUM,
 TCA_FQ_PIE_MEMORY_LIMIT,
 TCA_FQ_PIE_ECN_PROB,
 TCA_FQ_PIE_ECN,
 TCA_FQ_PIE_BYTEMODE,
 TCA_FQ_PIE_DQ_RATE_ESTIMATOR,
 __TCA_FQ_PIE_MAX
};


struct tc_fq_pie_xstats {
 __u32 packets_in;
 __u32 dropped;
 __u32 overlimit;
 __u32 overmemory;
 __u32 ecn_mark;
 __u32 new_flow_count;
 __u32 new_flows_len;
 __u32 old_flows_len;
 __u32 memory_usage;
};


struct tc_cbs_qopt {
 __u8 offload;
 __u8 _pad[3];
 __s32 hicredit;
 __s32 locredit;
 __s32 idleslope;
 __s32 sendslope;
};

enum {
 TCA_CBS_UNSPEC,
 TCA_CBS_PARMS,
 __TCA_CBS_MAX,
};





struct tc_etf_qopt {
 __s32 delta;
 __s32 clockid;
 __u32 flags;



};

enum {
 TCA_ETF_UNSPEC,
 TCA_ETF_PARMS,
 __TCA_ETF_MAX,
};





enum {
 TCA_CAKE_UNSPEC,
 TCA_CAKE_PAD,
 TCA_CAKE_BASE_RATE64,
 TCA_CAKE_DIFFSERV_MODE,
 TCA_CAKE_ATM,
 TCA_CAKE_FLOW_MODE,
 TCA_CAKE_OVERHEAD,
 TCA_CAKE_RTT,
 TCA_CAKE_TARGET,
 TCA_CAKE_AUTORATE,
 TCA_CAKE_MEMORY,
 TCA_CAKE_NAT,
 TCA_CAKE_RAW,
 TCA_CAKE_WASH,
 TCA_CAKE_MPU,
 TCA_CAKE_INGRESS,
 TCA_CAKE_ACK_FILTER,
 TCA_CAKE_SPLIT_GSO,
 TCA_CAKE_FWMARK,
 __TCA_CAKE_MAX
};


enum {
 __TCA_CAKE_STATS_INVALID,
 TCA_CAKE_STATS_PAD,
 TCA_CAKE_STATS_CAPACITY_ESTIMATE64,
 TCA_CAKE_STATS_MEMORY_LIMIT,
 TCA_CAKE_STATS_MEMORY_USED,
 TCA_CAKE_STATS_AVG_NETOFF,
 TCA_CAKE_STATS_MIN_NETLEN,
 TCA_CAKE_STATS_MAX_NETLEN,
 TCA_CAKE_STATS_MIN_ADJLEN,
 TCA_CAKE_STATS_MAX_ADJLEN,
 TCA_CAKE_STATS_TIN_STATS,
 TCA_CAKE_STATS_DEFICIT,
 TCA_CAKE_STATS_COBALT_COUNT,
 TCA_CAKE_STATS_DROPPING,
 TCA_CAKE_STATS_DROP_NEXT_US,
 TCA_CAKE_STATS_P_DROP,
 TCA_CAKE_STATS_BLUE_TIMER_US,
 __TCA_CAKE_STATS_MAX
};


enum {
 __TCA_CAKE_TIN_STATS_INVALID,
 TCA_CAKE_TIN_STATS_PAD,
 TCA_CAKE_TIN_STATS_SENT_PACKETS,
 TCA_CAKE_TIN_STATS_SENT_BYTES64,
 TCA_CAKE_TIN_STATS_DROPPED_PACKETS,
 TCA_CAKE_TIN_STATS_DROPPED_BYTES64,
 TCA_CAKE_TIN_STATS_ACKS_DROPPED_PACKETS,
 TCA_CAKE_TIN_STATS_ACKS_DROPPED_BYTES64,
 TCA_CAKE_TIN_STATS_ECN_MARKED_PACKETS,
 TCA_CAKE_TIN_STATS_ECN_MARKED_BYTES64,
 TCA_CAKE_TIN_STATS_BACKLOG_PACKETS,
 TCA_CAKE_TIN_STATS_BACKLOG_BYTES,
 TCA_CAKE_TIN_STATS_THRESHOLD_RATE64,
 TCA_CAKE_TIN_STATS_TARGET_US,
 TCA_CAKE_TIN_STATS_INTERVAL_US,
 TCA_CAKE_TIN_STATS_WAY_INDIRECT_HITS,
 TCA_CAKE_TIN_STATS_WAY_MISSES,
 TCA_CAKE_TIN_STATS_WAY_COLLISIONS,
 TCA_CAKE_TIN_STATS_PEAK_DELAY_US,
 TCA_CAKE_TIN_STATS_AVG_DELAY_US,
 TCA_CAKE_TIN_STATS_BASE_DELAY_US,
 TCA_CAKE_TIN_STATS_SPARSE_FLOWS,
 TCA_CAKE_TIN_STATS_BULK_FLOWS,
 TCA_CAKE_TIN_STATS_UNRESPONSIVE_FLOWS,
 TCA_CAKE_TIN_STATS_MAX_SKBLEN,
 TCA_CAKE_TIN_STATS_FLOW_QUANTUM,
 __TCA_CAKE_TIN_STATS_MAX
};



enum {
 CAKE_FLOW_NONE = 0,
 CAKE_FLOW_SRC_IP,
 CAKE_FLOW_DST_IP,
 CAKE_FLOW_HOSTS,
 CAKE_FLOW_FLOWS,
 CAKE_FLOW_DUAL_SRC,
 CAKE_FLOW_DUAL_DST,
 CAKE_FLOW_TRIPLE,
 CAKE_FLOW_MAX,
};

enum {
 CAKE_DIFFSERV_DIFFSERV3 = 0,
 CAKE_DIFFSERV_DIFFSERV4,
 CAKE_DIFFSERV_DIFFSERV8,
 CAKE_DIFFSERV_BESTEFFORT,
 CAKE_DIFFSERV_PRECEDENCE,
 CAKE_DIFFSERV_MAX
};

enum {
 CAKE_ACK_NONE = 0,
 CAKE_ACK_FILTER,
 CAKE_ACK_AGGRESSIVE,
 CAKE_ACK_MAX
};

enum {
 CAKE_ATM_NONE = 0,
 CAKE_ATM_ATM,
 CAKE_ATM_PTM,
 CAKE_ATM_MAX
};



enum {
 TC_TAPRIO_CMD_SET_GATES = 0x00,
 TC_TAPRIO_CMD_SET_AND_HOLD = 0x01,
 TC_TAPRIO_CMD_SET_AND_RELEASE = 0x02,
};

enum {
 TCA_TAPRIO_SCHED_ENTRY_UNSPEC,
 TCA_TAPRIO_SCHED_ENTRY_INDEX,
 TCA_TAPRIO_SCHED_ENTRY_CMD,
 TCA_TAPRIO_SCHED_ENTRY_GATE_MASK,
 TCA_TAPRIO_SCHED_ENTRY_INTERVAL,
 __TCA_TAPRIO_SCHED_ENTRY_MAX,
};
# 1133 "../include/uapi/linux/pkt_sched.h"
enum {
 TCA_TAPRIO_SCHED_UNSPEC,
 TCA_TAPRIO_SCHED_ENTRY,
 __TCA_TAPRIO_SCHED_MAX,
};
# 1154 "../include/uapi/linux/pkt_sched.h"
enum {
 TCA_TAPRIO_TC_ENTRY_UNSPEC,
 TCA_TAPRIO_TC_ENTRY_INDEX,
 TCA_TAPRIO_TC_ENTRY_MAX_SDU,
 TCA_TAPRIO_TC_ENTRY_FP,


 __TCA_TAPRIO_TC_ENTRY_CNT,
 TCA_TAPRIO_TC_ENTRY_MAX = (__TCA_TAPRIO_TC_ENTRY_CNT - 1)
};

enum {
 TCA_TAPRIO_OFFLOAD_STATS_PAD = 1,
 TCA_TAPRIO_OFFLOAD_STATS_WINDOW_DROPS,
 TCA_TAPRIO_OFFLOAD_STATS_TX_OVERRUNS,


 __TCA_TAPRIO_OFFLOAD_STATS_CNT,
 TCA_TAPRIO_OFFLOAD_STATS_MAX = (__TCA_TAPRIO_OFFLOAD_STATS_CNT - 1)
};

enum {
 TCA_TAPRIO_ATTR_UNSPEC,
 TCA_TAPRIO_ATTR_PRIOMAP,
 TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST,
 TCA_TAPRIO_ATTR_SCHED_BASE_TIME,
 TCA_TAPRIO_ATTR_SCHED_SINGLE_ENTRY,
 TCA_TAPRIO_ATTR_SCHED_CLOCKID,
 TCA_TAPRIO_PAD,
 TCA_TAPRIO_ATTR_ADMIN_SCHED,
 TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME,
 TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION,
 TCA_TAPRIO_ATTR_FLAGS,
 TCA_TAPRIO_ATTR_TXTIME_DELAY,
 TCA_TAPRIO_ATTR_TC_ENTRY,
 __TCA_TAPRIO_ATTR_MAX,
};







enum {
 TCA_ETS_UNSPEC,
 TCA_ETS_NBANDS,
 TCA_ETS_NSTRICT,
 TCA_ETS_QUANTA,
 TCA_ETS_QUANTA_BAND,
 TCA_ETS_PRIOMAP,
 TCA_ETS_PRIOMAP_BAND,
 __TCA_ETS_MAX,
};
# 7 "../include/uapi/linux/pkt_cls.h" 2




enum {
 TCA_ACT_UNSPEC,
 TCA_ACT_KIND,
 TCA_ACT_OPTIONS,
 TCA_ACT_INDEX,
 TCA_ACT_STATS,
 TCA_ACT_PAD,
 TCA_ACT_COOKIE,
 TCA_ACT_FLAGS,
 TCA_ACT_HW_STATS,
 TCA_ACT_USED_HW_STATS,
 TCA_ACT_IN_HW_COUNT,
 __TCA_ACT_MAX
};
# 119 "../include/uapi/linux/pkt_cls.h"
enum tca_id {
 TCA_ID_UNSPEC = 0,
 TCA_ID_POLICE = 1,
 TCA_ID_GACT = 5,
 TCA_ID_IPT = 6,
 TCA_ID_PEDIT = 7,
 TCA_ID_MIRRED = 8,
 TCA_ID_NAT = 9,
 TCA_ID_XT = 10,
 TCA_ID_SKBEDIT = 11,
 TCA_ID_VLAN = 12,
 TCA_ID_BPF = 13,
 TCA_ID_CONNMARK = 14,
 TCA_ID_SKBMOD = 15,
 TCA_ID_CSUM = 16,
 TCA_ID_TUNNEL_KEY = 17,
 TCA_ID_SIMP = 22,
 TCA_ID_IFE = 25,
 TCA_ID_SAMPLE = 26,
 TCA_ID_CTINFO,
 TCA_ID_MPLS,
 TCA_ID_CT,
 TCA_ID_GATE,

 __TCA_ID_MAX = 255
};



struct tc_police {
 __u32 index;
 int action;






 __u32 limit;
 __u32 burst;
 __u32 mtu;
 struct tc_ratespec rate;
 struct tc_ratespec peakrate;
 int refcnt;
 int bindcnt;
 __u32 capab;
};

struct tcf_t {
 __u64 install;
 __u64 lastuse;
 __u64 expires;
 __u64 firstuse;
};

struct tc_cnt {
 int refcnt;
 int bindcnt;
};
# 186 "../include/uapi/linux/pkt_cls.h"
enum {
 TCA_POLICE_UNSPEC,
 TCA_POLICE_TBF,
 TCA_POLICE_RATE,
 TCA_POLICE_PEAKRATE,
 TCA_POLICE_AVRATE,
 TCA_POLICE_RESULT,
 TCA_POLICE_TM,
 TCA_POLICE_PAD,
 TCA_POLICE_RATE64,
 TCA_POLICE_PEAKRATE64,
 TCA_POLICE_PKTRATE64,
 TCA_POLICE_PKTBURST64,
 __TCA_POLICE_MAX

};
# 222 "../include/uapi/linux/pkt_cls.h"
enum {
 TCA_U32_UNSPEC,
 TCA_U32_CLASSID,
 TCA_U32_HASH,
 TCA_U32_LINK,
 TCA_U32_DIVISOR,
 TCA_U32_SEL,
 TCA_U32_POLICE,
 TCA_U32_ACT,
 TCA_U32_INDEV,
 TCA_U32_PCNT,
 TCA_U32_MARK,
 TCA_U32_FLAGS,
 TCA_U32_PAD,
 __TCA_U32_MAX
};



struct tc_u32_key {
 __be32 mask;
 __be32 val;
 int off;
 int offmask;
};

struct tc_u32_sel {
 unsigned char flags;
 unsigned char offshift;
 unsigned char nkeys;

 __be16 offmask;
 __u16 off;
 short offoff;

 short hoff;
 __be32 hmask;
 struct tc_u32_key keys[];
};

struct tc_u32_mark {
 __u32 val;
 __u32 mask;
 __u32 success;
};

struct tc_u32_pcnt {
 __u64 rcnt;
 __u64 rhit;
 __u64 kcnts[];
};
# 285 "../include/uapi/linux/pkt_cls.h"
enum {
 TCA_ROUTE4_UNSPEC,
 TCA_ROUTE4_CLASSID,
 TCA_ROUTE4_TO,
 TCA_ROUTE4_FROM,
 TCA_ROUTE4_IIF,
 TCA_ROUTE4_POLICE,
 TCA_ROUTE4_ACT,
 __TCA_ROUTE4_MAX
};






enum {
 TCA_FW_UNSPEC,
 TCA_FW_CLASSID,
 TCA_FW_POLICE,
 TCA_FW_INDEV,
 TCA_FW_ACT,
 TCA_FW_MASK,
 __TCA_FW_MAX
};





enum {
 FLOW_KEY_SRC,
 FLOW_KEY_DST,
 FLOW_KEY_PROTO,
 FLOW_KEY_PROTO_SRC,
 FLOW_KEY_PROTO_DST,
 FLOW_KEY_IIF,
 FLOW_KEY_PRIORITY,
 FLOW_KEY_MARK,
 FLOW_KEY_NFCT,
 FLOW_KEY_NFCT_SRC,
 FLOW_KEY_NFCT_DST,
 FLOW_KEY_NFCT_PROTO_SRC,
 FLOW_KEY_NFCT_PROTO_DST,
 FLOW_KEY_RTCLASSID,
 FLOW_KEY_SKUID,
 FLOW_KEY_SKGID,
 FLOW_KEY_VLAN_TAG,
 FLOW_KEY_RXHASH,
 __FLOW_KEY_MAX,
};



enum {
 FLOW_MODE_MAP,
 FLOW_MODE_HASH,
};

enum {
 TCA_FLOW_UNSPEC,
 TCA_FLOW_KEYS,
 TCA_FLOW_MODE,
 TCA_FLOW_BASECLASS,
 TCA_FLOW_RSHIFT,
 TCA_FLOW_ADDEND,
 TCA_FLOW_MASK,
 TCA_FLOW_XOR,
 TCA_FLOW_DIVISOR,
 TCA_FLOW_ACT,
 TCA_FLOW_POLICE,
 TCA_FLOW_EMATCHES,
 TCA_FLOW_PERTURB,
 __TCA_FLOW_MAX
};





struct tc_basic_pcnt {
 __u64 rcnt;
 __u64 rhit;
};

enum {
 TCA_BASIC_UNSPEC,
 TCA_BASIC_CLASSID,
 TCA_BASIC_EMATCHES,
 TCA_BASIC_ACT,
 TCA_BASIC_POLICE,
 TCA_BASIC_PCNT,
 TCA_BASIC_PAD,
 __TCA_BASIC_MAX
};






enum {
 TCA_CGROUP_UNSPEC,
 TCA_CGROUP_ACT,
 TCA_CGROUP_POLICE,
 TCA_CGROUP_EMATCHES,
 __TCA_CGROUP_MAX,
};







enum {
 TCA_BPF_UNSPEC,
 TCA_BPF_ACT,
 TCA_BPF_POLICE,
 TCA_BPF_CLASSID,
 TCA_BPF_OPS_LEN,
 TCA_BPF_OPS,
 TCA_BPF_FD,
 TCA_BPF_NAME,
 TCA_BPF_FLAGS,
 TCA_BPF_FLAGS_GEN,
 TCA_BPF_TAG,
 TCA_BPF_ID,
 __TCA_BPF_MAX,
};





enum {
 TCA_FLOWER_UNSPEC,
 TCA_FLOWER_CLASSID,
 TCA_FLOWER_INDEV,
 TCA_FLOWER_ACT,
 TCA_FLOWER_KEY_ETH_DST,
 TCA_FLOWER_KEY_ETH_DST_MASK,
 TCA_FLOWER_KEY_ETH_SRC,
 TCA_FLOWER_KEY_ETH_SRC_MASK,
 TCA_FLOWER_KEY_ETH_TYPE,
 TCA_FLOWER_KEY_IP_PROTO,
 TCA_FLOWER_KEY_IPV4_SRC,
 TCA_FLOWER_KEY_IPV4_SRC_MASK,
 TCA_FLOWER_KEY_IPV4_DST,
 TCA_FLOWER_KEY_IPV4_DST_MASK,
 TCA_FLOWER_KEY_IPV6_SRC,
 TCA_FLOWER_KEY_IPV6_SRC_MASK,
 TCA_FLOWER_KEY_IPV6_DST,
 TCA_FLOWER_KEY_IPV6_DST_MASK,
 TCA_FLOWER_KEY_TCP_SRC,
 TCA_FLOWER_KEY_TCP_DST,
 TCA_FLOWER_KEY_UDP_SRC,
 TCA_FLOWER_KEY_UDP_DST,

 TCA_FLOWER_FLAGS,
 TCA_FLOWER_KEY_VLAN_ID,
 TCA_FLOWER_KEY_VLAN_PRIO,
 TCA_FLOWER_KEY_VLAN_ETH_TYPE,

 TCA_FLOWER_KEY_ENC_KEY_ID,
 TCA_FLOWER_KEY_ENC_IPV4_SRC,
 TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK,
 TCA_FLOWER_KEY_ENC_IPV4_DST,
 TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,
 TCA_FLOWER_KEY_ENC_IPV6_SRC,
 TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK,
 TCA_FLOWER_KEY_ENC_IPV6_DST,
 TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,

 TCA_FLOWER_KEY_TCP_SRC_MASK,
 TCA_FLOWER_KEY_TCP_DST_MASK,
 TCA_FLOWER_KEY_UDP_SRC_MASK,
 TCA_FLOWER_KEY_UDP_DST_MASK,
 TCA_FLOWER_KEY_SCTP_SRC_MASK,
 TCA_FLOWER_KEY_SCTP_DST_MASK,

 TCA_FLOWER_KEY_SCTP_SRC,
 TCA_FLOWER_KEY_SCTP_DST,

 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT,
 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK,
 TCA_FLOWER_KEY_ENC_UDP_DST_PORT,
 TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK,

 TCA_FLOWER_KEY_FLAGS,
 TCA_FLOWER_KEY_FLAGS_MASK,

 TCA_FLOWER_KEY_ICMPV4_CODE,
 TCA_FLOWER_KEY_ICMPV4_CODE_MASK,
 TCA_FLOWER_KEY_ICMPV4_TYPE,
 TCA_FLOWER_KEY_ICMPV4_TYPE_MASK,
 TCA_FLOWER_KEY_ICMPV6_CODE,
 TCA_FLOWER_KEY_ICMPV6_CODE_MASK,
 TCA_FLOWER_KEY_ICMPV6_TYPE,
 TCA_FLOWER_KEY_ICMPV6_TYPE_MASK,

 TCA_FLOWER_KEY_ARP_SIP,
 TCA_FLOWER_KEY_ARP_SIP_MASK,
 TCA_FLOWER_KEY_ARP_TIP,
 TCA_FLOWER_KEY_ARP_TIP_MASK,
 TCA_FLOWER_KEY_ARP_OP,
 TCA_FLOWER_KEY_ARP_OP_MASK,
 TCA_FLOWER_KEY_ARP_SHA,
 TCA_FLOWER_KEY_ARP_SHA_MASK,
 TCA_FLOWER_KEY_ARP_THA,
 TCA_FLOWER_KEY_ARP_THA_MASK,

 TCA_FLOWER_KEY_MPLS_TTL,
 TCA_FLOWER_KEY_MPLS_BOS,
 TCA_FLOWER_KEY_MPLS_TC,
 TCA_FLOWER_KEY_MPLS_LABEL,

 TCA_FLOWER_KEY_TCP_FLAGS,
 TCA_FLOWER_KEY_TCP_FLAGS_MASK,

 TCA_FLOWER_KEY_IP_TOS,
 TCA_FLOWER_KEY_IP_TOS_MASK,
 TCA_FLOWER_KEY_IP_TTL,
 TCA_FLOWER_KEY_IP_TTL_MASK,

 TCA_FLOWER_KEY_CVLAN_ID,
 TCA_FLOWER_KEY_CVLAN_PRIO,
 TCA_FLOWER_KEY_CVLAN_ETH_TYPE,

 TCA_FLOWER_KEY_ENC_IP_TOS,
 TCA_FLOWER_KEY_ENC_IP_TOS_MASK,
 TCA_FLOWER_KEY_ENC_IP_TTL,
 TCA_FLOWER_KEY_ENC_IP_TTL_MASK,

 TCA_FLOWER_KEY_ENC_OPTS,
 TCA_FLOWER_KEY_ENC_OPTS_MASK,

 TCA_FLOWER_IN_HW_COUNT,

 TCA_FLOWER_KEY_PORT_SRC_MIN,
 TCA_FLOWER_KEY_PORT_SRC_MAX,
 TCA_FLOWER_KEY_PORT_DST_MIN,
 TCA_FLOWER_KEY_PORT_DST_MAX,

 TCA_FLOWER_KEY_CT_STATE,
 TCA_FLOWER_KEY_CT_STATE_MASK,
 TCA_FLOWER_KEY_CT_ZONE,
 TCA_FLOWER_KEY_CT_ZONE_MASK,
 TCA_FLOWER_KEY_CT_MARK,
 TCA_FLOWER_KEY_CT_MARK_MASK,
 TCA_FLOWER_KEY_CT_LABELS,
 TCA_FLOWER_KEY_CT_LABELS_MASK,

 TCA_FLOWER_KEY_MPLS_OPTS,

 TCA_FLOWER_KEY_HASH,
 TCA_FLOWER_KEY_HASH_MASK,

 TCA_FLOWER_KEY_NUM_OF_VLANS,

 TCA_FLOWER_KEY_PPPOE_SID,
 TCA_FLOWER_KEY_PPP_PROTO,

 TCA_FLOWER_KEY_L2TPV3_SID,

 TCA_FLOWER_L2_MISS,

 TCA_FLOWER_KEY_CFM,

 TCA_FLOWER_KEY_SPI,
 TCA_FLOWER_KEY_SPI_MASK,

 TCA_FLOWER_KEY_ENC_FLAGS,
 TCA_FLOWER_KEY_ENC_FLAGS_MASK,

 __TCA_FLOWER_MAX,
};



enum {
 TCA_FLOWER_KEY_CT_FLAGS_NEW = 1 << 0,
 TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED = 1 << 1,
 TCA_FLOWER_KEY_CT_FLAGS_RELATED = 1 << 2,
 TCA_FLOWER_KEY_CT_FLAGS_TRACKED = 1 << 3,
 TCA_FLOWER_KEY_CT_FLAGS_INVALID = 1 << 4,
 TCA_FLOWER_KEY_CT_FLAGS_REPLY = 1 << 5,
 __TCA_FLOWER_KEY_CT_FLAGS_MAX,
};

enum {
 TCA_FLOWER_KEY_ENC_OPTS_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPTS_GENEVE,



 TCA_FLOWER_KEY_ENC_OPTS_VXLAN,



 TCA_FLOWER_KEY_ENC_OPTS_ERSPAN,



 TCA_FLOWER_KEY_ENC_OPTS_GTP,



 TCA_FLOWER_KEY_ENC_OPTS_PFCP,



 __TCA_FLOWER_KEY_ENC_OPTS_MAX,
};



enum {
 TCA_FLOWER_KEY_ENC_OPT_GENEVE_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS,
 TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE,
 TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA,

 __TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX,
};




enum {
 TCA_FLOWER_KEY_ENC_OPT_VXLAN_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP,
 __TCA_FLOWER_KEY_ENC_OPT_VXLAN_MAX,
};




enum {
 TCA_FLOWER_KEY_ENC_OPT_ERSPAN_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPT_ERSPAN_VER,
 TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX,
 TCA_FLOWER_KEY_ENC_OPT_ERSPAN_DIR,
 TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID,
 __TCA_FLOWER_KEY_ENC_OPT_ERSPAN_MAX,
};




enum {
 TCA_FLOWER_KEY_ENC_OPT_GTP_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE,
 TCA_FLOWER_KEY_ENC_OPT_GTP_QFI,

 __TCA_FLOWER_KEY_ENC_OPT_GTP_MAX,
};




enum {
 TCA_FLOWER_KEY_ENC_OPT_PFCP_UNSPEC,
 TCA_FLOWER_KEY_ENC_OPT_PFCP_TYPE,
 TCA_FLOWER_KEY_ENC_OPT_PFCP_SEID,
 __TCA_FLOWER_KEY_ENC_OPT_PFCP_MAX,
};




enum {
 TCA_FLOWER_KEY_MPLS_OPTS_UNSPEC,
 TCA_FLOWER_KEY_MPLS_OPTS_LSE,
 __TCA_FLOWER_KEY_MPLS_OPTS_MAX,
};



enum {
 TCA_FLOWER_KEY_MPLS_OPT_LSE_UNSPEC,
 TCA_FLOWER_KEY_MPLS_OPT_LSE_DEPTH,
 TCA_FLOWER_KEY_MPLS_OPT_LSE_TTL,
 TCA_FLOWER_KEY_MPLS_OPT_LSE_BOS,
 TCA_FLOWER_KEY_MPLS_OPT_LSE_TC,
 TCA_FLOWER_KEY_MPLS_OPT_LSE_LABEL,
 __TCA_FLOWER_KEY_MPLS_OPT_LSE_MAX,
};




enum {
 TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT = (1 << 0),
 TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST = (1 << 1),
 TCA_FLOWER_KEY_FLAGS_TUNNEL_CSUM = (1 << 2),
 TCA_FLOWER_KEY_FLAGS_TUNNEL_DONT_FRAGMENT = (1 << 3),
 TCA_FLOWER_KEY_FLAGS_TUNNEL_OAM = (1 << 4),
 TCA_FLOWER_KEY_FLAGS_TUNNEL_CRIT_OPT = (1 << 5),
 __TCA_FLOWER_KEY_FLAGS_MAX,
};



enum {
 TCA_FLOWER_KEY_CFM_OPT_UNSPEC,
 TCA_FLOWER_KEY_CFM_MD_LEVEL,
 TCA_FLOWER_KEY_CFM_OPCODE,
 __TCA_FLOWER_KEY_CFM_OPT_MAX,
};







struct tc_matchall_pcnt {
 __u64 rhit;
};

enum {
 TCA_MATCHALL_UNSPEC,
 TCA_MATCHALL_CLASSID,
 TCA_MATCHALL_ACT,
 TCA_MATCHALL_FLAGS,
 TCA_MATCHALL_PCNT,
 TCA_MATCHALL_PAD,
 __TCA_MATCHALL_MAX,
};





struct tcf_ematch_tree_hdr {
 __u16 nmatches;
 __u16 progid;
};

enum {
 TCA_EMATCH_TREE_UNSPEC,
 TCA_EMATCH_TREE_HDR,
 TCA_EMATCH_TREE_LIST,
 __TCA_EMATCH_TREE_MAX
};


struct tcf_ematch_hdr {
 __u16 matchid;
 __u16 kind;
 __u16 flags;
 __u16 pad;
};
# 763 "../include/uapi/linux/pkt_cls.h"
enum {
 TCF_LAYER_LINK,
 TCF_LAYER_NETWORK,
 TCF_LAYER_TRANSPORT,
 __TCF_LAYER_MAX
};
# 787 "../include/uapi/linux/pkt_cls.h"
enum {
 TCF_EM_PROG_TC
};

enum {
 TCF_EM_OPND_EQ,
 TCF_EM_OPND_GT,
 TCF_EM_OPND_LT
};
# 11 "../include/net/flow_dissector.h" 2

struct bpf_prog;
struct net;
struct sk_buff;
# 23 "../include/net/flow_dissector.h"
struct flow_dissector_key_control {
 u16 thoff;
 u16 addr_type;
 u32 flags;
};




enum flow_dissector_ctrl_flags {
 FLOW_DIS_IS_FRAGMENT = TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT,
 FLOW_DIS_FIRST_FRAG = TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST,
 FLOW_DIS_F_TUNNEL_CSUM = TCA_FLOWER_KEY_FLAGS_TUNNEL_CSUM,
 FLOW_DIS_F_TUNNEL_DONT_FRAGMENT = TCA_FLOWER_KEY_FLAGS_TUNNEL_DONT_FRAGMENT,
 FLOW_DIS_F_TUNNEL_OAM = TCA_FLOWER_KEY_FLAGS_TUNNEL_OAM,
 FLOW_DIS_F_TUNNEL_CRIT_OPT = TCA_FLOWER_KEY_FLAGS_TUNNEL_CRIT_OPT,


 FLOW_DIS_ENCAPSULATION = ((__TCA_FLOWER_KEY_FLAGS_MAX - 1) << 1),
};

enum flow_dissect_ret {
 FLOW_DISSECT_RET_OUT_GOOD,
 FLOW_DISSECT_RET_OUT_BAD,
 FLOW_DISSECT_RET_PROTO_AGAIN,
 FLOW_DISSECT_RET_IPPROTO_AGAIN,
 FLOW_DISSECT_RET_CONTINUE,
};







struct flow_dissector_key_basic {
 __be16 n_proto;
 u8 ip_proto;
 u8 padding;
};

struct flow_dissector_key_tags {
 u32 flow_label;
};

struct flow_dissector_key_vlan {
 union {
  struct {
   u16 vlan_id:12,
    vlan_dei:1,
    vlan_priority:3;
  };
  __be16 vlan_tci;
 };
 __be16 vlan_tpid;
 __be16 vlan_eth_type;
 u16 padding;
};

struct flow_dissector_mpls_lse {
 u32 mpls_ttl:8,
  mpls_bos:1,
  mpls_tc:3,
  mpls_label:20;
};


struct flow_dissector_key_mpls {
 struct flow_dissector_mpls_lse ls[7];
 u8 used_lses;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dissector_set_mpls_lse(struct flow_dissector_key_mpls *mpls,
       int lse_index)
{
 mpls->used_lses |= 1 << lse_index;
}
# 108 "../include/net/flow_dissector.h"
struct flow_dissector_key_enc_opts {
 u8 data[255];


 u8 len;
 u32 dst_opt_type;
};

struct flow_dissector_key_keyid {
 __be32 keyid;
};






struct flow_dissector_key_ipv4_addrs {

 __be32 src;
 __be32 dst;
};






struct flow_dissector_key_ipv6_addrs {

 struct in6_addr src;
 struct in6_addr dst;
};





struct flow_dissector_key_tipc {
 __be32 key;
};







struct flow_dissector_key_addrs {
 union {
  struct flow_dissector_key_ipv4_addrs v4addrs;
  struct flow_dissector_key_ipv6_addrs v6addrs;
  struct flow_dissector_key_tipc tipckey;
 };
};
# 172 "../include/net/flow_dissector.h"
struct flow_dissector_key_arp {
 __u32 sip;
 __u32 tip;
 __u8 op;
 unsigned char sha[6];
 unsigned char tha[6];
};







struct flow_dissector_key_ports {
 union {
  __be32 ports;
  struct {
   __be16 src;
   __be16 dst;
  };
 };
};







struct flow_dissector_key_ports_range {
 union {
  struct flow_dissector_key_ports tp;
  struct {
   struct flow_dissector_key_ports tp_min;
   struct flow_dissector_key_ports tp_max;
  };
 };
};







struct flow_dissector_key_icmp {
 struct {
  u8 type;
  u8 code;
 };
 u16 id;
};






struct flow_dissector_key_eth_addrs {

 unsigned char dst[6];
 unsigned char src[6];
};





struct flow_dissector_key_tcp {
 __be16 flags;
};






struct flow_dissector_key_ip {
 __u8 tos;
 __u8 ttl;
};







struct flow_dissector_key_meta {
 int ingress_ifindex;
 u16 ingress_iftype;
 u8 l2_miss;
};
# 274 "../include/net/flow_dissector.h"
struct flow_dissector_key_ct {
 u16 ct_state;
 u16 ct_zone;
 u32 ct_mark;
 u32 ct_labels[4];
};





struct flow_dissector_key_hash {
 u32 hash;
};





struct flow_dissector_key_num_of_vlans {
 u8 num_of_vlans;
};







struct flow_dissector_key_pppoe {
 __be16 session_id;
 __be16 ppp_proto;
 __be16 type;
};





struct flow_dissector_key_l2tpv3 {
 __be32 session_id;
};





struct flow_dissector_key_ipsec {
 __be32 spi;
};
# 337 "../include/net/flow_dissector.h"
struct flow_dissector_key_cfm {
 u8 mdl_ver;
 u8 opcode;
};




enum flow_dissector_key_id {
 FLOW_DISSECTOR_KEY_CONTROL,
 FLOW_DISSECTOR_KEY_BASIC,
 FLOW_DISSECTOR_KEY_IPV4_ADDRS,
 FLOW_DISSECTOR_KEY_IPV6_ADDRS,
 FLOW_DISSECTOR_KEY_PORTS,
 FLOW_DISSECTOR_KEY_PORTS_RANGE,
 FLOW_DISSECTOR_KEY_ICMP,
 FLOW_DISSECTOR_KEY_ETH_ADDRS,
 FLOW_DISSECTOR_KEY_TIPC,
 FLOW_DISSECTOR_KEY_ARP,
 FLOW_DISSECTOR_KEY_VLAN,
 FLOW_DISSECTOR_KEY_FLOW_LABEL,
 FLOW_DISSECTOR_KEY_GRE_KEYID,
 FLOW_DISSECTOR_KEY_MPLS_ENTROPY,
 FLOW_DISSECTOR_KEY_ENC_KEYID,
 FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
 FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS,
 FLOW_DISSECTOR_KEY_ENC_CONTROL,
 FLOW_DISSECTOR_KEY_ENC_PORTS,
 FLOW_DISSECTOR_KEY_MPLS,
 FLOW_DISSECTOR_KEY_TCP,
 FLOW_DISSECTOR_KEY_IP,
 FLOW_DISSECTOR_KEY_CVLAN,
 FLOW_DISSECTOR_KEY_ENC_IP,
 FLOW_DISSECTOR_KEY_ENC_OPTS,
 FLOW_DISSECTOR_KEY_META,
 FLOW_DISSECTOR_KEY_CT,
 FLOW_DISSECTOR_KEY_HASH,
 FLOW_DISSECTOR_KEY_NUM_OF_VLANS,
 FLOW_DISSECTOR_KEY_PPPOE,
 FLOW_DISSECTOR_KEY_L2TPV3,
 FLOW_DISSECTOR_KEY_CFM,
 FLOW_DISSECTOR_KEY_IPSEC,

 FLOW_DISSECTOR_KEY_MAX,
};






struct flow_dissector_key {
 enum flow_dissector_key_id key_id;
 size_t offset;

};

struct flow_dissector {
 unsigned long long used_keys;

 unsigned short int offset[FLOW_DISSECTOR_KEY_MAX];
};

struct flow_keys_basic {
 struct flow_dissector_key_control control;
 struct flow_dissector_key_basic basic;
};

struct flow_keys {
 struct flow_dissector_key_control control;

 struct flow_dissector_key_basic basic __attribute__((__aligned__(__alignof__(u64))));
 struct flow_dissector_key_tags tags;
 struct flow_dissector_key_vlan vlan;
 struct flow_dissector_key_vlan cvlan;
 struct flow_dissector_key_keyid keyid;
 struct flow_dissector_key_ports ports;
 struct flow_dissector_key_icmp icmp;

 struct flow_dissector_key_addrs addrs;
};




__be32 flow_get_u32_src(const struct flow_keys *flow);
__be32 flow_get_u32_dst(const struct flow_keys *flow);

extern struct flow_dissector flow_keys_dissector;
extern struct flow_dissector flow_keys_basic_dissector;
# 436 "../include/net/flow_dissector.h"
struct flow_keys_digest {
 u8 data[16];
};

void make_flow_keys_digest(struct flow_keys_digest *digest,
      const struct flow_keys *flow);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool flow_keys_have_l4(const struct flow_keys *keys)
{
 return (keys->ports.ports || keys->tags.flow_label);
}

u32 flow_hash_from_keys(struct flow_keys *keys);
u32 flow_hash_from_keys_seed(struct flow_keys *keys,
        const siphash_key_t *keyval);
void skb_flow_get_icmp_tci(const struct sk_buff *skb,
      struct flow_dissector_key_icmp *key_icmp,
      const void *data, int thoff, int hlen);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dissector_uses_key(const struct flow_dissector *flow_dissector,
          enum flow_dissector_key_id key_id)
{
 return flow_dissector->used_keys & (1ULL << key_id);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_flow_dissector_target(struct flow_dissector *flow_dissector,
           enum flow_dissector_key_id key_id,
           void *target_container)
{
 return ((char *)target_container) + flow_dissector->offset[key_id];
}

struct bpf_flow_dissector {
 struct bpf_flow_keys *flow_keys;
 const struct sk_buff *skb;
 const void *data;
 const void *data_end;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
flow_dissector_init_keys(struct flow_dissector_key_control *key_control,
    struct flow_dissector_key_basic *key_basic)
{
 memset(key_control, 0, sizeof(*key_control));
 memset(key_basic, 0, sizeof(*key_basic));
}


int flow_dissector_bpf_prog_attach_check(struct net *net,
      struct bpf_prog *prog);
# 31 "../include/linux/skbuff.h" 2

# 1 "../include/uapi/linux/if_packet.h" 1







struct sockaddr_pkt {
 unsigned short spkt_family;
 unsigned char spkt_device[14];
 __be16 spkt_protocol;
};

struct sockaddr_ll {
 unsigned short sll_family;
 __be16 sll_protocol;
 int sll_ifindex;
 unsigned short sll_hatype;
 unsigned char sll_pkttype;
 unsigned char sll_halen;
 unsigned char sll_addr[8];
};
# 77 "../include/uapi/linux/if_packet.h"
struct tpacket_stats {
 unsigned int tp_packets;
 unsigned int tp_drops;
};

struct tpacket_stats_v3 {
 unsigned int tp_packets;
 unsigned int tp_drops;
 unsigned int tp_freeze_q_cnt;
};

struct tpacket_rollover_stats {
 __u64 __attribute__((aligned(8))) tp_all;
 __u64 __attribute__((aligned(8))) tp_huge;
 __u64 __attribute__((aligned(8))) tp_failed;
};

union tpacket_stats_u {
 struct tpacket_stats stats1;
 struct tpacket_stats_v3 stats3;
};

struct tpacket_auxdata {
 __u32 tp_status;
 __u32 tp_len;
 __u32 tp_snaplen;
 __u16 tp_mac;
 __u16 tp_net;
 __u16 tp_vlan_tci;
 __u16 tp_vlan_tpid;
};
# 135 "../include/uapi/linux/if_packet.h"
struct tpacket_hdr {
 unsigned long tp_status;
 unsigned int tp_len;
 unsigned int tp_snaplen;
 unsigned short tp_mac;
 unsigned short tp_net;
 unsigned int tp_sec;
 unsigned int tp_usec;
};





struct tpacket2_hdr {
 __u32 tp_status;
 __u32 tp_len;
 __u32 tp_snaplen;
 __u16 tp_mac;
 __u16 tp_net;
 __u32 tp_sec;
 __u32 tp_nsec;
 __u16 tp_vlan_tci;
 __u16 tp_vlan_tpid;
 __u8 tp_padding[4];
};

struct tpacket_hdr_variant1 {
 __u32 tp_rxhash;
 __u32 tp_vlan_tci;
 __u16 tp_vlan_tpid;
 __u16 tp_padding;
};

struct tpacket3_hdr {
 __u32 tp_next_offset;
 __u32 tp_sec;
 __u32 tp_nsec;
 __u32 tp_snaplen;
 __u32 tp_len;
 __u32 tp_status;
 __u16 tp_mac;
 __u16 tp_net;

 union {
  struct tpacket_hdr_variant1 hv1;
 };
 __u8 tp_padding[8];
};

struct tpacket_bd_ts {
 unsigned int ts_sec;
 union {
  unsigned int ts_usec;
  unsigned int ts_nsec;
 };
};

struct tpacket_hdr_v1 {
 __u32 block_status;
 __u32 num_pkts;
 __u32 offset_to_first_pkt;




 __u32 blk_len;
# 212 "../include/uapi/linux/if_packet.h"
 __u64 __attribute__((aligned(8))) seq_num;
# 239 "../include/uapi/linux/if_packet.h"
 struct tpacket_bd_ts ts_first_pkt, ts_last_pkt;
};

union tpacket_bd_header_u {
 struct tpacket_hdr_v1 bh1;
};

struct tpacket_block_desc {
 __u32 version;
 __u32 offset_to_priv;
 union tpacket_bd_header_u hdr;
};




enum tpacket_versions {
 TPACKET_V1,
 TPACKET_V2,
 TPACKET_V3
};
# 274 "../include/uapi/linux/if_packet.h"
struct tpacket_req {
 unsigned int tp_block_size;
 unsigned int tp_block_nr;
 unsigned int tp_frame_size;
 unsigned int tp_frame_nr;
};

struct tpacket_req3 {
 unsigned int tp_block_size;
 unsigned int tp_block_nr;
 unsigned int tp_frame_size;
 unsigned int tp_frame_nr;
 unsigned int tp_retire_blk_tov;
 unsigned int tp_sizeof_priv;
 unsigned int tp_feature_req_word;
};

union tpacket_req_u {
 struct tpacket_req req;
 struct tpacket_req3 req3;
};

struct packet_mreq {
 int mr_ifindex;
 unsigned short mr_type;
 unsigned short mr_alen;
 unsigned char mr_address[8];
};

struct fanout_args {

 __u16 id;
 __u16 type_flags;




 __u32 max_num_members;
};
# 33 "../include/linux/skbuff.h" 2

# 1 "../include/net/flow.h" 1
# 16 "../include/net/flow.h"
struct flow_keys;
# 26 "../include/net/flow.h"
struct flowi_tunnel {
 __be64 tun_id;
};

struct flowi_common {
 int flowic_oif;
 int flowic_iif;
 int flowic_l3mdev;
 __u32 flowic_mark;
 __u8 flowic_tos;
 __u8 flowic_scope;
 __u8 flowic_proto;
 __u8 flowic_flags;


 __u32 flowic_secid;
 kuid_t flowic_uid;
 __u32 flowic_multipath_hash;
 struct flowi_tunnel flowic_tun_key;
};

union flowi_uli {
 struct {
  __be16 dport;
  __be16 sport;
 } ports;

 struct {
  __u8 type;
  __u8 code;
 } icmpt;

 __be32 gre_key;

 struct {
  __u8 type;
 } mht;
};

struct flowi4 {
 struct flowi_common __fl_common;
# 81 "../include/net/flow.h"
 __be32 saddr;
 __be32 daddr;

 union flowi_uli uli;






} __attribute__((__aligned__(32/8)));

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flowi4_init_output(struct flowi4 *fl4, int oif,
          __u32 mark, __u8 tos, __u8 scope,
          __u8 proto, __u8 flags,
          __be32 daddr, __be32 saddr,
          __be16 dport, __be16 sport,
          kuid_t uid)
{
 fl4->__fl_common.flowic_oif = oif;
 fl4->__fl_common.flowic_iif = 1;
 fl4->__fl_common.flowic_l3mdev = 0;
 fl4->__fl_common.flowic_mark = mark;
 fl4->__fl_common.flowic_tos = tos;
 fl4->__fl_common.flowic_scope = scope;
 fl4->__fl_common.flowic_proto = proto;
 fl4->__fl_common.flowic_flags = flags;
 fl4->__fl_common.flowic_secid = 0;
 fl4->__fl_common.flowic_tun_key.tun_id = 0;
 fl4->__fl_common.flowic_uid = uid;
 fl4->daddr = daddr;
 fl4->saddr = saddr;
 fl4->uli.ports.dport = dport;
 fl4->uli.ports.sport = sport;
 fl4->__fl_common.flowic_multipath_hash = 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void flowi4_update_output(struct flowi4 *fl4, int oif,
     __be32 daddr, __be32 saddr)
{
 fl4->__fl_common.flowic_oif = oif;
 fl4->daddr = daddr;
 fl4->saddr = saddr;
}


struct flowi6 {
 struct flowi_common __fl_common;
# 140 "../include/net/flow.h"
 struct in6_addr daddr;
 struct in6_addr saddr;

 __be32 flowlabel;
 union flowi_uli uli;






 __u32 mp_hash;
} __attribute__((__aligned__(32/8)));

struct flowi {
 union {
  struct flowi_common __fl_common;
  struct flowi4 ip4;
  struct flowi6 ip6;
 } u;
# 171 "../include/net/flow.h"
} __attribute__((__aligned__(32/8)));

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct flowi *flowi4_to_flowi(struct flowi4 *fl4)
{
 return ({ void *__mptr = (void *)(fl4); _Static_assert(__builtin_types_compatible_p(typeof(*(fl4)), typeof(((struct flowi *)0)->u.ip4)) || __builtin_types_compatible_p(typeof(*(fl4)), typeof(void)), "pointer type mismatch in container_of()"); ((struct flowi *)(__mptr - __builtin_offsetof(struct flowi, u.ip4))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4)
{
 return &(fl4->__fl_common);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
{
 return ({ void *__mptr = (void *)(fl6); _Static_assert(__builtin_types_compatible_p(typeof(*(fl6)), typeof(((struct flowi *)0)->u.ip6)) || __builtin_types_compatible_p(typeof(*(fl6)), typeof(void)), "pointer type mismatch in container_of()"); ((struct flowi *)(__mptr - __builtin_offsetof(struct flowi, u.ip6))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6)
{
 return &(fl6->__fl_common);
}

__u32 __get_hash_from_flowi6(const struct flowi6 *fl6, struct flow_keys *keys);
# 35 "../include/linux/skbuff.h" 2



# 1 "../include/net/net_debug.h" 1







struct net_device;

__attribute__((__format__(printf, 3, 4))) __attribute__((__cold__))
void netdev_printk(const char *level, const struct net_device *dev,
     const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_emerg(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_alert(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_crit(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_err(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_warn(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_notice(const struct net_device *dev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void netdev_info(const struct net_device *dev, const char *format, ...);
# 39 "../include/linux/skbuff.h" 2
# 1 "../include/net/dropreason-core.h" 1
# 102 "../include/net/dropreason-core.h"
enum skb_drop_reason {



 SKB_NOT_DROPPED_YET = 0,

 SKB_CONSUMED,

 SKB_DROP_REASON_NOT_SPECIFIED,







 SKB_DROP_REASON_NO_SOCKET,

 SKB_DROP_REASON_PKT_TOO_SMALL,

 SKB_DROP_REASON_TCP_CSUM,

 SKB_DROP_REASON_SOCKET_FILTER,

 SKB_DROP_REASON_UDP_CSUM,

 SKB_DROP_REASON_NETFILTER_DROP,




 SKB_DROP_REASON_OTHERHOST,

 SKB_DROP_REASON_IP_CSUM,




 SKB_DROP_REASON_IP_INHDR,




 SKB_DROP_REASON_IP_RPFILTER,




 SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST,

 SKB_DROP_REASON_XFRM_POLICY,

 SKB_DROP_REASON_IP_NOPROTO,

 SKB_DROP_REASON_SOCKET_RCVBUFF,




 SKB_DROP_REASON_PROTO_MEM,




 SKB_DROP_REASON_TCP_AUTH_HDR,




 SKB_DROP_REASON_TCP_MD5NOTFOUND,




 SKB_DROP_REASON_TCP_MD5UNEXPECTED,




 SKB_DROP_REASON_TCP_MD5FAILURE,




 SKB_DROP_REASON_TCP_AONOTFOUND,




 SKB_DROP_REASON_TCP_AOUNEXPECTED,




 SKB_DROP_REASON_TCP_AOKEYNOTFOUND,




 SKB_DROP_REASON_TCP_AOFAILURE,




 SKB_DROP_REASON_SOCKET_BACKLOG,

 SKB_DROP_REASON_TCP_FLAGS,




 SKB_DROP_REASON_TCP_ABORT_ON_DATA,




 SKB_DROP_REASON_TCP_ZEROWINDOW,





 SKB_DROP_REASON_TCP_OLD_DATA,





 SKB_DROP_REASON_TCP_OVERWINDOW,




 SKB_DROP_REASON_TCP_OFOMERGE,




 SKB_DROP_REASON_TCP_RFC7323_PAWS,

 SKB_DROP_REASON_TCP_OLD_SEQUENCE,

 SKB_DROP_REASON_TCP_INVALID_SEQUENCE,





 SKB_DROP_REASON_TCP_INVALID_ACK_SEQUENCE,

 SKB_DROP_REASON_TCP_RESET,




 SKB_DROP_REASON_TCP_INVALID_SYN,

 SKB_DROP_REASON_TCP_CLOSE,

 SKB_DROP_REASON_TCP_FASTOPEN,

 SKB_DROP_REASON_TCP_OLD_ACK,

 SKB_DROP_REASON_TCP_TOO_OLD_ACK,




 SKB_DROP_REASON_TCP_ACK_UNSENT_DATA,

 SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE,

 SKB_DROP_REASON_TCP_OFO_DROP,

 SKB_DROP_REASON_IP_OUTNOROUTES,




 SKB_DROP_REASON_BPF_CGROUP_EGRESS,

 SKB_DROP_REASON_IPV6DISABLED,

 SKB_DROP_REASON_NEIGH_CREATEFAIL,

 SKB_DROP_REASON_NEIGH_FAILED,

 SKB_DROP_REASON_NEIGH_QUEUEFULL,

 SKB_DROP_REASON_NEIGH_DEAD,

 SKB_DROP_REASON_TC_EGRESS,

 SKB_DROP_REASON_SECURITY_HOOK,




 SKB_DROP_REASON_QDISC_DROP,





 SKB_DROP_REASON_CPU_BACKLOG,

 SKB_DROP_REASON_XDP,

 SKB_DROP_REASON_TC_INGRESS,

 SKB_DROP_REASON_UNHANDLED_PROTO,

 SKB_DROP_REASON_SKB_CSUM,

 SKB_DROP_REASON_SKB_GSO_SEG,




 SKB_DROP_REASON_SKB_UCOPY_FAULT,

 SKB_DROP_REASON_DEV_HDR,






 SKB_DROP_REASON_DEV_READY,

 SKB_DROP_REASON_FULL_RING,

 SKB_DROP_REASON_NOMEM,





 SKB_DROP_REASON_HDR_TRUNC,




 SKB_DROP_REASON_TAP_FILTER,




 SKB_DROP_REASON_TAP_TXFILTER,

 SKB_DROP_REASON_ICMP_CSUM,




 SKB_DROP_REASON_INVALID_PROTO,




 SKB_DROP_REASON_IP_INADDRERRORS,




 SKB_DROP_REASON_IP_INNOROUTES,




 SKB_DROP_REASON_PKT_TOO_BIG,

 SKB_DROP_REASON_DUP_FRAG,

 SKB_DROP_REASON_FRAG_REASM_TIMEOUT,




 SKB_DROP_REASON_FRAG_TOO_FAR,




 SKB_DROP_REASON_TCP_MINTTL,

 SKB_DROP_REASON_IPV6_BAD_EXTHDR,

 SKB_DROP_REASON_IPV6_NDISC_FRAG,

 SKB_DROP_REASON_IPV6_NDISC_HOP_LIMIT,

 SKB_DROP_REASON_IPV6_NDISC_BAD_CODE,

 SKB_DROP_REASON_IPV6_NDISC_BAD_OPTIONS,




 SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST,

 SKB_DROP_REASON_QUEUE_PURGE,




 SKB_DROP_REASON_TC_COOKIE_ERROR,




 SKB_DROP_REASON_PACKET_SOCK_ERROR,

 SKB_DROP_REASON_TC_CHAIN_NOTFOUND,




 SKB_DROP_REASON_TC_RECLASSIFY_LOOP,




 SKB_DROP_REASON_MAX,





 SKB_DROP_REASON_SUBSYS_MASK = 0xffff0000,
};
# 40 "../include/linux/skbuff.h" 2
# 1 "../include/net/netmem.h" 1
# 20 "../include/net/netmem.h"
typedef unsigned long netmem_ref;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *netmem_to_page(netmem_ref netmem)
{
 return ( struct page *)netmem;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netmem_ref page_to_netmem(struct page *page)
{
 return ( netmem_ref)page;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int netmem_ref_count(netmem_ref netmem)
{
 return page_ref_count(netmem_to_page(netmem));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long netmem_to_pfn(netmem_ref netmem)
{
 return ((unsigned long)((netmem_to_page(netmem)) - mem_map) + (__phys_offset >> 14));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netmem_ref netmem_compound_head(netmem_ref netmem)
{
 return page_to_netmem(((typeof(netmem_to_page(netmem)))_compound_head(netmem_to_page(netmem))));
}
# 41 "../include/linux/skbuff.h" 2
# 276 "../include/linux/skbuff.h"
struct ahash_request;
struct net_device;
struct scatterlist;
struct pipe_inode_info;
struct iov_iter;
struct napi_struct;
struct bpf_prog;
union bpf_attr;
struct skb_ext;
struct ts_config;
# 322 "../include/linux/skbuff.h"
struct tc_skb_ext {
 union {
  u64 act_miss_cookie;
  __u32 chain;
 };
 __u16 mru;
 __u16 zone;
 u8 post_ct:1;
 u8 post_ct_snat:1;
 u8 post_ct_dnat:1;
 u8 act_miss:1;
 u8 l2_miss:1;
};


struct sk_buff_head {

 union { struct { struct sk_buff *next; struct sk_buff *prev; } ; struct sk_buff_list { struct sk_buff *next; struct sk_buff *prev; } list; } ;




 __u32 qlen;
 spinlock_t lock;
};

struct sk_buff;
# 361 "../include/linux/skbuff.h"
typedef struct skb_frag {
 netmem_ref netmem;
 unsigned int len;
 unsigned int offset;
} skb_frag_t;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_frag_size(const skb_frag_t *frag)
{
 return frag->len;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_size_set(skb_frag_t *frag, unsigned int size)
{
 frag->len = size;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_size_add(skb_frag_t *frag, int delta)
{
 frag->len += delta;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_size_sub(skb_frag_t *frag, int delta)
{
 frag->len -= delta;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_frag_must_loop(struct page *p)
{




 return false;
}
# 462 "../include/linux/skbuff.h"
struct skb_shared_hwtstamps {
 union {
  ktime_t hwtstamp;
  void *netdev_data;
 };
};


enum {

 SKBTX_HW_TSTAMP = 1 << 0,


 SKBTX_SW_TSTAMP = 1 << 1,


 SKBTX_IN_PROGRESS = 1 << 2,


 SKBTX_HW_TSTAMP_USE_CYCLES = 1 << 3,


 SKBTX_WIFI_STATUS = 1 << 4,


 SKBTX_HW_TSTAMP_NETDEV = 1 << 5,


 SKBTX_SCHED_TSTAMP = 1 << 6,
};
# 500 "../include/linux/skbuff.h"
enum {

 SKBFL_ZEROCOPY_ENABLE = ((((1UL))) << (0)),






 SKBFL_SHARED_FRAG = ((((1UL))) << (1)),




 SKBFL_PURE_ZEROCOPY = ((((1UL))) << (2)),

 SKBFL_DONT_ORPHAN = ((((1UL))) << (3)),




 SKBFL_MANAGED_FRAG_REFS = ((((1UL))) << (4)),
};





struct ubuf_info_ops {
 void (*complete)(struct sk_buff *, struct ubuf_info *,
    bool zerocopy_success);

 int (*link_skb)(struct sk_buff *skb, struct ubuf_info *uarg);
};
# 543 "../include/linux/skbuff.h"
struct ubuf_info {
 const struct ubuf_info_ops *ops;
 refcount_t refcnt;
 u8 flags;
};

struct ubuf_info_msgzc {
 struct ubuf_info ubuf;

 union {
  struct {
   unsigned long desc;
   void *ctx;
  };
  struct {
   u32 id;
   u16 len;
   u16 zerocopy:1;
   u32 bytelen;
  };
 };

 struct mmpin {
  struct user_struct *user;
  unsigned int num_pg;
 } mmp;
};





int mm_account_pinned_pages(struct mmpin *mmp, size_t size);
void mm_unaccount_pinned_pages(struct mmpin *mmp);






struct xsk_tx_metadata_compl {
 __u64 *tx_timestamp;
};




struct skb_shared_info {
 __u8 flags;
 __u8 meta_len;
 __u8 nr_frags;
 __u8 tx_flags;
 unsigned short gso_size;

 unsigned short gso_segs;
 struct sk_buff *frag_list;
 union {
  struct skb_shared_hwtstamps hwtstamps;
  struct xsk_tx_metadata_compl xsk_meta;
 };
 unsigned int gso_type;
 u32 tskey;




 atomic_t dataref;
 unsigned int xdp_frags_size;



 void * destructor_arg;


 skb_frag_t frags[17];
};
# 651 "../include/linux/skbuff.h"
enum {
 SKB_FCLONE_UNAVAILABLE,
 SKB_FCLONE_ORIG,
 SKB_FCLONE_CLONE,
};

enum {
 SKB_GSO_TCPV4 = 1 << 0,


 SKB_GSO_DODGY = 1 << 1,


 SKB_GSO_TCP_ECN = 1 << 2,

 SKB_GSO_TCP_FIXEDID = 1 << 3,

 SKB_GSO_TCPV6 = 1 << 4,

 SKB_GSO_FCOE = 1 << 5,

 SKB_GSO_GRE = 1 << 6,

 SKB_GSO_GRE_CSUM = 1 << 7,

 SKB_GSO_IPXIP4 = 1 << 8,

 SKB_GSO_IPXIP6 = 1 << 9,

 SKB_GSO_UDP_TUNNEL = 1 << 10,

 SKB_GSO_UDP_TUNNEL_CSUM = 1 << 11,

 SKB_GSO_PARTIAL = 1 << 12,

 SKB_GSO_TUNNEL_REMCSUM = 1 << 13,

 SKB_GSO_SCTP = 1 << 14,

 SKB_GSO_ESP = 1 << 15,

 SKB_GSO_UDP = 1 << 16,

 SKB_GSO_UDP_L4 = 1 << 17,

 SKB_GSO_FRAGLIST = 1 << 18,
};
# 706 "../include/linux/skbuff.h"
typedef unsigned char *sk_buff_data_t;


enum skb_tstamp_type {
 SKB_CLOCK_REALTIME,
 SKB_CLOCK_MONOTONIC,
 SKB_CLOCK_TAI,
 __SKB_CLOCK_MAX = SKB_CLOCK_TAI,
};
# 864 "../include/linux/skbuff.h"
struct sk_buff {
 union {
  struct {

   struct sk_buff *next;
   struct sk_buff *prev;

   union {
    struct net_device *dev;




    unsigned long dev_scratch;
   };
  };
  struct rb_node rbnode;
  struct list_head list;
  struct llist_node ll_node;
 };

 struct sock *sk;

 union {
  ktime_t tstamp;
  u64 skb_mstamp_ns;
 };






 char cb[48] __attribute__((__aligned__(8)));

 union {
  struct {
   unsigned long _skb_refdst;
   void (*destructor)(struct sk_buff *skb);
  };
  struct list_head tcp_tsorted_anchor;

  unsigned long _sk_redir;

 };




 unsigned int len,
    data_len;
 __u16 mac_len,
    hdr_len;




 __u16 queue_mapping;
# 932 "../include/linux/skbuff.h"
 __u8 __cloned_offset[0];

 __u8 cloned:1,
    nohdr:1,
    fclone:2,
    peeked:1,
    head_frag:1,
    pfmemalloc:1,
    pp_recycle:1;

 __u8 active_extensions;





 union { struct { __u8 __pkt_type_offset[0]; __u8 pkt_type:3; __u8 ignore_df:1; __u8 dst_pending_confirm:1; __u8 ip_summed:2; __u8 ooo_okay:1; __u8 __mono_tc_offset[0]; __u8 tstamp_type:2; __u8 tc_at_ingress:1; __u8 tc_skip_classify:1; __u8 remcsum_offload:1; __u8 csum_complete_sw:1; __u8 csum_level:2; __u8 inner_protocol_type:1; __u8 l4_hash:1; __u8 sw_hash:1; __u8 wifi_acked_valid:1; __u8 wifi_acked:1; __u8 no_fcs:1; __u8 encapsulation:1; __u8 encap_hdr_csum:1; __u8 csum_valid:1; __u8 ndisc_nodetype:2; __u8 redirected:1; __u8 decrypted:1; __u8 slow_gro:1; __u8 csum_not_inet:1; __u16 tc_index; u16 alloc_cpu; union { __wsum csum; struct { __u16 csum_start; __u16 csum_offset; }; }; __u32 priority; int skb_iif; __u32 hash; union { u32 vlan_all; struct { __be16 vlan_proto; __u16 vlan_tci; }; }; union { unsigned int napi_id; unsigned int sender_cpu; }; __u32 secmark; union { __u32 mark; __u32 reserved_tailroom; }; union { __be16 inner_protocol; __u8 inner_ipproto; }; __u16 inner_transport_header; __u16 inner_network_header; __u16 inner_mac_header; __be16 protocol; __u16 transport_header; __u16 network_header; __u16 mac_header; } ; struct { __u8 __pkt_type_offset[0]; __u8 pkt_type:3; __u8 ignore_df:1; __u8 dst_pending_confirm:1; __u8 ip_summed:2; __u8 ooo_okay:1; __u8 __mono_tc_offset[0]; __u8 tstamp_type:2; __u8 tc_at_ingress:1; __u8 tc_skip_classify:1; __u8 remcsum_offload:1; __u8 csum_complete_sw:1; __u8 csum_level:2; __u8 inner_protocol_type:1; __u8 l4_hash:1; __u8 sw_hash:1; __u8 wifi_acked_valid:1; __u8 wifi_acked:1; __u8 no_fcs:1; __u8 encapsulation:1; __u8 encap_hdr_csum:1; __u8 csum_valid:1; __u8 ndisc_nodetype:2; __u8 redirected:1; __u8 decrypted:1; __u8 slow_gro:1; __u8 csum_not_inet:1; __u16 tc_index; u16 alloc_cpu; union { __wsum csum; struct { __u16 csum_start; __u16 csum_offset; }; }; __u32 priority; int skb_iif; __u32 hash; union { u32 vlan_all; struct { __be16 vlan_proto; __u16 vlan_tci; }; }; union { unsigned int napi_id; unsigned int sender_cpu; }; 
__u32 secmark; union { __u32 mark; __u32 reserved_tailroom; }; union { __be16 inner_protocol; __u8 inner_ipproto; }; __u16 inner_transport_header; __u16 inner_network_header; __u16 inner_mac_header; __be16 protocol; __u16 transport_header; __u16 network_header; __u16 mac_header; } headers; } ;
# 1071 "../include/linux/skbuff.h"
 sk_buff_data_t tail;
 sk_buff_data_t end;
 unsigned char *head,
    *data;
 unsigned int truesize;
 refcount_t users;



 struct skb_ext *extensions;

};
# 1118 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_pfmemalloc(const struct sk_buff *skb)
{
 return __builtin_expect(!!(skb->pfmemalloc), 0);
}
# 1136 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dst_entry *skb_dst(const struct sk_buff *skb)
{



 ({ int __ret_warn_on = !!((skb->_skb_refdst & 1UL) && !rcu_read_lock_held() && !rcu_read_lock_bh_held()); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 1143, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });


 return (struct dst_entry *)(skb->_skb_refdst & ~(1UL));
}
# 1155 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_set(struct sk_buff *skb, struct dst_entry *dst)
{
 skb->slow_gro |= !!dst;
 skb->_skb_refdst = (unsigned long)dst;
}
# 1171 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_set_noref(struct sk_buff *skb, struct dst_entry *dst)
{
 ({ int __ret_warn_on = !!(!rcu_read_lock_held() && !rcu_read_lock_bh_held()); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 1173, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 skb->slow_gro |= !!dst;
 skb->_skb_refdst = (unsigned long)dst | 1UL;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_dst_is_noref(const struct sk_buff *skb)
{
 return (skb->_skb_refdst & 1UL) && skb_dst(skb);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_pkt_type_ok(u32 ptype)
{
 return ptype <= 3;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_napi_id(const struct sk_buff *skb)
{

 return skb->napi_id;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_wifi_acked_valid(const struct sk_buff *skb)
{

 return skb->wifi_acked_valid;



}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_unref(struct sk_buff *skb)
{
 if (__builtin_expect(!!(!skb), 0))
  return false;
 if (__builtin_expect(!!(refcount_read(&skb->users) == 1), 1))
  __asm__ __volatile__("": : :"memory");
 else if (__builtin_expect(!!(!refcount_dec_and_test(&skb->users)), 1))
  return false;

 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_data_unref(const struct sk_buff *skb,
      struct skb_shared_info *shinfo)
{
 int bias;

 if (!skb->cloned)
  return true;

 bias = skb->nohdr ? (1 << 16) + 1 : 1;

 if (atomic_read(&shinfo->dataref) == bias)
  __asm__ __volatile__("": : :"memory");
 else if (atomic_sub_return(bias, &shinfo->dataref))
  return false;

 return true;
}

void __attribute__((__noinline__)) sk_skb_reason_drop(struct sock *sk, struct sk_buff *skb,
          enum skb_drop_reason reason);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
{
 sk_skb_reason_drop(((void *)0), skb, reason);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kfree_skb(struct sk_buff *skb)
{
 kfree_skb_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}

void skb_release_head_state(struct sk_buff *skb);
void kfree_skb_list_reason(struct sk_buff *segs,
      enum skb_drop_reason reason);
void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt);
void skb_tx_error(struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kfree_skb_list(struct sk_buff *segs)
{
 kfree_skb_list_reason(segs, SKB_DROP_REASON_NOT_SPECIFIED);
}


void consume_skb(struct sk_buff *skb);







void __consume_stateless_skb(struct sk_buff *skb);
void __kfree_skb(struct sk_buff *skb);

void kfree_skb_partial(struct sk_buff *skb, bool head_stolen);
bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
        bool *fragstolen, int *delta_truesize);

struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags,
       int node);
struct sk_buff *__build_skb(void *data, unsigned int frag_size);
struct sk_buff *build_skb(void *data, unsigned int frag_size);
struct sk_buff *build_skb_around(struct sk_buff *skb,
     void *data, unsigned int frag_size);
void skb_attempt_defer_free(struct sk_buff *skb);

struct sk_buff *napi_build_skb(void *data, unsigned int frag_size);
struct sk_buff *slab_build_skb(void *data);
# 1317 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *alloc_skb(unsigned int size,
     gfp_t priority)
{
 return __alloc_skb(size, priority, 0, (-1));
}

struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
         unsigned long data_len,
         int max_page_order,
         int *errcode,
         gfp_t gfp_mask);
struct sk_buff *alloc_skb_for_msg(struct sk_buff *first);


struct sk_buff_fclones {
 struct sk_buff skb1;

 struct sk_buff skb2;

 refcount_t fclone_ref;
};
# 1348 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_fclone_busy(const struct sock *sk,
       const struct sk_buff *skb)
{
 const struct sk_buff_fclones *fclones;

 fclones = ({ void *__mptr = (void *)(skb); _Static_assert(__builtin_types_compatible_p(typeof(*(skb)), typeof(((struct sk_buff_fclones *)0)->skb1)) || __builtin_types_compatible_p(typeof(*(skb)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sk_buff_fclones *)(__mptr - __builtin_offsetof(struct sk_buff_fclones, skb1))); });

 return skb->fclone == SKB_FCLONE_ORIG &&
        refcount_read(&fclones->fclone_ref) > 1 &&
        ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_275(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(fclones->skb2.sk) == sizeof(char) || sizeof(fclones->skb2.sk) == sizeof(short) || sizeof(fclones->skb2.sk) == sizeof(int) || sizeof(fclones->skb2.sk) == sizeof(long)) || sizeof(fclones->skb2.sk) == sizeof(long long))) __compiletime_assert_275(); } while (0); (*(const volatile typeof( _Generic((fclones->skb2.sk), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (fclones->skb2.sk))) *)&(fclones->skb2.sk)); }) == sk;
}
# 1367 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *alloc_skb_fclone(unsigned int size,
            gfp_t priority)
{
 return __alloc_skb(size, priority, 0x01, (-1));
}

struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);
void skb_headers_offset_update(struct sk_buff *skb, int off);
int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask);
struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t priority);
void skb_copy_header(struct sk_buff *new, const struct sk_buff *old);
struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t priority);
struct sk_buff *__pskb_copy_fclone(struct sk_buff *skb, int headroom,
       gfp_t gfp_mask, bool fclone);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__pskb_copy(struct sk_buff *skb, int headroom,
       gfp_t gfp_mask)
{
 return __pskb_copy_fclone(skb, headroom, gfp_mask, false);
}

int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, gfp_t gfp_mask);
struct sk_buff *skb_realloc_headroom(struct sk_buff *skb,
         unsigned int headroom);
struct sk_buff *skb_expand_head(struct sk_buff *skb, unsigned int headroom);
struct sk_buff *skb_copy_expand(const struct sk_buff *skb, int newheadroom,
    int newtailroom, gfp_t priority);
int __attribute__((__warn_unused_result__)) skb_to_sgvec_nomark(struct sk_buff *skb, struct scatterlist *sg,
         int offset, int len);
int __attribute__((__warn_unused_result__)) skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg,
         int offset, int len);
int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer);
int __skb_pad(struct sk_buff *skb, int pad, bool free_on_error);
# 1411 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_pad(struct sk_buff *skb, int pad)
{
 return __skb_pad(skb, pad, true);
}


int skb_append_pagefrags(struct sk_buff *skb, struct page *page,
    int offset, size_t size, size_t max_frags);

struct skb_seq_state {
 __u32 lower_offset;
 __u32 upper_offset;
 __u32 frag_idx;
 __u32 stepped_offset;
 struct sk_buff *root_skb;
 struct sk_buff *cur_skb;
 __u8 *frag_data;
 __u32 frag_off;
};

void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
     unsigned int to, struct skb_seq_state *st);
unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
     struct skb_seq_state *st);
void skb_abort_seq_read(struct skb_seq_state *st);

unsigned int skb_find_text(struct sk_buff *skb, unsigned int from,
      unsigned int to, struct ts_config *config);
# 1466 "../include/linux/skbuff.h"
enum pkt_hash_types {
 PKT_HASH_TYPE_NONE,
 PKT_HASH_TYPE_L2,
 PKT_HASH_TYPE_L3,
 PKT_HASH_TYPE_L4,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_clear_hash(struct sk_buff *skb)
{
 skb->hash = 0;
 skb->sw_hash = 0;
 skb->l4_hash = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_clear_hash_if_not_l4(struct sk_buff *skb)
{
 if (!skb->l4_hash)
  skb_clear_hash(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__skb_set_hash(struct sk_buff *skb, __u32 hash, bool is_sw, bool is_l4)
{
 skb->l4_hash = is_l4;
 skb->sw_hash = is_sw;
 skb->hash = hash;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
skb_set_hash(struct sk_buff *skb, __u32 hash, enum pkt_hash_types type)
{

 __skb_set_hash(skb, hash, false, type == PKT_HASH_TYPE_L4);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__skb_set_sw_hash(struct sk_buff *skb, __u32 hash, bool is_l4)
{
 __skb_set_hash(skb, hash, true, is_l4);
}

u32 __skb_get_hash_symmetric_net(const struct net *net, const struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __skb_get_hash_symmetric(const struct sk_buff *skb)
{
 return __skb_get_hash_symmetric_net(((void *)0), skb);
}

void __skb_get_hash_net(const struct net *net, struct sk_buff *skb);
u32 skb_get_poff(const struct sk_buff *skb);
u32 __skb_get_poff(const struct sk_buff *skb, const void *data,
     const struct flow_keys_basic *keys, int hlen);
__be32 __skb_flow_get_ports(const struct sk_buff *skb, int thoff, u8 ip_proto,
       const void *data, int hlen_proto);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 skb_flow_get_ports(const struct sk_buff *skb,
     int thoff, u8 ip_proto)
{
 return __skb_flow_get_ports(skb, thoff, ip_proto, ((void *)0), 0);
}

void skb_flow_dissector_init(struct flow_dissector *flow_dissector,
        const struct flow_dissector_key *key,
        unsigned int key_count);

struct bpf_flow_dissector;
u32 bpf_flow_dissect(struct bpf_prog *prog, struct bpf_flow_dissector *ctx,
       __be16 proto, int nhoff, int hlen, unsigned int flags);

bool __skb_flow_dissect(const struct net *net,
   const struct sk_buff *skb,
   struct flow_dissector *flow_dissector,
   void *target_container, const void *data,
   __be16 proto, int nhoff, int hlen, unsigned int flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_flow_dissect(const struct sk_buff *skb,
        struct flow_dissector *flow_dissector,
        void *target_container, unsigned int flags)
{
 return __skb_flow_dissect(((void *)0), skb, flow_dissector,
      target_container, ((void *)0), 0, 0, 0, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_flow_dissect_flow_keys(const struct sk_buff *skb,
           struct flow_keys *flow,
           unsigned int flags)
{
 memset(flow, 0, sizeof(*flow));
 return __skb_flow_dissect(((void *)0), skb, &flow_keys_dissector,
      flow, ((void *)0), 0, 0, 0, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
skb_flow_dissect_flow_keys_basic(const struct net *net,
     const struct sk_buff *skb,
     struct flow_keys_basic *flow,
     const void *data, __be16 proto,
     int nhoff, int hlen, unsigned int flags)
{
 memset(flow, 0, sizeof(*flow));
 return __skb_flow_dissect(net, skb, &flow_keys_basic_dissector, flow,
      data, proto, nhoff, hlen, flags);
}

void skb_flow_dissect_meta(const struct sk_buff *skb,
      struct flow_dissector *flow_dissector,
      void *target_container);





void
skb_flow_dissect_ct(const struct sk_buff *skb,
      struct flow_dissector *flow_dissector,
      void *target_container,
      u16 *ctinfo_map, size_t mapsize,
      bool post_ct, u16 zone);
void
skb_flow_dissect_tunnel_info(const struct sk_buff *skb,
        struct flow_dissector *flow_dissector,
        void *target_container);

void skb_flow_dissect_hash(const struct sk_buff *skb,
      struct flow_dissector *flow_dissector,
      void *target_container);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_get_hash_net(const struct net *net, struct sk_buff *skb)
{
 if (!skb->l4_hash && !skb->sw_hash)
  __skb_get_hash_net(net, skb);

 return skb->hash;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_get_hash(struct sk_buff *skb)
{
 if (!skb->l4_hash && !skb->sw_hash)
  __skb_get_hash_net(((void *)0), skb);

 return skb->hash;
}
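The skb_get_hash() helpers above compute the flow hash lazily: the l4_hash/sw_hash bits act as a "cache valid" flag, and __skb_get_hash_net() runs only when neither is set. A minimal userspace sketch of that lazy-caching pattern (struct and function names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the sk_buff hash fields. */
struct pkt {
	uint32_t hash;
	bool sw_hash;   /* hash was computed in software */
	bool l4_hash;   /* hash covers L4 ports */
};

/* Pretend flow dissection; in the kernel this walks the packet headers. */
static uint32_t compute_flow_hash(const struct pkt *p)
{
	(void)p;
	return 0xdeadbeefu;
}

/* Mirrors the shape of skb_get_hash(): compute at most once, then reuse. */
static uint32_t pkt_get_hash(struct pkt *p)
{
	if (!p->l4_hash && !p->sw_hash) {
		p->hash = compute_flow_hash(p);
		p->sw_hash = true;   /* mark the cache valid */
	}
	return p->hash;
}
```

The second call returns the cached value without recomputing, which is the whole point of the flag pair.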

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_get_hash_flowi6(struct sk_buff *skb, const struct flowi6 *fl6)
{
 if (!skb->l4_hash && !skb->sw_hash) {
  struct flow_keys keys;
  __u32 hash = __get_hash_from_flowi6(fl6, &keys);

  __skb_set_sw_hash(skb, hash, flow_keys_have_l4(&keys));
 }

 return skb->hash;
}

__u32 skb_get_hash_perturb(const struct sk_buff *skb,
      const siphash_key_t *perturb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_get_hash_raw(const struct sk_buff *skb)
{
 return skb->hash;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_hash(struct sk_buff *to, const struct sk_buff *from)
{
 to->hash = from->hash;
 to->sw_hash = from->sw_hash;
 to->l4_hash = from->l4_hash;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_cmp_decrypted(const struct sk_buff *skb1,
        const struct sk_buff *skb2)
{

 return skb2->decrypted - skb1->decrypted;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_decrypted(const struct sk_buff *skb)
{

 return skb->decrypted;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_decrypted(struct sk_buff *to,
          const struct sk_buff *from)
{

 to->decrypted = from->decrypted;

}
# 1679 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
 return skb->end;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_end_offset(const struct sk_buff *skb)
{
 return skb->end - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
{
 skb->end = skb->head + offset;
}


extern const struct ubuf_info_ops msg_zerocopy_ubuf_ops;

struct ubuf_info *msg_zerocopy_realloc(struct sock *sk, size_t size,
           struct ubuf_info *uarg);

void msg_zerocopy_put_abort(struct ubuf_info *uarg, bool have_uref);

int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
       struct sk_buff *skb, struct iov_iter *from,
       size_t length);

int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
    struct iov_iter *from, size_t length);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_zerocopy_iter_dgram(struct sk_buff *skb,
       struct msghdr *msg, int len)
{
 return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len);
}

int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
        struct msghdr *msg, int len,
        struct ubuf_info *uarg);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct skb_shared_hwtstamps *skb_hwtstamps(struct sk_buff *skb)
{
 return &((struct skb_shared_info *)(skb_end_pointer(skb)))->hwtstamps;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ubuf_info *skb_zcopy(struct sk_buff *skb)
{
 bool is_zcopy = skb && ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags & SKBFL_ZEROCOPY_ENABLE;

 return is_zcopy ? ((struct ubuf_info *)(((struct skb_shared_info *)(skb_end_pointer(skb)))->destructor_arg)) : ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_zcopy_pure(const struct sk_buff *skb)
{
 return ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags & SKBFL_PURE_ZEROCOPY;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_zcopy_managed(const struct sk_buff *skb)
{
 return ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags & SKBFL_MANAGED_FRAG_REFS;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_pure_zcopy_same(const struct sk_buff *skb1,
           const struct sk_buff *skb2)
{
 return skb_zcopy_pure(skb1) == skb_zcopy_pure(skb2);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void net_zcopy_get(struct ubuf_info *uarg)
{
 refcount_inc(&uarg->refcnt);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_zcopy_init(struct sk_buff *skb, struct ubuf_info *uarg)
{
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->destructor_arg = uarg;
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags |= uarg->flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_zcopy_set(struct sk_buff *skb, struct ubuf_info *uarg,
     bool *have_ref)
{
 if (skb && uarg && !skb_zcopy(skb)) {
  if (__builtin_expect(!!(have_ref && *have_ref), 0))
   *have_ref = false;
  else
   net_zcopy_get(uarg);
  skb_zcopy_init(skb, uarg);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_zcopy_set_nouarg(struct sk_buff *skb, void *val)
{
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->destructor_arg = (void *)((uintptr_t) val | 0x1UL);
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags |= (SKBFL_ZEROCOPY_ENABLE | SKBFL_SHARED_FRAG);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_zcopy_is_nouarg(struct sk_buff *skb)
{
 return (uintptr_t) ((struct skb_shared_info *)(skb_end_pointer(skb)))->destructor_arg & 0x1UL;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_zcopy_get_nouarg(struct sk_buff *skb)
{
 return (void *)((uintptr_t) ((struct skb_shared_info *)(skb_end_pointer(skb)))->destructor_arg & ~0x1UL);
}
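skb_zcopy_set_nouarg()/skb_zcopy_is_nouarg()/skb_zcopy_get_nouarg() above stash an opaque value in destructor_arg and mark it by setting bit 0, relying on normal pointers being at least 2-byte aligned so that bit is otherwise always clear. The tagging trick in isolation, as a hedged sketch (helper names are made up):

```c
#include <assert.h>
#include <stdint.h>

/* Tag a pointer by setting bit 0; only valid for >= 2-byte-aligned data. */
static void *tag_ptr(void *p)    { return (void *)((uintptr_t)p | 0x1UL); }
static int   ptr_is_tagged(void *p) { return (uintptr_t)p & 0x1UL; }
static void *untag_ptr(void *p)  { return (void *)((uintptr_t)p & ~0x1UL); }
```

This is the same mask arithmetic (`| 0x1UL`, `& 0x1UL`, `& ~0x1UL`) the three helpers above apply to destructor_arg.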

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void net_zcopy_put(struct ubuf_info *uarg)
{
 if (uarg)
  uarg->ops->complete(((void *)0), uarg, true);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void net_zcopy_put_abort(struct ubuf_info *uarg, bool have_uref)
{
 if (uarg) {
  if (uarg->ops == &msg_zerocopy_ubuf_ops)
   msg_zerocopy_put_abort(uarg, have_uref);
  else if (have_uref)
   net_zcopy_put(uarg);
 }
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy_success)
{
 struct ubuf_info *uarg = skb_zcopy(skb);

 if (uarg) {
  if (!skb_zcopy_is_nouarg(skb))
   uarg->ops->complete(skb, uarg, zerocopy_success);

  ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags &= ~((SKBFL_ZEROCOPY_ENABLE | SKBFL_SHARED_FRAG) | SKBFL_PURE_ZEROCOPY | SKBFL_DONT_ORPHAN | SKBFL_MANAGED_FRAG_REFS);
 }
}

void __skb_zcopy_downgrade_managed(struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_zcopy_downgrade_managed(struct sk_buff *skb)
{
 if (__builtin_expect(!!(skb_zcopy_managed(skb)), 0))
  __skb_zcopy_downgrade_managed(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_mark_not_on_list(struct sk_buff *skb)
{
 skb->next = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_poison_list(struct sk_buff *skb)
{

 skb->next = ((void *)(0x800 + 0));

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_list_del_init(struct sk_buff *skb)
{
 __list_del_entry(&skb->list);
 skb_mark_not_on_list(skb);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_queue_empty(const struct sk_buff_head *list)
{
 return list->next == (const struct sk_buff *) list;
}
# 1867 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_queue_empty_lockless(const struct sk_buff_head *list)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_276(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->next) == sizeof(char) || sizeof(list->next) == sizeof(short) || sizeof(list->next) == sizeof(int) || sizeof(list->next) == sizeof(long)) || sizeof(list->next) == sizeof(long long))) __compiletime_assert_276(); } while (0); (*(const volatile typeof( _Generic((list->next), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (list->next))) *)&(list->next)); }) == (const struct sk_buff *) list;
}
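skb_queue_empty_lockless() performs the same comparison as skb_queue_empty(), but reads list->next through READ_ONCE() (the expanded _Generic/volatile machinery above) so a concurrent writer's update is observed as one whole load, never a torn value. A userspace analogue using C11 relaxed atomics (illustrative only, not the kernel macro):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sentinel-headed list whose next pointer may be updated concurrently. */
struct lhead { _Atomic(struct lhead *) next; };

/* One atomic load stands in for READ_ONCE(list->next). */
static int q_empty_lockless(struct lhead *h)
{
	return atomic_load_explicit(&h->next, memory_order_relaxed) == h;
}
```

As in the kernel version, the result is only a snapshot: it can be stale by the time the caller acts on it, which is why it is documented for lockless readers that can tolerate that.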
# 1880 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_queue_is_last(const struct sk_buff_head *list,
         const struct sk_buff *skb)
{
 return skb->next == (const struct sk_buff *) list;
}
# 1893 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_queue_is_first(const struct sk_buff_head *list,
          const struct sk_buff *skb)
{
 return skb->prev == (const struct sk_buff *) list;
}
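skb_queue_empty(), skb_queue_is_last() and skb_queue_is_first() all work because struct sk_buff_head doubles as a sentinel node: an empty queue's next and prev point back at the head itself, and the last/first element points at the head rather than at NULL. The invariant, sketched with simplified non-kernel types:

```c
#include <assert.h>

struct node { struct node *next, *prev; };

static void q_init(struct node *head)
{
	head->next = head->prev = head;   /* empty: sentinel points at itself */
}

static int q_empty(const struct node *head)
{
	return head->next == head;
}

static void q_add_tail(struct node *head, struct node *n)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* The last element's next pointer is the sentinel, mirroring
 * skb_queue_is_last(). */
static int q_is_last(const struct node *head, const struct node *n)
{
	return n->next == head;
}
```

The sentinel removes every NULL check from insert/remove paths, which is why the helpers above compare against the casted head pointer instead of NULL.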
# 1907 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_queue_next(const struct sk_buff_head *list,
          const struct sk_buff *skb)
{



 do { if (__builtin_expect(!!(skb_queue_is_last(list, skb)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 1913, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 return skb->next;
}
# 1925 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_queue_prev(const struct sk_buff_head *list,
          const struct sk_buff *skb)
{



 do { if (__builtin_expect(!!(skb_queue_is_first(list, skb)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 1931, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 return skb->prev;
}
# 1942 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_get(struct sk_buff *skb)
{
 refcount_inc(&skb->users);
 return skb;
}
# 1960 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_cloned(const struct sk_buff *skb)
{
 return skb->cloned &&
        (atomic_read(&((struct skb_shared_info *)(skb_end_pointer(skb)))->dataref) & ((1 << 16) - 1)) != 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_unclone(struct sk_buff *skb, gfp_t pri)
{
 do { if (gfpflags_allow_blocking(pri)) do { do { } while (0); } while (0); } while (0);

 if (skb_cloned(skb))
  return pskb_expand_head(skb, 0, 0, pri);

 return 0;
}







int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
{
 do { if (gfpflags_allow_blocking(pri)) do { do { } while (0); } while (0); } while (0);

 if (skb_cloned(skb))
  return __skb_unclone_keeptruesize(skb, pri);
 return 0;
}
# 1999 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_header_cloned(const struct sk_buff *skb)
{
 int dataref;

 if (!skb->cloned)
  return 0;

 dataref = atomic_read(&((struct skb_shared_info *)(skb_end_pointer(skb)))->dataref);
 dataref = (dataref & ((1 << 16) - 1)) - (dataref >> 16);
 return dataref != 1;
}
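skb_cloned() and skb_header_cloned() share a single atomic counter: the low 16 bits of dataref count all references to the payload, while the high 16 bits count references that only need the header. skb_header_cloned() subtracts the header-only count from the total to decide whether the payload proper is shared. The arithmetic in isolation (plain int standing in for atomic_t):

```c
#include <assert.h>

#define DATAREF_SHIFT 16
#define DATAREF_MASK  ((1 << DATAREF_SHIFT) - 1)

/* Low 16 bits: total payload refs; high 16 bits: header-only refs.
 * Returns nonzero when more than one holder references the payload. */
static int payload_shared(int dataref)
{
	return ((dataref & DATAREF_MASK) - (dataref >> DATAREF_SHIFT)) != 1;
}
```

Packing both counts into one word lets a clone take either kind of reference with a single atomic add, at the cost of the masking seen in the helpers above.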

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_header_unclone(struct sk_buff *skb, gfp_t pri)
{
 do { if (gfpflags_allow_blocking(pri)) do { do { } while (0); } while (0); } while (0);

 if (skb_header_cloned(skb))
  return pskb_expand_head(skb, 0, 0, pri);

 return 0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_header_release(struct sk_buff *skb)
{
 skb->nohdr = 1;
 atomic_set(&((struct skb_shared_info *)(skb_end_pointer(skb)))->dataref, 1 + (1 << 16));
}
# 2041 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_shared(const struct sk_buff *skb)
{
 return refcount_read(&skb->users) != 1;
}
# 2059 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_share_check(struct sk_buff *skb, gfp_t pri)
{
 do { if (gfpflags_allow_blocking(pri)) do { do { } while (0); } while (0); } while (0);
 if (skb_shared(skb)) {
  struct sk_buff *nskb = skb_clone(skb, pri);

  if (__builtin_expect(!!(nskb), 1))
   consume_skb(skb);
  else
   kfree_skb(skb);
  skb = nskb;
 }
 return skb;
}
# 2094 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_unshare(struct sk_buff *skb,
       gfp_t pri)
{
 do { if (gfpflags_allow_blocking(pri)) do { do { } while (0); } while (0); } while (0);
 if (skb_cloned(skb)) {
  struct sk_buff *nskb = skb_copy(skb, pri);


  if (__builtin_expect(!!(nskb), 1))
   consume_skb(skb);
  else
   kfree_skb(skb);
  skb = nskb;
 }
 return skb;
}
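skb_share_check() and skb_unshare() implement a take-ownership idiom: if the buffer is shared (respectively cloned), replace it with a private copy, drop the caller's reference to the original, and hand back the copy; on allocation failure the reference is still dropped and the caller gets NULL. A refcounted sketch of the same idiom (illustrative names, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

struct rbuf { int refs; int data; };

static struct rbuf *rbuf_new(int data)
{
	struct rbuf *b = malloc(sizeof(*b));
	if (b) { b->refs = 1; b->data = data; }
	return b;
}

static void rbuf_put(struct rbuf *b)
{
	if (b && --b->refs == 0)
		free(b);
}

/* Mirrors the shape of skb_share_check(): always consumes the caller's
 * reference; returns a privately owned buffer, or NULL on failure. */
static struct rbuf *rbuf_unshare(struct rbuf *b)
{
	if (b->refs != 1) {
		struct rbuf *nb = rbuf_new(b->data);
		rbuf_put(b);   /* drop our ref whether or not the copy worked */
		return nb;     /* may be NULL */
	}
	return b;
}
```

The "always consume the input" contract is what lets callers of skb_share_check() unconditionally write `skb = skb_share_check(skb, pri);` with no leak on either path.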
# 2124 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_peek(const struct sk_buff_head *list_)
{
 struct sk_buff *skb = list_->next;

 if (skb == (struct sk_buff *)list_)
  skb = ((void *)0);
 return skb;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__skb_peek(const struct sk_buff_head *list_)
{
 return list_->next;
}
# 2153 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_peek_next(struct sk_buff *skb,
  const struct sk_buff_head *list_)
{
 struct sk_buff *next = skb->next;

 if (next == (struct sk_buff *)list_)
  next = ((void *)0);
 return next;
}
# 2176 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_peek_tail(const struct sk_buff_head *list_)
{
 struct sk_buff *skb = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_277(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list_->prev) == sizeof(char) || sizeof(list_->prev) == sizeof(short) || sizeof(list_->prev) == sizeof(int) || sizeof(list_->prev) == sizeof(long)) || sizeof(list_->prev) == sizeof(long long))) __compiletime_assert_277(); } while (0); (*(const volatile typeof( _Generic((list_->prev), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (list_->prev))) *)&(list_->prev)); });

 if (skb == (struct sk_buff *)list_)
  skb = ((void *)0);
 return skb;

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_queue_len(const struct sk_buff_head *list_)
{
 return list_->qlen;
}
# 2204 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 skb_queue_len_lockless(const struct sk_buff_head *list_)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_278(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list_->qlen) == sizeof(char) || sizeof(list_->qlen) == sizeof(short) || sizeof(list_->qlen) == sizeof(int) || sizeof(list_->qlen) == sizeof(long)) || sizeof(list_->qlen) == sizeof(long long))) __compiletime_assert_278(); } while (0); (*(const volatile typeof( _Generic((list_->qlen), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (list_->qlen))) *)&(list_->qlen)); });
}
# 2219 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_head_init(struct sk_buff_head *list)
{
 list->prev = list->next = (struct sk_buff *)list;
 list->qlen = 0;
}
# 2233 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_head_init(struct sk_buff_head *list)
{
 do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&list->lock), "&list->lock", &__key, LD_WAIT_CONFIG); } while (0);
 __skb_queue_head_init(list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_head_init_class(struct sk_buff_head *list,
  struct lock_class_key *class)
{
 skb_queue_head_init(list);
 lockdep_init_map_type(&(&list->lock)->dep_map, "class", class, 0, (&list->lock)->dep_map.wait_type_inner, (&list->lock)->dep_map.wait_type_outer, (&list->lock)->dep_map.lock_type);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_insert(struct sk_buff *newsk,
    struct sk_buff *prev, struct sk_buff *next,
    struct sk_buff_head *list)
{



 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_279(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(newsk->next) == sizeof(char) || sizeof(newsk->next) == sizeof(short) || sizeof(newsk->next) == sizeof(int) || sizeof(newsk->next) == sizeof(long)) || sizeof(newsk->next) == sizeof(long long))) __compiletime_assert_279(); } while (0); do { *(volatile typeof(newsk->next) *)&(newsk->next) = (next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_280(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(newsk->prev) == sizeof(char) || sizeof(newsk->prev) == sizeof(short) || sizeof(newsk->prev) == sizeof(int) || sizeof(newsk->prev) == sizeof(long)) || sizeof(newsk->prev) == sizeof(long long))) __compiletime_assert_280(); } while (0); do { *(volatile typeof(newsk->prev) *)&(newsk->prev) = (prev); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_281(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((struct sk_buff_list *)next)->prev) == sizeof(char) || sizeof(((struct sk_buff_list *)next)->prev) == sizeof(short) || sizeof(((struct sk_buff_list *)next)->prev) == sizeof(int) || sizeof(((struct sk_buff_list *)next)->prev) == sizeof(long)) || sizeof(((struct sk_buff_list *)next)->prev) == sizeof(long long))) __compiletime_assert_281(); } while (0); do { *(volatile typeof(((struct sk_buff_list *)next)->prev) *)&(((struct sk_buff_list *)next)->prev) = (newsk); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_282(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((struct sk_buff_list *)prev)->next) == sizeof(char) || sizeof(((struct sk_buff_list *)prev)->next) == sizeof(short) || sizeof(((struct sk_buff_list *)prev)->next) == sizeof(int) || sizeof(((struct sk_buff_list *)prev)->next) == sizeof(long)) || sizeof(((struct sk_buff_list *)prev)->next) == sizeof(long long))) __compiletime_assert_282(); } while (0); do { *(volatile typeof(((struct sk_buff_list *)prev)->next) *)&(((struct sk_buff_list *)prev)->next) = (newsk); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_283(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->qlen) == sizeof(char) || sizeof(list->qlen) == sizeof(short) || sizeof(list->qlen) == sizeof(int) || sizeof(list->qlen) == sizeof(long)) || sizeof(list->qlen) == sizeof(long long))) __compiletime_assert_283(); } while (0); do { *(volatile typeof(list->qlen) *)&(list->qlen) = (list->qlen + 1); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_splice(const struct sk_buff_head *list,
          struct sk_buff *prev,
          struct sk_buff *next)
{
 struct sk_buff *first = list->next;
 struct sk_buff *last = list->prev;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_284(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(first->prev) == sizeof(char) || sizeof(first->prev) == sizeof(short) || sizeof(first->prev) == sizeof(int) || sizeof(first->prev) == sizeof(long)) || sizeof(first->prev) == sizeof(long long))) __compiletime_assert_284(); } while (0); do { *(volatile typeof(first->prev) *)&(first->prev) = (prev); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_285(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(prev->next) == sizeof(char) || sizeof(prev->next) == sizeof(short) || sizeof(prev->next) == sizeof(int) || sizeof(prev->next) == sizeof(long)) || sizeof(prev->next) == sizeof(long long))) __compiletime_assert_285(); } while (0); do { *(volatile typeof(prev->next) *)&(prev->next) = (first); } while (0); } while (0);

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_286(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(last->next) == sizeof(char) || sizeof(last->next) == sizeof(short) || sizeof(last->next) == sizeof(int) || sizeof(last->next) == sizeof(long)) || sizeof(last->next) == sizeof(long long))) __compiletime_assert_286(); } while (0); do { *(volatile typeof(last->next) *)&(last->next) = (next); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_287(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->prev) == sizeof(char) || sizeof(next->prev) == sizeof(short) || sizeof(next->prev) == sizeof(int) || sizeof(next->prev) == sizeof(long)) || sizeof(next->prev) == sizeof(long long))) __compiletime_assert_287(); } while (0); do { *(volatile typeof(next->prev) *)&(next->prev) = (last); } while (0); } while (0);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_splice(const struct sk_buff_head *list,
        struct sk_buff_head *head)
{
 if (!skb_queue_empty(list)) {
  __skb_queue_splice(list, (struct sk_buff *) head, head->next);
  head->qlen += list->qlen;
 }
}
# 2301 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_splice_init(struct sk_buff_head *list,
      struct sk_buff_head *head)
{
 if (!skb_queue_empty(list)) {
  __skb_queue_splice(list, (struct sk_buff *) head, head->next);
  head->qlen += list->qlen;
  __skb_queue_head_init(list);
 }
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_splice_tail(const struct sk_buff_head *list,
      struct sk_buff_head *head)
{
 if (!skb_queue_empty(list)) {
  __skb_queue_splice(list, head->prev, (struct sk_buff *) head);
  head->qlen += list->qlen;
 }
}
# 2333 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_splice_tail_init(struct sk_buff_head *list,
           struct sk_buff_head *head)
{
 if (!skb_queue_empty(list)) {
  __skb_queue_splice(list, head->prev, (struct sk_buff *) head);
  head->qlen += list->qlen;
  __skb_queue_head_init(list);
 }
}
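The four splice helpers above move an entire donor queue into another in O(1) by rewiring four pointers (first->prev, prev->next, last->next, next->prev), then adding the donor's qlen; the *_init variants additionally reset the donor to empty. A sketch of skb_queue_splice_init()'s prepend-and-reset behavior, using the same sentinel-list shape with simplified non-kernel types:

```c
#include <assert.h>

struct node { struct node *next, *prev; };
struct list { struct node head; int qlen; };

static void l_init(struct list *l)
{
	l->head.next = l->head.prev = &l->head;
	l->qlen = 0;
}

static void l_add_tail(struct list *l, struct node *n)
{
	n->prev = l->head.prev;
	n->next = &l->head;
	l->head.prev->next = n;
	l->head.prev = n;
	l->qlen++;
}

/* Mirrors skb_queue_splice_init(): prepend all of src onto dst, then
 * reinitialize src so the donor is empty again. */
static void l_splice_init(struct list *src, struct list *dst)
{
	if (src->qlen) {
		struct node *first = src->head.next;
		struct node *last = src->head.prev;

		first->prev = &dst->head;
		last->next = dst->head.next;
		dst->head.next->prev = last;
		dst->head.next = first;
		dst->qlen += src->qlen;
		l_init(src);
	}
}
```

The cost is independent of how many buffers move, which is why splice is the tool for handing a batch of skbs between queues under a single lock.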
# 2354 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_after(struct sk_buff_head *list,
         struct sk_buff *prev,
         struct sk_buff *newsk)
{
 __skb_insert(newsk, prev, ((struct sk_buff_list *)prev)->next, list);
}

void skb_append(struct sk_buff *old, struct sk_buff *newsk,
  struct sk_buff_head *list);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_before(struct sk_buff_head *list,
          struct sk_buff *next,
          struct sk_buff *newsk)
{
 __skb_insert(newsk, ((struct sk_buff_list *)next)->prev, next, list);
}
# 2381 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_head(struct sk_buff_head *list,
        struct sk_buff *newsk)
{
 __skb_queue_after(list, (struct sk_buff *)list, newsk);
}
void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk);
# 2398 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_tail(struct sk_buff_head *list,
       struct sk_buff *newsk)
{
 __skb_queue_before(list, (struct sk_buff *)list, newsk);
}
void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk);





void skb_unlink(struct sk_buff *skb, struct sk_buff_head *list);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
{
 struct sk_buff *next, *prev;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_288(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(list->qlen) == sizeof(char) || sizeof(list->qlen) == sizeof(short) || sizeof(list->qlen) == sizeof(int) || sizeof(list->qlen) == sizeof(long)) || sizeof(list->qlen) == sizeof(long long))) __compiletime_assert_288(); } while (0); do { *(volatile typeof(list->qlen) *)&(list->qlen) = (list->qlen - 1); } while (0); } while (0);
 next = skb->next;
 prev = skb->prev;
 skb->next = skb->prev = ((void *)0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_289(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(next->prev) == sizeof(char) || sizeof(next->prev) == sizeof(short) || sizeof(next->prev) == sizeof(int) || sizeof(next->prev) == sizeof(long)) || sizeof(next->prev) == sizeof(long long))) __compiletime_assert_289(); } while (0); do { *(volatile typeof(next->prev) *)&(next->prev) = (prev); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_290(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(prev->next) == sizeof(char) || sizeof(prev->next) == sizeof(short) || sizeof(prev->next) == sizeof(int) || sizeof(prev->next) == sizeof(long)) || sizeof(prev->next) == sizeof(long long))) __compiletime_assert_290(); } while (0); do { *(volatile typeof(prev->next) *)&(prev->next) = (next); } while (0); } while (0);
}
# 2430 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
{
 struct sk_buff *skb = skb_peek(list);
 if (skb)
  __skb_unlink(skb, list);
 return skb;
}
struct sk_buff *skb_dequeue(struct sk_buff_head *list);
# 2447 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__skb_dequeue_tail(struct sk_buff_head *list)
{
 struct sk_buff *skb = skb_peek_tail(list);
 if (skb)
  __skb_unlink(skb, list);
 return skb;
}
struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_nonlinear(const struct sk_buff *skb)
{
 return skb->data_len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_headlen(const struct sk_buff *skb)
{
 return skb->len - skb->data_len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __skb_pagelen(const struct sk_buff *skb)
{
 unsigned int i, len = 0;

 for (i = ((struct skb_shared_info *)(skb_end_pointer(skb)))->nr_frags - 1; (int)i >= 0; i--)
  len += skb_frag_size(&((struct skb_shared_info *)(skb_end_pointer(skb)))->frags[i]);
 return len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_pagelen(const struct sk_buff *skb)
{
 return skb_headlen(skb) + __skb_pagelen(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_fill_netmem_desc(skb_frag_t *frag,
          netmem_ref netmem, int off,
          int size)
{
 frag->netmem = netmem;
 frag->offset = off;
 skb_frag_size_set(frag, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_fill_page_desc(skb_frag_t *frag,
        struct page *page,
        int off, int size)
{
 skb_frag_fill_netmem_desc(frag, page_to_netmem(page), off, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_fill_netmem_desc_noacc(struct skb_shared_info *shinfo,
      int i, netmem_ref netmem,
      int off, int size)
{
 skb_frag_t *frag = &shinfo->frags[i];

 skb_frag_fill_netmem_desc(frag, netmem, off, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
           int i, struct page *page,
           int off, int size)
{
 __skb_fill_netmem_desc_noacc(shinfo, i, page_to_netmem(page), off,
         size);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_len_add(struct sk_buff *skb, int delta)
{
 skb->len += delta;
 skb->data_len += delta;
 skb->truesize += delta;
}
# 2539 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_fill_netmem_desc(struct sk_buff *skb, int i,
       netmem_ref netmem, int off, int size)
{
 struct page *page = netmem_to_page(netmem);

 __skb_fill_netmem_desc_noacc(((struct skb_shared_info *)(skb_end_pointer(skb))), i, netmem, off, size);





 page = ((typeof(page))_compound_head(page));
 if (page_is_pfmemalloc(page))
  skb->pfmemalloc = true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_fill_page_desc(struct sk_buff *skb, int i,
     struct page *page, int off, int size)
{
 __skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_fill_netmem_desc(struct sk_buff *skb, int i,
     netmem_ref netmem, int off, int size)
{
 __skb_fill_netmem_desc(skb, i, netmem, off, size);
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->nr_frags = i + 1;
}
# 2582 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_fill_page_desc(struct sk_buff *skb, int i,
          struct page *page, int off, int size)
{
 skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
}
# 2599 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_fill_page_desc_noacc(struct sk_buff *skb, int i,
         struct page *page, int off,
         int size)
{
 struct skb_shared_info *shinfo = ((struct skb_shared_info *)(skb_end_pointer(skb)));

 __skb_fill_page_desc_noacc(shinfo, i, page, off, size);
 shinfo->nr_frags = i + 1;
}

void skb_add_rx_frag_netmem(struct sk_buff *skb, int i, netmem_ref netmem,
       int off, int size, unsigned int truesize);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_add_rx_frag(struct sk_buff *skb, int i,
       struct page *page, int off, int size,
       unsigned int truesize)
{
 skb_add_rx_frag_netmem(skb, i, page_to_netmem(page), off, size,
          truesize);
}

void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
     unsigned int truesize);
# 2643 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_tail_pointer(const struct sk_buff *skb)
{
 return skb->tail;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_tail_pointer(struct sk_buff *skb)
{
 skb->tail = skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_tail_pointer(struct sk_buff *skb, const int offset)
{
 skb->tail = skb->data + offset;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_assert_len(struct sk_buff *skb)
{

 if (({ bool __ret_do_once = !!(!skb->len); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 2663, 9, "%s\n", __func__); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  ({ bool __ret_do_once = !!(true); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) skb_dump("\001" "3", skb, false); __builtin_expect(!!(__ret_do_once), 0); });

}




void *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len);
void *skb_put(struct sk_buff *skb, unsigned int len);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *__skb_put(struct sk_buff *skb, unsigned int len)
{
 void *tmp = skb_tail_pointer(skb);
 do { if (__builtin_expect(!!(skb_is_nonlinear(skb)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 2676, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 skb->tail += len;
 skb->len += len;
 return tmp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *__skb_put_zero(struct sk_buff *skb, unsigned int len)
{
 void *tmp = __skb_put(skb, len);

 memset(tmp, 0, len);
 return tmp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *__skb_put_data(struct sk_buff *skb, const void *data,
       unsigned int len)
{
 void *tmp = __skb_put(skb, len);

 memcpy(tmp, data, len);
 return tmp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_put_u8(struct sk_buff *skb, u8 val)
{
 *(u8 *)__skb_put(skb, 1) = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_put_zero(struct sk_buff *skb, unsigned int len)
{
 void *tmp = skb_put(skb, len);

 memset(tmp, 0, len);

 return tmp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_put_data(struct sk_buff *skb, const void *data,
     unsigned int len)
{
 void *tmp = skb_put(skb, len);

 memcpy(tmp, data, len);

 return tmp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_put_u8(struct sk_buff *skb, u8 val)
{
 *(u8 *)skb_put(skb, 1) = val;
}

void *skb_push(struct sk_buff *skb, unsigned int len);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *__skb_push(struct sk_buff *skb, unsigned int len)
{
 (void)({ bool __ret_do_once = !!(len > ((int)(~0U >> 1))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 2731, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 skb->data -= len;
 skb->len += len;
 return skb->data;
}

void *skb_pull(struct sk_buff *skb, unsigned int len);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *__skb_pull(struct sk_buff *skb, unsigned int len)
{
 (void)({ bool __ret_do_once = !!(len > ((int)(~0U >> 1))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 2741, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 skb->len -= len;
 if (__builtin_expect(!!(skb->len < skb->data_len), 0)) {

  skb->len += len;
  ({ do {} while (0); _printk("\001" "3" "__skb_pull(len=%u)\n", len); });
  skb_dump("\001" "3", skb, false);

  do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 2750, __func__); }); do { } while (0); panic("BUG!"); } while (0);
 }
 return skb->data += len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_pull_inline(struct sk_buff *skb, unsigned int len)
{
 return __builtin_expect(!!(len > skb->len), 0) ? ((void *)0) : __skb_pull(skb, len);
}

void *skb_pull_data(struct sk_buff *skb, size_t len);

void *__pskb_pull_tail(struct sk_buff *skb, int delta);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum skb_drop_reason
pskb_may_pull_reason(struct sk_buff *skb, unsigned int len)
{
 (void)({ bool __ret_do_once = !!(len > ((int)(~0U >> 1))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 2767, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 if (__builtin_expect(!!(len <= skb_headlen(skb)), 1))
  return SKB_NOT_DROPPED_YET;

 if (__builtin_expect(!!(len > skb->len), 0))
  return SKB_DROP_REASON_PKT_TOO_SMALL;

 if (__builtin_expect(!!(!__pskb_pull_tail(skb, len - skb_headlen(skb))), 0))
  return SKB_DROP_REASON_NOMEM;

 return SKB_NOT_DROPPED_YET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
 return pskb_may_pull_reason(skb, len) == SKB_NOT_DROPPED_YET;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *pskb_pull(struct sk_buff *skb, unsigned int len)
{
 if (!pskb_may_pull(skb, len))
  return ((void *)0);

 skb->len -= len;
 return skb->data += len;
}

void skb_condense(struct sk_buff *skb);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_headroom(const struct sk_buff *skb)
{
 return skb->data - skb->head;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_tailroom(const struct sk_buff *skb)
{
 return skb_is_nonlinear(skb) ? 0 : skb->end - skb->tail;
}
# 2826 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_availroom(const struct sk_buff *skb)
{
 if (skb_is_nonlinear(skb))
  return 0;

 return skb->end - skb->tail - skb->reserved_tailroom;
}
# 2842 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reserve(struct sk_buff *skb, int len)
{
 skb->data += len;
 skb->tail += len;
}
# 2860 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_tailroom_reserve(struct sk_buff *skb, unsigned int mtu,
     unsigned int needed_tailroom)
{
 do { if (__builtin_expect(!!(skb_is_nonlinear(skb)), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 2863, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 if (mtu < skb_tailroom(skb) - needed_tailroom)

  skb->reserved_tailroom = skb_tailroom(skb) - mtu;
 else

  skb->reserved_tailroom = needed_tailroom;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_inner_protocol(struct sk_buff *skb,
       __be16 protocol)
{
 skb->inner_protocol = protocol;
 skb->inner_protocol_type = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_inner_ipproto(struct sk_buff *skb,
      __u8 ipproto)
{
 skb->inner_ipproto = ipproto;
 skb->inner_protocol_type = 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_inner_headers(struct sk_buff *skb)
{
 skb->inner_mac_header = skb->mac_header;
 skb->inner_network_header = skb->network_header;
 skb->inner_transport_header = skb->transport_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_mac_len(struct sk_buff *skb)
{
 skb->mac_len = skb->network_header - skb->mac_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_inner_transport_header(const struct sk_buff
       *skb)
{
 return skb->head + skb->inner_transport_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_inner_transport_offset(const struct sk_buff *skb)
{
 return skb_inner_transport_header(skb) - skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_inner_transport_header(struct sk_buff *skb)
{
 skb->inner_transport_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_inner_transport_header(struct sk_buff *skb,
         const int offset)
{
 skb_reset_inner_transport_header(skb);
 skb->inner_transport_header += offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_inner_network_header(const struct sk_buff *skb)
{
 return skb->head + skb->inner_network_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_inner_network_header(struct sk_buff *skb)
{
 skb->inner_network_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_inner_network_header(struct sk_buff *skb,
      const int offset)
{
 skb_reset_inner_network_header(skb);
 skb->inner_network_header += offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_inner_network_header_was_set(const struct sk_buff *skb)
{
 return skb->inner_network_header > 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_inner_mac_header(const struct sk_buff *skb)
{
 return skb->head + skb->inner_mac_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_inner_mac_header(struct sk_buff *skb)
{
 skb->inner_mac_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_inner_mac_header(struct sk_buff *skb,
         const int offset)
{
 skb_reset_inner_mac_header(skb);
 skb->inner_mac_header += offset;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_transport_header_was_set(const struct sk_buff *skb)
{
 return skb->transport_header != (typeof(skb->transport_header))~0U;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_transport_header(const struct sk_buff *skb)
{
 (void)({ bool __ret_do_once = !!(!skb_transport_header_was_set(skb)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 2969, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return skb->head + skb->transport_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_transport_header(struct sk_buff *skb)
{
 skb->transport_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_transport_header(struct sk_buff *skb,
         const int offset)
{
 skb_reset_transport_header(skb);
 skb->transport_header += offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_network_header(const struct sk_buff *skb)
{
 return skb->head + skb->network_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_network_header(struct sk_buff *skb)
{
 skb->network_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_network_header(struct sk_buff *skb, const int offset)
{
 skb_reset_network_header(skb);
 skb->network_header += offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_mac_header_was_set(const struct sk_buff *skb)
{
 return skb->mac_header != (typeof(skb->mac_header))~0U;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_mac_header(const struct sk_buff *skb)
{
 (void)({ bool __ret_do_once = !!(!skb_mac_header_was_set(skb)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3008, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return skb->head + skb->mac_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_mac_offset(const struct sk_buff *skb)
{
 return skb_mac_header(skb) - skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 skb_mac_header_len(const struct sk_buff *skb)
{
 (void)({ bool __ret_do_once = !!(!skb_mac_header_was_set(skb)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3019, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return skb->network_header - skb->mac_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_unset_mac_header(struct sk_buff *skb)
{
 skb->mac_header = (typeof(skb->mac_header))~0U;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_mac_header(struct sk_buff *skb)
{
 skb->mac_header = skb->data - skb->head;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_mac_header(struct sk_buff *skb, const int offset)
{
 skb_reset_mac_header(skb);
 skb->mac_header += offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_pop_mac_header(struct sk_buff *skb)
{
 skb->mac_header = skb->network_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_probe_transport_header(struct sk_buff *skb)
{
 struct flow_keys_basic keys;

 if (skb_transport_header_was_set(skb))
  return;

 if (skb_flow_dissect_flow_keys_basic(((void *)0), skb, &keys,
          ((void *)0), 0, 0, 0, 0))
  skb_set_transport_header(skb, keys.control.thoff);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_mac_header_rebuild(struct sk_buff *skb)
{
 if (skb_mac_header_was_set(skb)) {
  const unsigned char *old_mac = skb_mac_header(skb);

  skb_set_mac_header(skb, -skb->mac_len);
  memmove(skb_mac_header(skb), old_mac, skb->mac_len);
 }
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_mac_header_rebuild_full(struct sk_buff *skb, u32 full_mac_len)
{
 if (skb_mac_header_was_set(skb)) {
  const unsigned char *old_mac = skb_mac_header(skb);

  skb_set_mac_header(skb, -full_mac_len);
  memmove(skb_mac_header(skb), old_mac, full_mac_len);
  __skb_push(skb, full_mac_len - skb->mac_len);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_checksum_start_offset(const struct sk_buff *skb)
{
 return skb->csum_start - skb_headroom(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char *skb_checksum_start(const struct sk_buff *skb)
{
 return skb->head + skb->csum_start;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_transport_offset(const struct sk_buff *skb)
{
 return skb_transport_header(skb) - skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 skb_network_header_len(const struct sk_buff *skb)
{
 (void)({ bool __ret_do_once = !!(!skb_transport_header_was_set(skb)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3098, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return skb->transport_header - skb->network_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 skb_inner_network_header_len(const struct sk_buff *skb)
{
 return skb->inner_transport_header - skb->inner_network_header;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_network_offset(const struct sk_buff *skb)
{
 return skb_network_header(skb) - skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_inner_network_offset(const struct sk_buff *skb)
{
 return skb_inner_network_header(skb) - skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pskb_network_may_pull(struct sk_buff *skb, unsigned int len)
{
 return pskb_may_pull(skb, skb_network_offset(skb) + len);
}
# 3170 "../include/linux/skbuff.h"
int ___pskb_trim(struct sk_buff *skb, unsigned int len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_set_length(struct sk_buff *skb, unsigned int len)
{
 if (({ int __ret_warn_on = !!(skb_is_nonlinear(skb)); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3174, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }))
  return;
 skb->len = len;
 skb_set_tail_pointer(skb, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_trim(struct sk_buff *skb, unsigned int len)
{
 __skb_set_length(skb, len);
}

void skb_trim(struct sk_buff *skb, unsigned int len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __pskb_trim(struct sk_buff *skb, unsigned int len)
{
 if (skb->data_len)
  return ___pskb_trim(skb, len);
 __skb_trim(skb, len);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pskb_trim(struct sk_buff *skb, unsigned int len)
{
 return (len < skb->len) ? __pskb_trim(skb, len) : 0;
}
# 3209 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void pskb_trim_unique(struct sk_buff *skb, unsigned int len)
{
 int err = pskb_trim(skb, len);
 do { if (__builtin_expect(!!(err), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 3212, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __skb_grow(struct sk_buff *skb, unsigned int len)
{
 unsigned int diff = len - skb->len;

 if (skb_tailroom(skb) < diff) {
  int ret = pskb_expand_head(skb, 0, diff - skb_tailroom(skb),
        ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
  if (ret)
   return ret;
 }
 __skb_set_length(skb, len);
 return 0;
}
# 3237 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_orphan(struct sk_buff *skb)
{
 if (skb->destructor) {
  skb->destructor(skb);
  skb->destructor = ((void *)0);
  skb->sk = ((void *)0);
 } else {
  do { if (__builtin_expect(!!(skb->sk), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/skbuff.h", 3244, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 }
}
# 3257 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
{
 if (__builtin_expect(!!(!skb_zcopy(skb)), 1))
  return 0;
 if (((struct skb_shared_info *)(skb_end_pointer(skb)))->flags & SKBFL_DONT_ORPHAN)
  return 0;
 return skb_copy_ubufs(skb, gfp_mask);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_orphan_frags_rx(struct sk_buff *skb, gfp_t gfp_mask)
{
 if (__builtin_expect(!!(!skb_zcopy(skb)), 1))
  return 0;
 return skb_copy_ubufs(skb, gfp_mask);
}
# 3283 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_purge_reason(struct sk_buff_head *list,
         enum skb_drop_reason reason)
{
 struct sk_buff *skb;

 while ((skb = __skb_dequeue(list)) != ((void *)0))
  kfree_skb_reason(skb, reason);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_queue_purge(struct sk_buff_head *list)
{
 __skb_queue_purge_reason(list, SKB_DROP_REASON_QUEUE_PURGE);
}

void skb_queue_purge_reason(struct sk_buff_head *list,
       enum skb_drop_reason reason);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_queue_purge(struct sk_buff_head *list)
{
 skb_queue_purge_reason(list, SKB_DROP_REASON_QUEUE_PURGE);
}

unsigned int skb_rbtree_purge(struct rb_root *root);
void skb_errqueue_purge(struct sk_buff_head *list);

void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
# 3317 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *netdev_alloc_frag(unsigned int fragsz)
{
 return __netdev_alloc_frag_align(fragsz, ~0u);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *netdev_alloc_frag_align(unsigned int fragsz,
         unsigned int align)
{
 ({ bool __ret_do_once = !!(!is_power_of_2(align)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3325, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return __netdev_alloc_frag_align(fragsz, -align);
}

struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
       gfp_t gfp_mask);
# 3345 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *netdev_alloc_skb(struct net_device *dev,
            unsigned int length)
{
 return __netdev_alloc_skb(dev, length, ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__dev_alloc_skb(unsigned int length,
           gfp_t gfp_mask)
{
 return __netdev_alloc_skb(((void *)0), length, gfp_mask);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *dev_alloc_skb(unsigned int length)
{
 return netdev_alloc_skb(((void *)0), length);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *__netdev_alloc_skb_ip_align(struct net_device *dev,
  unsigned int length, gfp_t gfp)
{
 struct sk_buff *skb = __netdev_alloc_skb(dev, length + 2, gfp);

 if (2 && skb)
  skb_reserve(skb, 2);
 return skb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
  unsigned int length)
{
 return __netdev_alloc_skb_ip_align(dev, length, ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_free_frag(void *addr)
{
 page_frag_free(addr);
}

void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *napi_alloc_frag(unsigned int fragsz)
{
 return __napi_alloc_frag_align(fragsz, ~0u);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *napi_alloc_frag_align(unsigned int fragsz,
       unsigned int align)
{
 ({ bool __ret_do_once = !!(!is_power_of_2(align)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/skbuff.h", 3396, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return __napi_alloc_frag_align(fragsz, -align);
}

struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int length);
void napi_consume_skb(struct sk_buff *skb, int budget);

void napi_skb_free_stolen_head(struct sk_buff *skb);
void __napi_kfree_skb(struct sk_buff *skb, enum skb_drop_reason reason);
# 3415 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *__dev_alloc_pages_noprof(gfp_t gfp_mask,
          unsigned int order)
{
# 3426 "../include/linux/skbuff.h"
 gfp_mask |= (( gfp_t)((((1UL))) << (___GFP_COMP_BIT))) | (( gfp_t)((((1UL))) << (___GFP_MEMALLOC_BIT)));

 return alloc_pages_node_noprof((-1), gfp_mask, order);
}
# 3446 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *__dev_alloc_page_noprof(gfp_t gfp_mask)
{
 return __dev_alloc_pages_noprof(gfp_mask, 0);
}
# 3468 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_page_is_reusable(const struct page *page)
{
 return __builtin_expect(!!(page_to_nid(page) == numa_mem_id() && !page_is_pfmemalloc(page)), 1);

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_propagate_pfmemalloc(const struct page *page,
         struct sk_buff *skb)
{
 if (page_is_pfmemalloc(page))
  skb->pfmemalloc = true;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_frag_off(const skb_frag_t *frag)
{
 return frag->offset;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_off_add(skb_frag_t *frag, int delta)
{
 frag->offset += delta;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_off_set(skb_frag_t *frag, unsigned int offset)
{
 frag->offset = offset;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_off_copy(skb_frag_t *fragto,
         const skb_frag_t *fragfrom)
{
 fragto->offset = fragfrom->offset;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *skb_frag_page(const skb_frag_t *frag)
{
 return netmem_to_page(frag->netmem);
}

int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
      unsigned int headroom);
int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
    struct bpf_prog *prog);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_frag_address(const skb_frag_t *frag)
{
 return lowmem_page_address(skb_frag_page(frag)) + skb_frag_off(frag);
}
# 3560 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_frag_address_safe(const skb_frag_t *frag)
{
 void *ptr = lowmem_page_address(skb_frag_page(frag));
 if (__builtin_expect(!!(!ptr), 0))
  return ((void *)0);

 return ptr + skb_frag_off(frag);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_page_copy(skb_frag_t *fragto,
          const skb_frag_t *fragfrom)
{
 fragto->netmem = fragfrom->netmem;
}

bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio);
# 3593 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dma_addr_t skb_frag_dma_map(struct device *dev,
       const skb_frag_t *frag,
       size_t offset, size_t size,
       enum dma_data_direction dir)
{
 return dma_map_page_attrs(dev, skb_frag_page(frag), skb_frag_off(frag) + offset, size, dir, 0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *pskb_copy(struct sk_buff *skb,
     gfp_t gfp_mask)
{
 return __pskb_copy(skb, skb_headroom(skb), gfp_mask);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *pskb_copy_for_clone(struct sk_buff *skb,
        gfp_t gfp_mask)
{
 return __pskb_copy_fclone(skb, skb_headroom(skb), gfp_mask, true);
}
# 3624 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_clone_writable(const struct sk_buff *skb, unsigned int len)
{
 return !skb_header_cloned(skb) &&
        skb_headroom(skb) + len <= skb->hdr_len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_try_make_writable(struct sk_buff *skb,
     unsigned int write_len)
{
 return skb_cloned(skb) && !skb_clone_writable(skb, write_len) &&
        pskb_expand_head(skb, 0, 0, ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __skb_cow(struct sk_buff *skb, unsigned int headroom,
       int cloned)
{
 int delta = 0;

 if (headroom > skb_headroom(skb))
  delta = headroom - skb_headroom(skb);

 if (delta || cloned)
  return pskb_expand_head(skb, ((((delta)) + ((__typeof__((delta)))((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((32) - ((1 << (5)))) * 0l)) : (int *)8))), ((32) > ((1 << (5))) ? (32) : ((1 << (5)))), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(32))(-1)) < ( typeof(32))1)) * 0l)) : (int *)8))), (((typeof(32))(-1)) < ( typeof(32))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1)) * 0l)) : (int *)8))), (((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((32) + 0))(-1)) < ( typeof((32) + 0))1)) * 0l)) : (int *)8))), (((typeof((32) + 0))(-1)) < ( typeof((32) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(((1 << (5))) + 0))(-1)) < ( typeof(((1 << (5))) + 0))1)) * 0l)) : (int *)8))), (((typeof(((1 << (5))) + 0))(-1)) < ( typeof(((1 << (5))) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(32) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(32))(-1)) < ( typeof(32))1)) * 0l)) : (int *)8))), (((typeof(32))(-1)) < ( typeof(32))1), 0), 32, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((1 << (5))) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1)) * 0l)) : (int *)8))), (((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1), 0), (1 << (5)), -1) >= 0)), "max" "(" "32" ", " "(1 << (5))" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_291 = (32); __auto_type __UNIQUE_ID_y_292 = ((1 << (5))); ((__UNIQUE_ID_x_291) > (__UNIQUE_ID_y_292) ? 
(__UNIQUE_ID_x_291) : (__UNIQUE_ID_y_292)); }); })))) - 1)) & ~((__typeof__((delta)))((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((32) - ((1 << (5)))) * 0l)) : (int *)8))), ((32) > ((1 << (5))) ? (32) : ((1 << (5)))), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(32))(-1)) < ( typeof(32))1)) * 0l)) : (int *)8))), (((typeof(32))(-1)) < ( typeof(32))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1)) * 0l)) : (int *)8))), (((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((32) + 0))(-1)) < ( typeof((32) + 0))1)) * 0l)) : (int *)8))), (((typeof((32) + 0))(-1)) < ( typeof((32) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(((1 << (5))) + 0))(-1)) < ( typeof(((1 << (5))) + 0))1)) * 0l)) : (int *)8))), (((typeof(((1 << (5))) + 0))(-1)) < ( typeof(((1 << (5))) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(32) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(32))(-1)) < ( typeof(32))1)) * 0l)) : (int *)8))), (((typeof(32))(-1)) < ( typeof(32))1), 0), 32, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((1 << (5))) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1)) * 0l)) : (int *)8))), (((typeof((1 << (5))))(-1)) < ( typeof((1 << (5))))1), 0), (1 << (5)), -1) >= 0)), "max" "(" "32" ", " "(1 << (5))" ") signedness error, fix types or consider u" "max" "() before " "max" "_t()"); ({ __auto_type __UNIQUE_ID_x_291 = (32); __auto_type __UNIQUE_ID_y_292 = ((1 << (5))); ((__UNIQUE_ID_x_291) > (__UNIQUE_ID_y_292) ? (__UNIQUE_ID_x_291) : (__UNIQUE_ID_y_292)); }); })))) - 1)), 0,
     ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
 return 0;
}
# 3663 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_cow(struct sk_buff *skb, unsigned int headroom)
{
 return __skb_cow(skb, headroom, skb_cloned(skb));
}
# 3678 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_cow_head(struct sk_buff *skb, unsigned int headroom)
{
 return __skb_cow(skb, headroom, skb_header_cloned(skb));
}
# 3693 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_padto(struct sk_buff *skb, unsigned int len)
{
 unsigned int size = skb->len;
 if (__builtin_expect(!!(size >= len), 1))
  return 0;
 return skb_pad(skb, len - size);
}
# 3712 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) __skb_put_padto(struct sk_buff *skb,
            unsigned int len,
            bool free_on_error)
{
 unsigned int size = skb->len;

 if (__builtin_expect(!!(size < len), 0)) {
  len -= size;
  if (__skb_pad(skb, len, free_on_error))
   return -12;
  __skb_put(skb, len);
 }
 return 0;
}
# 3737 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__)) skb_put_padto(struct sk_buff *skb, unsigned int len)
{
 return __skb_put_padto(skb, len, true);
}

bool csum_and_copy_from_iter_full(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i)
 __attribute__((__warn_unused_result__));

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_add_data(struct sk_buff *skb,
          struct iov_iter *from, int copy)
{
 const int off = skb->len;

 if (skb->ip_summed == 0) {
  __wsum csum = 0;
  if (csum_and_copy_from_iter_full(skb_put(skb, copy), copy,
              &csum, from)) {
   skb->csum = csum_block_add(skb->csum, csum, off);
   return 0;
  }
 } else if (copy_from_iter_full(skb_put(skb, copy), copy, from))
  return 0;

 __skb_trim(skb, off);
 return -14;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_can_coalesce(struct sk_buff *skb, int i,
        const struct page *page, int off)
{
 if (skb_zcopy(skb))
  return false;
 if (i) {
  const skb_frag_t *frag = &((struct skb_shared_info *)(skb_end_pointer(skb)))->frags[i - 1];

  return page == skb_frag_page(frag) &&
         off == skb_frag_off(frag) + skb_frag_size(frag);
 }
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __skb_linearize(struct sk_buff *skb)
{
 return __pskb_pull_tail(skb, skb->data_len) ? 0 : -12;
}
# 3790 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_linearize(struct sk_buff *skb)
{
 return skb_is_nonlinear(skb) ? __skb_linearize(skb) : 0;
}
# 3802 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_has_shared_frag(const struct sk_buff *skb)
{
 return skb_is_nonlinear(skb) &&
        ((struct skb_shared_info *)(skb_end_pointer(skb)))->flags & SKBFL_SHARED_FRAG;
}
# 3815 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_linearize_cow(struct sk_buff *skb)
{
 return skb_is_nonlinear(skb) || skb_cloned(skb) ?
        __skb_linearize(skb) : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
       unsigned int off)
{
 if (skb->ip_summed == 2)
  skb->csum = csum_block_sub(skb->csum,
        csum_partial(start, len, 0), off);
 else if (skb->ip_summed == 3 &&
   skb_checksum_start_offset(skb) < 0)
  skb->ip_summed = 0;
}
# 3843 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_postpull_rcsum(struct sk_buff *skb,
          const void *start, unsigned int len)
{
 if (skb->ip_summed == 2)
  skb->csum = wsum_negate(csum_partial(start, len,
           wsum_negate(skb->csum)));
 else if (skb->ip_summed == 3 &&
   skb_checksum_start_offset(skb) < 0)
  skb->ip_summed = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
__skb_postpush_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
       unsigned int off)
{
 if (skb->ip_summed == 2)
  skb->csum = csum_block_add(skb->csum,
        csum_partial(start, len, 0), off);
}
# 3872 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_postpush_rcsum(struct sk_buff *skb,
          const void *start, unsigned int len)
{
 __skb_postpush_rcsum(skb, start, len, 0);
}

void *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
# 3891 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_push_rcsum(struct sk_buff *skb, unsigned int len)
{
 skb_push(skb, len);
 skb_postpush_rcsum(skb, skb->data, len);
 return skb->data;
}

int pskb_trim_rcsum_slow(struct sk_buff *skb, unsigned int len);
# 3909 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
{
 if (__builtin_expect(!!(len >= skb->len), 1))
  return 0;
 return pskb_trim_rcsum_slow(skb, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __skb_trim_rcsum(struct sk_buff *skb, unsigned int len)
{
 if (skb->ip_summed == 2)
  skb->ip_summed = 0;
 __skb_trim(skb, len);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __skb_grow_rcsum(struct sk_buff *skb, unsigned int len)
{
 if (skb->ip_summed == 2)
  skb->ip_summed = 0;
 return __skb_grow(skb, len);
}
# 3983 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_has_frag_list(const struct sk_buff *skb)
{
 return ((struct skb_shared_info *)(skb_end_pointer(skb)))->frag_list != ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_frag_list_init(struct sk_buff *skb)
{
 ((struct skb_shared_info *)(skb_end_pointer(skb)))->frag_list = ((void *)0);
}





int __skb_wait_for_more_packets(struct sock *sk, struct sk_buff_head *queue,
    int *err, long *timeo_p,
    const struct sk_buff *skb);
struct sk_buff *__skb_try_recv_from_queue(struct sock *sk,
       struct sk_buff_head *queue,
       unsigned int flags,
       int *off, int *err,
       struct sk_buff **last);
struct sk_buff *__skb_try_recv_datagram(struct sock *sk,
     struct sk_buff_head *queue,
     unsigned int flags, int *off, int *err,
     struct sk_buff **last);
struct sk_buff *__skb_recv_datagram(struct sock *sk,
        struct sk_buff_head *sk_queue,
        unsigned int flags, int *off, int *err);
struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned int flags, int *err);
__poll_t datagram_poll(struct file *file, struct socket *sock,
      struct poll_table_struct *wait);
int skb_copy_datagram_iter(const struct sk_buff *from, int offset,
      struct iov_iter *to, int size);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_copy_datagram_msg(const struct sk_buff *from, int offset,
     struct msghdr *msg, int size)
{
 return skb_copy_datagram_iter(from, offset, &msg->msg_iter, size);
}
int skb_copy_and_csum_datagram_msg(struct sk_buff *skb, int hlen,
       struct msghdr *msg);
int skb_copy_and_hash_datagram_iter(const struct sk_buff *skb, int offset,
      struct iov_iter *to, int len,
      struct ahash_request *hash);
int skb_copy_datagram_from_iter(struct sk_buff *skb, int offset,
     struct iov_iter *from, int len);
int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *frm);
void skb_free_datagram(struct sock *sk, struct sk_buff *skb);
int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags);
int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len);
int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len);
__wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, u8 *to,
         int len);
int skb_splice_bits(struct sk_buff *skb, struct sock *sk, unsigned int offset,
      struct pipe_inode_info *pipe, unsigned int len,
      unsigned int flags);
int skb_send_sock_locked(struct sock *sk, struct sk_buff *skb, int offset,
    int len);
int skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset, int len);
void skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to);
unsigned int skb_zerocopy_headlen(const struct sk_buff *from);
int skb_zerocopy(struct sk_buff *to, struct sk_buff *from,
   int len, int hlen);
void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len);
int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen);
void skb_scrub_packet(struct sk_buff *skb, bool xnet);
struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features);
struct sk_buff *skb_segment_list(struct sk_buff *skb, netdev_features_t features,
     unsigned int offset);
struct sk_buff *skb_vlan_untag(struct sk_buff *skb);
int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len);
int skb_ensure_writable_head_tail(struct sk_buff *skb, struct net_device *dev);
int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci);
int skb_vlan_pop(struct sk_buff *skb);
int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci);
int skb_eth_pop(struct sk_buff *skb);
int skb_eth_push(struct sk_buff *skb, const unsigned char *dst,
   const unsigned char *src);
int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
    int mac_len, bool ethernet);
int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len,
   bool ethernet);
int skb_mpls_update_lse(struct sk_buff *skb, __be32 mpls_lse);
int skb_mpls_dec_ttl(struct sk_buff *skb);
struct sk_buff *pskb_extract(struct sk_buff *skb, int off, int to_copy,
        gfp_t gfp);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int memcpy_from_msg(void *data, struct msghdr *msg, int len)
{
 return copy_from_iter_full(data, len, &msg->msg_iter) ? 0 : -14;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int memcpy_to_msg(struct msghdr *msg, void *data, int len)
{
 return copy_to_iter(data, len, &msg->msg_iter) == len ? 0 : -14;
}

struct skb_checksum_ops {
 __wsum (*update)(const void *mem, int len, __wsum wsum);
 __wsum (*combine)(__wsum csum, __wsum csum2, int offset, int len);
};

extern const struct skb_checksum_ops *crc32c_csum_stub ;

__wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
        __wsum csum, const struct skb_checksum_ops *ops);
__wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
      __wsum csum);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * __attribute__((__warn_unused_result__))
__skb_header_pointer(const struct sk_buff *skb, int offset, int len,
       const void *data, int hlen, void *buffer)
{
 if (__builtin_expect(!!(hlen - offset >= len), 1))
  return (void *)data + offset;

 if (!skb || __builtin_expect(!!(skb_copy_bits(skb, offset, buffer, len) < 0), 0))
  return ((void *)0);

 return buffer;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * __attribute__((__warn_unused_result__))
skb_header_pointer(const struct sk_buff *skb, int offset, int len, void *buffer)
{
 return __skb_header_pointer(skb, offset, len, skb->data,
        skb_headlen(skb), buffer);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void * __attribute__((__warn_unused_result__))
skb_pointer_if_linear(const struct sk_buff *skb, int offset, int len)
{
 if (__builtin_expect(!!(skb_headlen(skb) - offset >= len), 1))
  return skb->data + offset;
 return ((void *)0);
}
# 4130 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_needs_linearize(struct sk_buff *skb,
           netdev_features_t features)
{
 return skb_is_nonlinear(skb) &&
        ((skb_has_frag_list(skb) && !(features & ((netdev_features_t)1 << (NETIF_F_FRAGLIST_BIT)))) ||
  (((struct skb_shared_info *)(skb_end_pointer(skb)))->nr_frags && !(features & ((netdev_features_t)1 << (NETIF_F_SG_BIT)))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_from_linear_data(const struct sk_buff *skb,
          void *to,
          const unsigned int len)
{
 memcpy(to, skb->data, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_from_linear_data_offset(const struct sk_buff *skb,
          const int offset, void *to,
          const unsigned int len)
{
 memcpy(to, skb->data + offset, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_to_linear_data(struct sk_buff *skb,
        const void *from,
        const unsigned int len)
{
 memcpy(skb->data, from, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_to_linear_data_offset(struct sk_buff *skb,
        const int offset,
        const void *from,
        const unsigned int len)
{
 memcpy(skb->data + offset, from, len);
}

void skb_init(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t skb_get_ktime(const struct sk_buff *skb)
{
 return skb->tstamp;
}
# 4183 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_get_timestamp(const struct sk_buff *skb,
         struct __kernel_old_timeval *stamp)
{
 *stamp = ns_to_kernel_old_timeval(skb->tstamp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_get_new_timestamp(const struct sk_buff *skb,
      struct __kernel_sock_timeval *stamp)
{
 struct timespec64 ts = ns_to_timespec64((skb->tstamp));

 stamp->tv_sec = ts.tv_sec;
 stamp->tv_usec = ts.tv_nsec / 1000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_get_timestampns(const struct sk_buff *skb,
           struct __kernel_old_timespec *stamp)
{
 struct timespec64 ts = ns_to_timespec64((skb->tstamp));

 stamp->tv_sec = ts.tv_sec;
 stamp->tv_nsec = ts.tv_nsec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_get_new_timestampns(const struct sk_buff *skb,
        struct __kernel_timespec *stamp)
{
 struct timespec64 ts = ns_to_timespec64((skb->tstamp));

 stamp->tv_sec = ts.tv_sec;
 stamp->tv_nsec = ts.tv_nsec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __net_timestamp(struct sk_buff *skb)
{
 skb->tstamp = ktime_get_real();
 skb->tstamp_type = SKB_CLOCK_REALTIME;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t net_timedelta(ktime_t t)
{
 return ((ktime_get_real()) - (t));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_delivery_time(struct sk_buff *skb, ktime_t kt,
      u8 tstamp_type)
{
 skb->tstamp = kt;

 if (kt)
  skb->tstamp_type = tstamp_type;
 else
  skb->tstamp_type = SKB_CLOCK_REALTIME;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_delivery_type_by_clockid(struct sk_buff *skb,
          ktime_t kt, clockid_t clockid)
{
 u8 tstamp_type = SKB_CLOCK_REALTIME;

 switch (clockid) {
 case CLOCK_REALTIME:
  break;
 case CLOCK_MONOTONIC:
  tstamp_type = SKB_CLOCK_MONOTONIC;
  break;
 case CLOCK_TAI:
  tstamp_type = SKB_CLOCK_TAI;
  break;
 default:
  WARN_ON_ONCE(1);
  kt = 0;
 }

 skb_set_delivery_time(skb, kt, tstamp_type);
}

extern struct static_key_false netstamp_needed_key;




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_clear_delivery_time(struct sk_buff *skb)
{
 if (skb->tstamp_type) {
  skb->tstamp_type = SKB_CLOCK_REALTIME;
  if (static_branch_unlikely(&netstamp_needed_key))
   skb->tstamp = ktime_get_real();
  else
   skb->tstamp = 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_clear_tstamp(struct sk_buff *skb)
{
 if (skb->tstamp_type)
  return;

 skb->tstamp = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t skb_tstamp(const struct sk_buff *skb)
{
 if (skb->tstamp_type)
  return 0;

 return skb->tstamp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t skb_tstamp_cond(const struct sk_buff *skb, bool cond)
{
 if (skb->tstamp_type != SKB_CLOCK_MONOTONIC && skb->tstamp)
  return skb->tstamp;

 if (static_branch_unlikely(&netstamp_needed_key) || cond)
  return ktime_get_real();

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 skb_metadata_len(const struct sk_buff *skb)
{
 return skb_shinfo(skb)->meta_len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_metadata_end(const struct sk_buff *skb)
{
 return skb_mac_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __skb_metadata_differs(const struct sk_buff *skb_a,
       const struct sk_buff *skb_b,
       u8 meta_len)
{
 const void *a = skb_metadata_end(skb_a);
 const void *b = skb_metadata_end(skb_b);
 u64 diffs = 0;

 if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
     BITS_PER_LONG != 64)
  goto slow;


 switch (meta_len) {


 case 32: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 24: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 16: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 8: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  break;
 case 28: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 20: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 12: diffs |= (*(u64 *)(a -= sizeof(u64))) ^ (*(u64 *)(b -= sizeof(u64)));
  __attribute__((__fallthrough__));
 case 4: diffs |= (*(u32 *)(a -= sizeof(u32))) ^ (*(u32 *)(b -= sizeof(u32)));
  break;
 default:
slow:
  return memcmp(a - meta_len, b - meta_len, meta_len);
 }
 return diffs;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_metadata_differs(const struct sk_buff *skb_a,
     const struct sk_buff *skb_b)
{
 u8 len_a = skb_metadata_len(skb_a);
 u8 len_b = skb_metadata_len(skb_b);

 if (!(len_a | len_b))
  return false;

 return len_a != len_b ?
        true : __skb_metadata_differs(skb_a, skb_b, len_a);
}
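The unrolled switch in __skb_metadata_differs() above is a word-at-a-time comparison: it XORs corresponding 64-bit words walking backwards from the end of the two metadata areas and ORs the differences into one accumulator, so a single test at the end decides equality. A minimal userspace sketch of the same idea (hypothetical names; memcpy loads stand in for the kernel's direct dereferences, and memcmp plays the role of the "slow" fallback):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Compare the last `len` bytes before end_a/end_b, accumulating XOR
 * differences word by word as __skb_metadata_differs() does.
 * A nonzero accumulator means the buffers differ. */
static bool words_differ(const void *end_a, const void *end_b, uint8_t len)
{
	const unsigned char *a = end_a, *b = end_b;
	uint64_t diffs = 0;

	if (len % sizeof(uint64_t))	/* odd sizes: plain memcmp fallback */
		return memcmp(a - len, b - len, len) != 0;

	while (len) {
		uint64_t wa, wb;

		a -= sizeof(uint64_t);
		b -= sizeof(uint64_t);
		memcpy(&wa, a, sizeof(wa));	/* avoid unaligned dereference */
		memcpy(&wb, b, sizeof(wb));
		diffs |= wa ^ wb;
		len -= sizeof(uint64_t);
	}
	return diffs != 0;
}
```

The kernel version unrolls the loop into the switch so that each fixed metadata length compiles to straight-line loads with no loop overhead.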

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_metadata_set(struct sk_buff *skb, u8 meta_len)
{
 skb_shinfo(skb)->meta_len = meta_len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_metadata_clear(struct sk_buff *skb)
{
 skb_metadata_set(skb, 0);
}

struct sk_buff *skb_clone_sk(struct sk_buff *skb);
# 4384 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_clone_tx_timestamp(struct sk_buff *skb)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_defer_rx_timestamp(struct sk_buff *skb)
{
 return false;
}
# 4407 "../include/linux/skbuff.h"
void skb_complete_tx_timestamp(struct sk_buff *skb,
          struct skb_shared_hwtstamps *hwtstamps);

void __skb_tstamp_tx(struct sk_buff *orig_skb, const struct sk_buff *ack_skb,
       struct skb_shared_hwtstamps *hwtstamps,
       struct sock *sk, int tstype);
# 4425 "../include/linux/skbuff.h"
void skb_tstamp_tx(struct sk_buff *orig_skb,
     struct skb_shared_hwtstamps *hwtstamps);
# 4440 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_tx_timestamp(struct sk_buff *skb)
{
 skb_clone_tx_timestamp(skb);
 if (skb_shinfo(skb)->tx_flags & SKBTX_SW_TSTAMP)
  skb_tstamp_tx(skb, NULL);
}
# 4454 "../include/linux/skbuff.h"
void skb_complete_wifi_ack(struct sk_buff *skb, bool acked);

__sum16 __skb_checksum_complete_head(struct sk_buff *skb, int len);
__sum16 __skb_checksum_complete(struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_csum_unnecessary(const struct sk_buff *skb)
{
 return ((skb->ip_summed == CHECKSUM_UNNECESSARY) ||
  skb->csum_valid ||
  (skb->ip_summed == CHECKSUM_PARTIAL &&
   skb_checksum_start_offset(skb) >= 0));
}
# 4483 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __sum16 skb_checksum_complete(struct sk_buff *skb)
{
 return skb_csum_unnecessary(skb) ?
        0 : __skb_checksum_complete(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_decr_checksum_unnecessary(struct sk_buff *skb)
{
 if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
  if (skb->csum_level == 0)
   skb->ip_summed = CHECKSUM_NONE;
  else
   skb->csum_level--;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_incr_checksum_unnecessary(struct sk_buff *skb)
{
 if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
  if (skb->csum_level < SKB_MAX_CSUM_LEVEL)
   skb->csum_level++;
 } else if (skb->ip_summed == CHECKSUM_NONE) {
  skb->ip_summed = CHECKSUM_UNNECESSARY;
  skb->csum_level = 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_reset_checksum_unnecessary(struct sk_buff *skb)
{
 if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
  skb->ip_summed = CHECKSUM_NONE;
  skb->csum_level = 0;
 }
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __skb_checksum_validate_needed(struct sk_buff *skb,
        bool zero_okay,
        __sum16 check)
{
 if (skb_csum_unnecessary(skb) || (zero_okay && !check)) {
  skb->csum_valid = 1;
  __skb_decr_checksum_unnecessary(skb);
  return false;
 }

 return true;
}
# 4547 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_checksum_complete_unset(struct sk_buff *skb)
{
 if (skb->ip_summed == CHECKSUM_COMPLETE)
  skb->ip_summed = CHECKSUM_NONE;
}
# 4562 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __sum16 __skb_checksum_validate_complete(struct sk_buff *skb,
             bool complete,
             __wsum psum)
{
 if (skb->ip_summed == CHECKSUM_COMPLETE) {
  if (!csum_fold(csum_add(psum, skb->csum))) {
   skb->csum_valid = 1;
   return 0;
  }
 }

 skb->csum = psum;

 if (complete || skb->len <= CHECKSUM_BREAK) {
  __sum16 csum;

  csum = __skb_checksum_complete(skb);
  skb->csum_valid = !csum;
  return csum;
 }

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __wsum null_compute_pseudo(struct sk_buff *skb, int proto)
{
 return 0;
}
# 4628 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __skb_checksum_convert_check(struct sk_buff *skb)
{
 return (skb->ip_summed == CHECKSUM_NONE && skb->csum_valid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_checksum_convert(struct sk_buff *skb, __wsum pseudo)
{
 skb->csum = ~pseudo;
 skb->ip_summed = CHECKSUM_COMPLETE;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_remcsum_adjust_partial(struct sk_buff *skb, void *ptr,
           u16 start, u16 offset)
{
 skb->ip_summed = CHECKSUM_PARTIAL;
 skb->csum_start = ((unsigned char *)ptr + start) - skb->head;
 skb->csum_offset = offset - start;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_remcsum_process(struct sk_buff *skb, void *ptr,
           int start, int offset, bool nopartial)
{
 __wsum delta;

 if (!nopartial) {
  skb_remcsum_adjust_partial(skb, ptr, start, offset);
  return;
 }

 if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE)) {
  __skb_checksum_complete(skb);
  skb_postpull_rcsum(skb, skb->data, ptr - (void *)skb->data);
 }

 delta = remcsum_adjust(ptr, skb->csum, start, offset);


 skb->csum = csum_add(skb->csum, delta);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nf_conntrack *skb_nfct(const struct sk_buff *skb)
{



 return NULL;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long skb_get_nfct(const struct sk_buff *skb)
{



 return 0UL;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_nfct(struct sk_buff *skb, unsigned long nfct)
{




}


enum skb_ext_id {




 SKB_EXT_SEC_PATH,


 TC_SKB_EXT,





 SKB_EXT_MCTP,

 SKB_EXT_NUM,
};
# 4735 "../include/linux/skbuff.h"
struct skb_ext {
 refcount_t refcnt;
 u8 offset[SKB_EXT_NUM];
 u8 chunks;
 char data[] __attribute__((__aligned__(8)));
};

struct skb_ext *__skb_ext_alloc(gfp_t flags);
void *__skb_ext_set(struct sk_buff *skb, enum skb_ext_id id,
      struct skb_ext *ext);
void *skb_ext_add(struct sk_buff *skb, enum skb_ext_id id);
void __skb_ext_del(struct sk_buff *skb, enum skb_ext_id id);
void __skb_ext_put(struct skb_ext *ext);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_ext_put(struct sk_buff *skb)
{
 if (skb->active_extensions)
  __skb_ext_put(skb->extensions);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_ext_copy(struct sk_buff *dst,
      const struct sk_buff *src)
{
 dst->active_extensions = src->active_extensions;

 if (src->active_extensions) {
  struct skb_ext *ext = src->extensions;

  refcount_inc(&ext->refcnt);
  dst->extensions = ext;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_ext_copy(struct sk_buff *dst, const struct sk_buff *src)
{
 skb_ext_put(dst);
 __skb_ext_copy(dst, src);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __skb_ext_exist(const struct skb_ext *ext, enum skb_ext_id i)
{
 return !!ext->offset[i];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_ext_exist(const struct sk_buff *skb, enum skb_ext_id id)
{
 return skb->active_extensions & (1 << id);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_ext_del(struct sk_buff *skb, enum skb_ext_id id)
{
 if (skb_ext_exist(skb, id))
  __skb_ext_del(skb, id);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *skb_ext_find(const struct sk_buff *skb, enum skb_ext_id id)
{
 if (skb_ext_exist(skb, id)) {
  struct skb_ext *ext = skb->extensions;

  return (void *)ext + (ext->offset[id] << 3);
 }

 return NULL;
}
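skb_ext_find() above shows the skb extension layout: every active extension lives in one struct skb_ext allocation, and offset[id] records where each one starts, measured in 8-byte chunks from the start of the struct (hence the "<< 3"), with 0 meaning "not present". A standalone sketch of that lookup scheme, with hypothetical names and a fixed-size data area:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum ext_id { EXT_A, EXT_B, EXT_NUM };

/* One allocation holds every extension; offset[] records where each
 * one starts, in 8-byte chunks from the start of the struct. */
struct ext_blob {
	uint8_t offset[EXT_NUM];	/* 0 means "extension absent" */
	uint8_t chunks;			/* total size, in 8-byte chunks */
	char data[64] __attribute__((aligned(8)));
};

static void *ext_find(struct ext_blob *ext, enum ext_id id)
{
	if (!ext->offset[id])
		return NULL;
	return (char *)ext + ((size_t)ext->offset[id] << 3);
}
```

Storing chunk counts instead of byte offsets keeps the per-id bookkeeping in a single u8 while still addressing a few hundred bytes of extension space.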

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_ext_reset(struct sk_buff *skb)
{
 if (unlikely(skb->active_extensions)) {
  __skb_ext_put(skb->extensions);
  skb->active_extensions = 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_has_extensions(struct sk_buff *skb)
{
 return unlikely(skb->active_extensions);
}
# 4822 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nf_reset_ct(struct sk_buff *skb)
{




}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nf_reset_trace(struct sk_buff *skb)
{



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipvs_reset(struct sk_buff *skb)
{



}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __nf_copy(struct sk_buff *dst, const struct sk_buff *src,
        bool copy)
{
# 4856 "../include/linux/skbuff.h"
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nf_copy(struct sk_buff *dst, const struct sk_buff *src)
{



 dst->slow_gro = src->slow_gro;
 __nf_copy(dst, src, true);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_secmark(struct sk_buff *to, const struct sk_buff *from)
{
 to->secmark = from->secmark;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_init_secmark(struct sk_buff *skb)
{
 skb->secmark = 0;
}
# 4885 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int secpath_exists(const struct sk_buff *skb)
{

 return skb_ext_exist(skb, SKB_EXT_SEC_PATH);



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_irq_freeable(const struct sk_buff *skb)
{
 return !skb->destructor &&
  !secpath_exists(skb) &&
  !skb_nfct(skb) &&
  !skb->_skb_refdst &&
  !skb_has_frag_list(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_queue_mapping(struct sk_buff *skb, u16 queue_mapping)
{
 skb->queue_mapping = queue_mapping;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 skb_get_queue_mapping(const struct sk_buff *skb)
{
 return skb->queue_mapping;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_copy_queue_mapping(struct sk_buff *to, const struct sk_buff *from)
{
 to->queue_mapping = from->queue_mapping;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_record_rx_queue(struct sk_buff *skb, u16 rx_queue)
{
 skb->queue_mapping = rx_queue + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 skb_get_rx_queue(const struct sk_buff *skb)
{
 return skb->queue_mapping - 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_rx_queue_recorded(const struct sk_buff *skb)
{
 return skb->queue_mapping != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_dst_pending_confirm(struct sk_buff *skb, u32 val)
{
 skb->dst_pending_confirm = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_get_dst_pending_confirm(const struct sk_buff *skb)
{
 return skb->dst_pending_confirm != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sec_path *skb_sec_path(const struct sk_buff *skb)
{

 return skb_ext_find(skb, SKB_EXT_SEC_PATH);



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_gso(const struct sk_buff *skb)
{
 return skb_shinfo(skb)->gso_size;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_gso_v6(const struct sk_buff *skb)
{
 return skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_gso_sctp(const struct sk_buff *skb)
{
 return skb_shinfo(skb)->gso_type & SKB_GSO_SCTP;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_gso_tcp(const struct sk_buff *skb)
{
 return skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_gso_reset(struct sk_buff *skb)
{
 skb_shinfo(skb)->gso_size = 0;
 skb_shinfo(skb)->gso_segs = 0;
 skb_shinfo(skb)->gso_type = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_increase_gso_size(struct skb_shared_info *shinfo,
      u16 increment)
{
 if (WARN_ON_ONCE(shinfo->gso_size == GSO_BY_FRAGS))
  return;
 shinfo->gso_size += increment;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_decrease_gso_size(struct skb_shared_info *shinfo,
      u16 decrement)
{
 if (WARN_ON_ONCE(shinfo->gso_size == GSO_BY_FRAGS))
  return;
 shinfo->gso_size -= decrement;
}

void __skb_warn_lro_forwarding(const struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_warn_if_lro(const struct sk_buff *skb)
{


 const struct skb_shared_info *shinfo = skb_shinfo(skb);

 if (skb_is_nonlinear(skb) && shinfo->gso_size != 0 &&
     unlikely(shinfo->gso_type == 0)) {
  __skb_warn_lro_forwarding(skb);
  return true;
 }
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_forward_csum(struct sk_buff *skb)
{

 if (skb->ip_summed == CHECKSUM_COMPLETE)
  skb->ip_summed = CHECKSUM_NONE;
}
# 5029 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_checksum_none_assert(const struct sk_buff *skb)
{
 DEBUG_NET_WARN_ON_ONCE(skb->ip_summed != CHECKSUM_NONE);
}

bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off);

int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
struct sk_buff *skb_checksum_trimmed(struct sk_buff *skb,
         unsigned int transport_len,
         __sum16(*skb_chkf)(struct sk_buff *skb));
# 5050 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_head_is_locked(const struct sk_buff *skb)
{
 return !skb->head_frag || skb_cloned(skb);
}
# 5064 "../include/linux/skbuff.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __wsum lco_csum(struct sk_buff *skb)
{
 unsigned char *csum_start = skb_checksum_start(skb);
 unsigned char *l4_hdr = skb_transport_header(skb);
 __wsum partial;


 partial = ~csum_unfold(*( __sum16 *)(csum_start +
          skb->csum_offset));




 return csum_partial(l4_hdr, csum_start - l4_hdr, partial);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_is_redirected(const struct sk_buff *skb)
{
 return skb->redirected;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
{
 skb->redirected = 1;





}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_redirect(struct sk_buff *skb)
{
 skb->redirected = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_redirected_noclear(struct sk_buff *skb,
           bool from_ingress)
{
 skb->redirected = 1;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_csum_is_sctp(struct sk_buff *skb)
{

 return skb->csum_not_inet;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_reset_csum_not_inet(struct sk_buff *skb)
{
 skb->ip_summed = CHECKSUM_NONE;

 skb->csum_not_inet = 0;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_kcov_handle(struct sk_buff *skb,
           const u64 kcov_handle)
{



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 skb_get_kcov_handle(struct sk_buff *skb)
{



 return 0;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_mark_for_recycle(struct sk_buff *skb)
{

 skb->pp_recycle = 1;

}

ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
        ssize_t maxsize, gfp_t gfp);
# 20 "../include/linux/if_ether.h" 2


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ethhdr *eth_hdr(const struct sk_buff *skb)
{
 return (struct ethhdr *)skb_mac_header(skb);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ethhdr *skb_eth_hdr(const struct sk_buff *skb)
{
 return (struct ethhdr *)skb->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ethhdr *inner_eth_hdr(const struct sk_buff *skb)
{
 return (struct ethhdr *)skb_inner_mac_header(skb);
}

int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr);

extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len);
# 19 "../include/linux/ethtool.h" 2
# 1 "../include/linux/netlink.h" 1








# 1 "../include/net/scm.h" 1





# 1 "../include/linux/net.h" 1
# 25 "../include/linux/net.h"
# 1 "../include/linux/sockptr.h" 1
# 14 "../include/linux/sockptr.h"
typedef struct {
 union {
  void *kernel;
  void *user;
 };
 bool is_kernel : 1;
} sockptr_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sockptr_is_kernel(sockptr_t sockptr)
{
 return sockptr.is_kernel;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) sockptr_t KERNEL_SOCKPTR(void *p)
{
 return (sockptr_t) { .kernel = p, .is_kernel = true };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) sockptr_t USER_SOCKPTR(void *p)
{
 return (sockptr_t) { .user = p };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sockptr_is_null(sockptr_t sockptr)
{
 if (sockptr_is_kernel(sockptr))
  return !sockptr.kernel;
 return !sockptr.user;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_from_sockptr_offset(void *dst, sockptr_t src,
  size_t offset, size_t size)
{
 if (!sockptr_is_kernel(src))
  return copy_from_user(dst, src.user + offset, size);
 memcpy(dst, src.kernel + offset, size);
 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_from_sockptr(void *dst, sockptr_t src, size_t size)
{
 return copy_from_sockptr_offset(dst, src, 0, size);
}
# 75 "../include/linux/sockptr.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_safe_from_sockptr(void *dst, size_t ksize,
      sockptr_t optval, unsigned int optlen)
{
 if (optlen < ksize)
  return -EINVAL;
 return copy_from_sockptr(dst, optval, ksize);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_struct_from_sockptr(void *dst, size_t ksize,
  sockptr_t src, size_t usize)
{
 size_t size = min(ksize, usize);
 size_t rest = max(ksize, usize) - size;

 if (!sockptr_is_kernel(src))
  return copy_struct_from_user(dst, ksize, src.user, size);

 if (usize < ksize) {
  memset(dst + size, 0, rest);
 } else if (usize > ksize) {
  char *p = src.kernel;

  while (rest--) {
   if (*p++)
     return -E2BIG;
  }
 }
 memcpy(dst, src.kernel, size);
 return 0;
}
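The size reconciliation above follows the copy_struct_from_user() contract: copy the common prefix, zero-fill the tail of the kernel struct when the user buffer is shorter, and reject a longer user buffer whose extra bytes are not all zero with -E2BIG. A hypothetical userspace model of that contract (`copy_struct_model` is my name, and it scans the bytes past the kernel size, i.e. src + size, which is the documented intent of that contract):

```c
/* Hypothetical userspace model of the struct size-reconciliation rules:
 *  - usize < ksize: zero-fill the kernel struct's tail;
 *  - usize > ksize: the trailing user bytes must all be zero, else -E2BIG;
 *  - the common prefix (min of the two sizes) is copied verbatim.
 */
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

static int copy_struct_model(void *dst, size_t ksize,
                             const void *src, size_t usize)
{
	size_t size = ksize < usize ? ksize : usize;
	size_t rest = (ksize > usize ? ksize : usize) - size;

	if (usize < ksize) {
		memset((char *)dst + size, 0, rest);
	} else if (usize > ksize) {
		const char *p = (const char *)src + size;

		while (rest--) {
			if (*p++)
				return -E2BIG;
		}
	}
	memcpy(dst, src, size);
	return 0;
}
```

A short user buffer thus behaves as if the missing fields were zero, which is how extensible syscall structs stay backward compatible.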

static inline int copy_to_sockptr_offset(sockptr_t dst, size_t offset,
  const void *src, size_t size)
{
 if (!sockptr_is_kernel(dst))
  return copy_to_user(dst.user + offset, src, size);
 memcpy(dst.kernel + offset, src, size);
 return 0;
}

static inline int copy_to_sockptr(sockptr_t dst, const void *src, size_t size)
{
 return copy_to_sockptr_offset(dst, 0, src, size);
}

static inline void *memdup_sockptr_noprof(sockptr_t src, size_t len)
{
 void *p = kmalloc_track_caller_noprof(len, GFP_USER | __GFP_NOWARN);

 if (!p)
  return ERR_PTR(-ENOMEM);
 if (copy_from_sockptr(p, src, len)) {
  kfree(p);
  return ERR_PTR(-EFAULT);
 }
 return p;
}


static inline void *memdup_sockptr_nul_noprof(sockptr_t src, size_t len)
{
 char *p = kmalloc_track_caller_noprof(len + 1, GFP_KERNEL);

 if (!p)
  return ERR_PTR(-ENOMEM);
 if (copy_from_sockptr(p, src, len)) {
  kfree(p);
  return ERR_PTR(-EFAULT);
 }
 p[len] = '\0';
 return p;
}


static inline long strncpy_from_sockptr(char *dst, sockptr_t src, size_t count)
{
 if (sockptr_is_kernel(src)) {
  size_t len = min(strnlen(src.kernel, count - 1) + 1, count);

  memcpy(dst, src.kernel, len);
  return len;
 }
 return strncpy_from_user(dst, src.user, count);
}
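The kernel-pointer branch above caps the copy at `min(strnlen(src, count - 1) + 1, count)` bytes: the NUL is included when it fits within count, and a longer source is silently truncated without termination. A hypothetical userspace model of just that length rule (`strncpy_model` is my name; note count must be nonzero or `count - 1` underflows):

```c
/* Hypothetical model of the kernel-pointer branch of the string copy
 * above: copy at most count bytes, including the '\0' when it fits;
 * the destination is NOT NUL-terminated when the source is longer
 * than count - 1 bytes. Returns the number of bytes copied. */
#include <assert.h>
#include <string.h>

static long strncpy_model(char *dst, const char *src, size_t count)
{
	size_t len = strnlen(src, count - 1) + 1;  /* bytes incl. '\0', capped */

	if (len > count)
		len = count;
	memcpy(dst, src, len);
	return (long)len;
}
```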

static inline int check_zeroed_sockptr(sockptr_t src, size_t offset,
           size_t size)
{
 if (!sockptr_is_kernel(src))
  return check_zeroed_user(src.user + offset, size);
 return memchr_inv(src.kernel + offset, 0, size) == NULL;
}
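memchr_inv() has no libc equivalent; it returns a pointer to the first byte that differs from the given value, or NULL when the whole range matches, which is how the kernel-side branch above tests for an all-zero span. A minimal stand-in (`memchr_inv_model` is my name):

```c
/* Minimal userspace stand-in for memchr_inv(): first byte != c, or NULL. */
#include <assert.h>
#include <stddef.h>

static const void *memchr_inv_model(const void *p, int c, size_t size)
{
	const unsigned char *s = p;
	size_t i;

	for (i = 0; i < size; i++)
		if (s[i] != (unsigned char)c)
			return s + i;
	return NULL;
}
```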
# 26 "../include/linux/net.h" 2

# 1 "../include/uapi/linux/net.h" 1
# 23 "../include/uapi/linux/net.h"
# 1 "./arch/hexagon/include/generated/uapi/asm/socket.h" 1
# 24 "../include/uapi/linux/net.h" 2
# 48 "../include/uapi/linux/net.h"
typedef enum {
 SS_FREE = 0,
 SS_UNCONNECTED,
 SS_CONNECTING,
 SS_CONNECTED,
 SS_DISCONNECTING
} socket_state;
# 28 "../include/linux/net.h" 2

struct poll_table_struct;
struct pipe_inode_info;
struct inode;
struct file;
struct net;
# 64 "../include/linux/net.h"
enum sock_type {
 SOCK_STREAM = 1,
 SOCK_DGRAM = 2,
 SOCK_RAW = 3,
 SOCK_RDM = 4,
 SOCK_SEQPACKET = 5,
 SOCK_DCCP = 6,
 SOCK_PACKET = 10,
};
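These sock_type values are the same SOCK_* constants exposed to userspace via <sys/socket.h>, so they can be checked and used directly with socket(2). A small unprivileged sketch (socket creation may still fail in a sandboxed environment, so the example tolerates that):

```c
/* The enum sock_type values above match the UAPI SOCK_* constants. */
#include <sys/socket.h>
#include <unistd.h>

static int open_dgram(void)
{
	/* SOCK_DGRAM == 2, per the enum above */
	return socket(AF_INET, SOCK_DGRAM, 0);
}
```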
# 93 "../include/linux/net.h"
enum sock_shutdown_cmd {
 SHUT_RD,
 SHUT_WR,
 SHUT_RDWR,
};

struct socket_wq {

 wait_queue_head_t wait;
 struct fasync_struct *fasync_list;
 unsigned long flags;
 struct callback_head rcu;
};
# 117 "../include/linux/net.h"
struct socket {
 socket_state state;

 short type;

 unsigned long flags;

 struct file *file;
 struct sock *sk;
 const struct proto_ops *ops;

 struct socket_wq wq;
};
# 140 "../include/linux/net.h"
typedef struct {
 size_t written;
 size_t count;
 union {
  char *buf;
  void *data;
 } arg;
 int error;
} read_descriptor_t;

struct vm_area_struct;
struct page;
struct sockaddr;
struct msghdr;
struct module;
struct sk_buff;
struct proto_accept_arg;
typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
          unsigned int, size_t);
typedef int (*skb_read_actor_t)(struct sock *, struct sk_buff *);


struct proto_ops {
 int family;
 struct module *owner;
 int (*release) (struct socket *sock);
 int (*bind) (struct socket *sock,
          struct sockaddr *myaddr,
          int sockaddr_len);
 int (*connect) (struct socket *sock,
          struct sockaddr *vaddr,
          int sockaddr_len, int flags);
 int (*socketpair)(struct socket *sock1,
          struct socket *sock2);
 int (*accept) (struct socket *sock,
          struct socket *newsock,
          struct proto_accept_arg *arg);
 int (*getname) (struct socket *sock,
          struct sockaddr *addr,
          int peer);
 __poll_t (*poll) (struct file *file, struct socket *sock,
          struct poll_table_struct *wait);
 int (*ioctl) (struct socket *sock, unsigned int cmd,
          unsigned long arg);




 int (*gettstamp) (struct socket *sock, void *userstamp,
          bool timeval, bool time32);
 int (*listen) (struct socket *sock, int len);
 int (*shutdown) (struct socket *sock, int flags);
 int (*setsockopt)(struct socket *sock, int level,
          int optname, sockptr_t optval,
          unsigned int optlen);
 int (*getsockopt)(struct socket *sock, int level,
          int optname, char *optval, int *optlen);
 void (*show_fdinfo)(struct seq_file *m, struct socket *sock);
 int (*sendmsg) (struct socket *sock, struct msghdr *m,
          size_t total_len);
# 208 "../include/linux/net.h"
 int (*recvmsg) (struct socket *sock, struct msghdr *m,
          size_t total_len, int flags);
 int (*mmap) (struct file *file, struct socket *sock,
          struct vm_area_struct * vma);
 ssize_t (*splice_read)(struct socket *sock, loff_t *ppos,
           struct pipe_inode_info *pipe, size_t len, unsigned int flags);
 void (*splice_eof)(struct socket *sock);
 int (*set_peek_off)(struct sock *sk, int val);
 int (*peek_len)(struct socket *sock);




 int (*read_sock)(struct sock *sk, read_descriptor_t *desc,
         sk_read_actor_t recv_actor);

 int (*read_skb)(struct sock *sk, skb_read_actor_t recv_actor);
 int (*sendmsg_locked)(struct sock *sk, struct msghdr *msg,
       size_t size);
 int (*set_rcvlowat)(struct sock *sk, int val);
};




struct net_proto_family {
 int family;
 int (*create)(struct net *net, struct socket *sock,
      int protocol, int kern);
 struct module *owner;
};

struct iovec;
struct kvec;

enum {
 SOCK_WAKE_IO,
 SOCK_WAKE_WAITD,
 SOCK_WAKE_SPACE,
 SOCK_WAKE_URG,
};

int sock_wake_async(struct socket_wq *sk_wq, int how, int band);
int sock_register(const struct net_proto_family *fam);
void sock_unregister(int family);
bool sock_is_registered(int family);
int __sock_create(struct net *net, int family, int type, int proto,
    struct socket **res, int kern);
int sock_create(int family, int type, int proto, struct socket **res);
int sock_create_kern(struct net *net, int family, int type, int proto, struct socket **res);
int sock_create_lite(int family, int type, int proto, struct socket **res);
struct socket *sock_alloc(void);
void sock_release(struct socket *sock);
int sock_sendmsg(struct socket *sock, struct msghdr *msg);
int sock_recvmsg(struct socket *sock, struct msghdr *msg, int flags);
struct file *sock_alloc_file(struct socket *sock, int flags, const char *dname);
struct socket *sockfd_lookup(int fd, int *err);
struct socket *sock_from_file(struct file *file);

int net_ratelimit(void);
# 320 "../include/linux/net.h"
static inline bool sendpage_ok(struct page *page)
{
 return !PageSlab(page) && page_count(page) >= 1;
}




static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
{
 struct page *p = page + (offset >> PAGE_SHIFT);
 size_t count = 0;

 while (count < len) {
  if (!sendpage_ok(p))
   return false;

  p++;
  count += PAGE_SIZE;
 }

 return true;
}
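The loop above visits one struct page per page-sized step of len; on this hexagon config PAGE_SHIFT is 14 (16 KiB pages), as the expanded constants show. A tiny model of how many pages the loop walks (`pages_visited` and the MODEL_* macros are my names):

```c
/* Model of the page-stepping loop above: ceil(len / PAGE_SIZE) pages,
 * assuming 16 KiB pages as in this hexagon build. */
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SHIFT 14
#define MODEL_PAGE_SIZE (1UL << MODEL_PAGE_SHIFT)

static unsigned long pages_visited(size_t len)
{
	unsigned long n = 0;
	size_t count = 0;

	while (count < len) {
		n++;
		count += MODEL_PAGE_SIZE;
	}
	return n;
}
```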

int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
     size_t num, size_t len);
int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
     struct kvec *vec, size_t num, size_t len);
int kernel_recvmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
     size_t num, size_t len, int flags);

int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen);
int kernel_listen(struct socket *sock, int backlog);
int kernel_accept(struct socket *sock, struct socket **newsock, int flags);
int kernel_connect(struct socket *sock, struct sockaddr *addr, int addrlen,
     int flags);
int kernel_getsockname(struct socket *sock, struct sockaddr *addr);
int kernel_getpeername(struct socket *sock, struct sockaddr *addr);
int kernel_sock_shutdown(struct socket *sock, enum sock_shutdown_cmd how);


u32 kernel_sock_ip_overhead(struct sock *sk);
# 7 "../include/net/scm.h" 2

# 1 "../include/linux/file.h" 1
# 15 "../include/linux/file.h"
struct file;

extern void fput(struct file *);

struct file_operations;
struct task_struct;
struct vfsmount;
struct dentry;
struct inode;
struct path;
extern struct file *alloc_file_pseudo(struct inode *, struct vfsmount *,
 const char *, int flags, const struct file_operations *);
extern struct file *alloc_file_pseudo_noaccount(struct inode *, struct vfsmount *,
 const char *, int flags, const struct file_operations *);
extern struct file *alloc_file_clone(struct file *, int flags,
 const struct file_operations *);

static inline void fput_light(struct file *file, int fput_needed)
{
 if (fput_needed)
  fput(file);
}

struct fd {
 struct file *file;
 unsigned int flags;
};



static inline void fdput(struct fd fd)
{
 if (fd.flags & 1)
  fput(fd.file);
}

extern struct file *fget(unsigned int fd);
extern struct file *fget_raw(unsigned int fd);
extern struct file *fget_task(struct task_struct *task, unsigned int fd);
extern unsigned long __fdget(unsigned int fd);
extern unsigned long __fdget_raw(unsigned int fd);
extern unsigned long __fdget_pos(unsigned int fd);
extern void __f_unlock_pos(struct file *);

static inline struct fd __to_fd(unsigned long v)
{
 return (struct fd){(struct file *)(v & ~3),v & 3};
}
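__to_fd() above recovers a struct file pointer and two flag bits from a single unsigned long: file pointers are at least 4-byte aligned, so the low two bits are free to carry flags (bit 0 and bit 1, checked as `flags & 1` and `flags & 2` by the fdput helpers below). A self-contained model of the pack/unpack round-trip (`fd_pack`/`fd_unpack`/`struct fd_model` are my names):

```c
/* Model of packing a pointer plus two flag bits into one word, as
 * __to_fd() does; relies on pointers being at least 4-byte aligned. */
#include <assert.h>
#include <stdint.h>

struct fd_model {
	void *file;
	unsigned int flags;
};

static unsigned long fd_pack(void *file, unsigned int flags)
{
	return (unsigned long)(uintptr_t)file | (flags & 3);
}

static struct fd_model fd_unpack(unsigned long v)
{
	return (struct fd_model){
		(void *)(uintptr_t)(v & ~3UL),  /* clear the flag bits */
		(unsigned int)(v & 3),
	};
}
```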

static inline struct fd fdget(unsigned int fd)
{
 return __to_fd(__fdget(fd));
}

static inline struct fd fdget_raw(unsigned int fd)
{
 return __to_fd(__fdget_raw(fd));
}

static inline struct fd fdget_pos(int fd)
{
 return __to_fd(__fdget_pos(fd));
}

static inline void fdput_pos(struct fd f)
{
 if (f.flags & 2)
  __f_unlock_pos(f.file);
 fdput(f);
}

typedef struct fd class_fd_t;
static inline void class_fd_destructor(struct fd *p)
{
 struct fd _T = *p;
 fdput(_T);
}
static inline struct fd class_fd_constructor(int fd)
{
 struct fd t = fdget(fd);
 return t;
}

typedef struct fd class_fd_raw_t;
static inline void class_fd_raw_destructor(struct fd *p)
{
 struct fd _T = *p;
 fdput(_T);
}
static inline struct fd class_fd_raw_constructor(int fd)
{
 struct fd t = fdget_raw(fd);
 return t;
}

extern int f_dupfd(unsigned int from, struct file *file, unsigned flags);
extern int replace_fd(unsigned fd, struct file *file, unsigned flags);
extern void set_close_on_exec(unsigned int fd, int flag);
extern bool get_close_on_exec(unsigned int fd);
extern int __get_unused_fd_flags(unsigned flags, unsigned long nofile);
extern int get_unused_fd_flags(unsigned flags);
extern void put_unused_fd(unsigned int fd);

typedef int class_get_unused_fd_t;
static inline void class_get_unused_fd_destructor(int *p)
{
 int _T = *p;
 if (_T >= 0)
  put_unused_fd(_T);
}
static inline int class_get_unused_fd_constructor(unsigned flags)
{
 int t = get_unused_fd_flags(flags);
 return t;
}
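The class_*_constructor/destructor pairs above are what the kernel's DEFINE_CLASS()/CLASS() scope-guard macros expand to; at the use site they are wired up with the compiler's cleanup attribute so the destructor runs automatically when the variable leaves scope. A standalone GCC/Clang sketch of the same pattern (names are mine):

```c
/* Scope-guard via __attribute__((cleanup)): release_obj() runs
 * automatically when obj goes out of scope, even on early return. */
#include <assert.h>
#include <stdlib.h>

static int live_objects;

static void release_obj(int **p)
{
	free(*p);		/* free(NULL) is a no-op */
	live_objects--;
}

static int make_and_drop(void)
{
	__attribute__((cleanup(release_obj))) int *obj = malloc(sizeof(*obj));

	live_objects++;
	if (!obj)
		return -1;
	*obj = 42;
	return *obj;	/* cleanup fires after the return value is computed */
}
```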
# 120 "../include/linux/file.h"
extern void fd_install(unsigned int fd, struct file *file);

int receive_fd(struct file *file, int *ufd, unsigned int o_flags);

int receive_fd_replace(int new_fd, struct file *file, unsigned int o_flags);

extern void flush_delayed_fput(void);
extern void __fput_sync(struct file *);

extern unsigned int sysctl_nr_open_min, sysctl_nr_open_max;
# 9 "../include/net/scm.h" 2
# 1 "../include/linux/security.h" 1
# 26 "../include/linux/security.h"
# 1 "../include/linux/kernel_read_file.h" 1
# 22 "../include/linux/kernel_read_file.h"
enum kernel_read_file_id {
 READING_UNKNOWN,
 READING_FIRMWARE,
 READING_MODULE,
 READING_KEXEC_IMAGE,
 READING_KEXEC_INITRAMFS,
 READING_POLICY,
 READING_X509_CERTIFICATE,
 READING_MAX_ID,
};

static const char * const kernel_read_file_str[] = {
 "unknown",
 "firmware",
 "kernel-module",
 "kexec-image",
 "kexec-initramfs",
 "security-policy",
 "x509-certificate",
 "",
};

static inline const char *kernel_read_file_id_str(enum kernel_read_file_id id)
{
 if ((unsigned int)id >= READING_MAX_ID)
  return kernel_read_file_str[READING_UNKNOWN];

 return kernel_read_file_str[id];
}

ssize_t kernel_read_file(struct file *file, loff_t offset,
    void **buf, size_t buf_size,
    size_t *file_size,
    enum kernel_read_file_id id);
ssize_t kernel_read_file_from_path(const char *path, loff_t offset,
       void **buf, size_t buf_size,
       size_t *file_size,
       enum kernel_read_file_id id);
ssize_t kernel_read_file_from_path_initns(const char *path, loff_t offset,
       void **buf, size_t buf_size,
       size_t *file_size,
       enum kernel_read_file_id id);
ssize_t kernel_read_file_from_fd(int fd, loff_t offset,
     void **buf, size_t buf_size,
     size_t *file_size,
     enum kernel_read_file_id id);
# 27 "../include/linux/security.h" 2








# 1 "../include/linux/bpf.h" 1






# 1 "../include/uapi/linux/bpf.h" 1
# 12 "../include/uapi/linux/bpf.h"
# 1 "../include/uapi/linux/bpf_common.h" 1
# 13 "../include/uapi/linux/bpf.h" 2
# 54 "../include/uapi/linux/bpf.h"
enum bpf_cond_pseudo_jmp {
 BPF_MAY_GOTO = 0,
};


enum {
 BPF_REG_0 = 0,
 BPF_REG_1,
 BPF_REG_2,
 BPF_REG_3,
 BPF_REG_4,
 BPF_REG_5,
 BPF_REG_6,
 BPF_REG_7,
 BPF_REG_8,
 BPF_REG_9,
 BPF_REG_10,
 __MAX_BPF_REG,
};




struct bpf_insn {
 __u8 code;
 __u8 dst_reg:4;
 __u8 src_reg:4;
 __s16 off;
 __s32 imm;
};
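struct bpf_insn above packs an opcode, two 4-bit register fields, a 16-bit offset, and a 32-bit immediate into exactly 8 bytes. A local mirror of the layout encoding `r0 = 1` as eBPF bytecode (0xb7 is BPF_ALU64|BPF_MOV|BPF_K, the standard MOV64_IMM opcode; `struct insn_model`/`mov64_imm` are my names):

```c
/* Mirror of the bpf_insn layout: 1 opcode byte, two 4-bit register
 * fields sharing a byte, s16 offset, s32 immediate — 8 bytes total. */
#include <assert.h>
#include <stdint.h>

struct insn_model {
	uint8_t code;
	uint8_t dst_reg:4;
	uint8_t src_reg:4;
	int16_t off;
	int32_t imm;
};

static struct insn_model mov64_imm(uint8_t dst, int32_t imm)
{
	/* 0xb7 = BPF_ALU64 | BPF_MOV | BPF_K */
	return (struct insn_model){ .code = 0xb7, .dst_reg = dst, .imm = imm };
}
```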





struct bpf_lpm_trie_key {
 __u32 prefixlen;
 __u8 data[0];
};


struct bpf_lpm_trie_key_hdr {
 __u32 prefixlen;
};


struct bpf_lpm_trie_key_u8 {
 union {
  struct bpf_lpm_trie_key_hdr hdr;
  __u32 prefixlen;
 };
 __u8 data[];
};
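bpf_lpm_trie_key_u8 above uses the header-struct idiom that the failing static_assert() in this report checks for uverbs_attr_bundle: every member before the flexible array is mirrored in a *_hdr struct (here through an anonymous union), so `offsetof(full, flex_member) == sizeof(hdr)` must hold, or a member has slipped outside the grouped header. A standalone illustration of that invariant (`key_hdr`/`key_u8` are my names):

```c
/* Header-struct idiom: the flexible array must start exactly where the
 * header struct ends, which a compile-time assert can verify. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct key_hdr {
	uint32_t prefixlen;
};

struct key_u8 {
	union {
		struct key_hdr hdr;
		uint32_t prefixlen;
	};
	uint8_t data[];
};

_Static_assert(offsetof(struct key_u8, data) == sizeof(struct key_hdr),
	       "struct member likely outside the header struct");
```

This is the same shape of check that produced the error quoted at the top of this report.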

struct bpf_cgroup_storage_key {
 __u64 cgroup_inode_id;
 __u32 attach_type;
};

enum bpf_cgroup_iter_order {
 BPF_CGROUP_ITER_ORDER_UNSPEC = 0,
 BPF_CGROUP_ITER_SELF_ONLY,
 BPF_CGROUP_ITER_DESCENDANTS_PRE,
 BPF_CGROUP_ITER_DESCENDANTS_POST,
 BPF_CGROUP_ITER_ANCESTORS_UP,
};

union bpf_iter_link_info {
 struct {
  __u32 map_fd;
 } map;
 struct {
  enum bpf_cgroup_iter_order order;






  __u32 cgroup_fd;
  __u64 cgroup_id;
 } cgroup;

 struct {
  __u32 tid;
  __u32 pid;
  __u32 pid_fd;
 } task;
};
# 922 "../include/uapi/linux/bpf.h"
enum bpf_cmd {
 BPF_MAP_CREATE,
 BPF_MAP_LOOKUP_ELEM,
 BPF_MAP_UPDATE_ELEM,
 BPF_MAP_DELETE_ELEM,
 BPF_MAP_GET_NEXT_KEY,
 BPF_PROG_LOAD,
 BPF_OBJ_PIN,
 BPF_OBJ_GET,
 BPF_PROG_ATTACH,
 BPF_PROG_DETACH,
 BPF_PROG_TEST_RUN,
 BPF_PROG_RUN = BPF_PROG_TEST_RUN,
 BPF_PROG_GET_NEXT_ID,
 BPF_MAP_GET_NEXT_ID,
 BPF_PROG_GET_FD_BY_ID,
 BPF_MAP_GET_FD_BY_ID,
 BPF_OBJ_GET_INFO_BY_FD,
 BPF_PROG_QUERY,
 BPF_RAW_TRACEPOINT_OPEN,
 BPF_BTF_LOAD,
 BPF_BTF_GET_FD_BY_ID,
 BPF_TASK_FD_QUERY,
 BPF_MAP_LOOKUP_AND_DELETE_ELEM,
 BPF_MAP_FREEZE,
 BPF_BTF_GET_NEXT_ID,
 BPF_MAP_LOOKUP_BATCH,
 BPF_MAP_LOOKUP_AND_DELETE_BATCH,
 BPF_MAP_UPDATE_BATCH,
 BPF_MAP_DELETE_BATCH,
 BPF_LINK_CREATE,
 BPF_LINK_UPDATE,
 BPF_LINK_GET_FD_BY_ID,
 BPF_LINK_GET_NEXT_ID,
 BPF_ENABLE_STATS,
 BPF_ITER_CREATE,
 BPF_LINK_DETACH,
 BPF_PROG_BIND_MAP,
 BPF_TOKEN_CREATE,
 __MAX_BPF_CMD,
};

enum bpf_map_type {
 BPF_MAP_TYPE_UNSPEC,
 BPF_MAP_TYPE_HASH,
 BPF_MAP_TYPE_ARRAY,
 BPF_MAP_TYPE_PROG_ARRAY,
 BPF_MAP_TYPE_PERF_EVENT_ARRAY,
 BPF_MAP_TYPE_PERCPU_HASH,
 BPF_MAP_TYPE_PERCPU_ARRAY,
 BPF_MAP_TYPE_STACK_TRACE,
 BPF_MAP_TYPE_CGROUP_ARRAY,
 BPF_MAP_TYPE_LRU_HASH,
 BPF_MAP_TYPE_LRU_PERCPU_HASH,
 BPF_MAP_TYPE_LPM_TRIE,
 BPF_MAP_TYPE_ARRAY_OF_MAPS,
 BPF_MAP_TYPE_HASH_OF_MAPS,
 BPF_MAP_TYPE_DEVMAP,
 BPF_MAP_TYPE_SOCKMAP,
 BPF_MAP_TYPE_CPUMAP,
 BPF_MAP_TYPE_XSKMAP,
 BPF_MAP_TYPE_SOCKHASH,
 BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,






 BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
 BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
 BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,






 BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE = BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,
 BPF_MAP_TYPE_QUEUE,
 BPF_MAP_TYPE_STACK,
 BPF_MAP_TYPE_SK_STORAGE,
 BPF_MAP_TYPE_DEVMAP_HASH,
 BPF_MAP_TYPE_STRUCT_OPS,
 BPF_MAP_TYPE_RINGBUF,
 BPF_MAP_TYPE_INODE_STORAGE,
 BPF_MAP_TYPE_TASK_STORAGE,
 BPF_MAP_TYPE_BLOOM_FILTER,
 BPF_MAP_TYPE_USER_RINGBUF,
 BPF_MAP_TYPE_CGRP_STORAGE,
 BPF_MAP_TYPE_ARENA,
 __MAX_BPF_MAP_TYPE
};
# 1024 "../include/uapi/linux/bpf.h"
enum bpf_prog_type {
 BPF_PROG_TYPE_UNSPEC,
 BPF_PROG_TYPE_SOCKET_FILTER,
 BPF_PROG_TYPE_KPROBE,
 BPF_PROG_TYPE_SCHED_CLS,
 BPF_PROG_TYPE_SCHED_ACT,
 BPF_PROG_TYPE_TRACEPOINT,
 BPF_PROG_TYPE_XDP,
 BPF_PROG_TYPE_PERF_EVENT,
 BPF_PROG_TYPE_CGROUP_SKB,
 BPF_PROG_TYPE_CGROUP_SOCK,
 BPF_PROG_TYPE_LWT_IN,
 BPF_PROG_TYPE_LWT_OUT,
 BPF_PROG_TYPE_LWT_XMIT,
 BPF_PROG_TYPE_SOCK_OPS,
 BPF_PROG_TYPE_SK_SKB,
 BPF_PROG_TYPE_CGROUP_DEVICE,
 BPF_PROG_TYPE_SK_MSG,
 BPF_PROG_TYPE_RAW_TRACEPOINT,
 BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
 BPF_PROG_TYPE_LWT_SEG6LOCAL,
 BPF_PROG_TYPE_LIRC_MODE2,
 BPF_PROG_TYPE_SK_REUSEPORT,
 BPF_PROG_TYPE_FLOW_DISSECTOR,
 BPF_PROG_TYPE_CGROUP_SYSCTL,
 BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
 BPF_PROG_TYPE_CGROUP_SOCKOPT,
 BPF_PROG_TYPE_TRACING,
 BPF_PROG_TYPE_STRUCT_OPS,
 BPF_PROG_TYPE_EXT,
 BPF_PROG_TYPE_LSM,
 BPF_PROG_TYPE_SK_LOOKUP,
 BPF_PROG_TYPE_SYSCALL,
 BPF_PROG_TYPE_NETFILTER,
 __MAX_BPF_PROG_TYPE
};

enum bpf_attach_type {
 BPF_CGROUP_INET_INGRESS,
 BPF_CGROUP_INET_EGRESS,
 BPF_CGROUP_INET_SOCK_CREATE,
 BPF_CGROUP_SOCK_OPS,
 BPF_SK_SKB_STREAM_PARSER,
 BPF_SK_SKB_STREAM_VERDICT,
 BPF_CGROUP_DEVICE,
 BPF_SK_MSG_VERDICT,
 BPF_CGROUP_INET4_BIND,
 BPF_CGROUP_INET6_BIND,
 BPF_CGROUP_INET4_CONNECT,
 BPF_CGROUP_INET6_CONNECT,
 BPF_CGROUP_INET4_POST_BIND,
 BPF_CGROUP_INET6_POST_BIND,
 BPF_CGROUP_UDP4_SENDMSG,
 BPF_CGROUP_UDP6_SENDMSG,
 BPF_LIRC_MODE2,
 BPF_FLOW_DISSECTOR,
 BPF_CGROUP_SYSCTL,
 BPF_CGROUP_UDP4_RECVMSG,
 BPF_CGROUP_UDP6_RECVMSG,
 BPF_CGROUP_GETSOCKOPT,
 BPF_CGROUP_SETSOCKOPT,
 BPF_TRACE_RAW_TP,
 BPF_TRACE_FENTRY,
 BPF_TRACE_FEXIT,
 BPF_MODIFY_RETURN,
 BPF_LSM_MAC,
 BPF_TRACE_ITER,
 BPF_CGROUP_INET4_GETPEERNAME,
 BPF_CGROUP_INET6_GETPEERNAME,
 BPF_CGROUP_INET4_GETSOCKNAME,
 BPF_CGROUP_INET6_GETSOCKNAME,
 BPF_XDP_DEVMAP,
 BPF_CGROUP_INET_SOCK_RELEASE,
 BPF_XDP_CPUMAP,
 BPF_SK_LOOKUP,
 BPF_XDP,
 BPF_SK_SKB_VERDICT,
 BPF_SK_REUSEPORT_SELECT,
 BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
 BPF_PERF_EVENT,
 BPF_TRACE_KPROBE_MULTI,
 BPF_LSM_CGROUP,
 BPF_STRUCT_OPS,
 BPF_NETFILTER,
 BPF_TCX_INGRESS,
 BPF_TCX_EGRESS,
 BPF_TRACE_UPROBE_MULTI,
 BPF_CGROUP_UNIX_CONNECT,
 BPF_CGROUP_UNIX_SENDMSG,
 BPF_CGROUP_UNIX_RECVMSG,
 BPF_CGROUP_UNIX_GETPEERNAME,
 BPF_CGROUP_UNIX_GETSOCKNAME,
 BPF_NETKIT_PRIMARY,
 BPF_NETKIT_PEER,
 BPF_TRACE_KPROBE_SESSION,
 __MAX_BPF_ATTACH_TYPE
};



enum bpf_link_type {
 BPF_LINK_TYPE_UNSPEC = 0,
 BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
 BPF_LINK_TYPE_TRACING = 2,
 BPF_LINK_TYPE_CGROUP = 3,
 BPF_LINK_TYPE_ITER = 4,
 BPF_LINK_TYPE_NETNS = 5,
 BPF_LINK_TYPE_XDP = 6,
 BPF_LINK_TYPE_PERF_EVENT = 7,
 BPF_LINK_TYPE_KPROBE_MULTI = 8,
 BPF_LINK_TYPE_STRUCT_OPS = 9,
 BPF_LINK_TYPE_NETFILTER = 10,
 BPF_LINK_TYPE_TCX = 11,
 BPF_LINK_TYPE_UPROBE_MULTI = 12,
 BPF_LINK_TYPE_NETKIT = 13,
 BPF_LINK_TYPE_SOCKMAP = 14,
 __MAX_BPF_LINK_TYPE,
};



enum bpf_perf_event_type {
 BPF_PERF_EVENT_UNSPEC = 0,
 BPF_PERF_EVENT_UPROBE = 1,
 BPF_PERF_EVENT_URETPROBE = 2,
 BPF_PERF_EVENT_KPROBE = 3,
 BPF_PERF_EVENT_KRETPROBE = 4,
 BPF_PERF_EVENT_TRACEPOINT = 5,
 BPF_PERF_EVENT_EVENT = 6,
};
# 1274 "../include/uapi/linux/bpf.h"
enum {
 BPF_F_KPROBE_MULTI_RETURN = (1U << 0)
};




enum {
 BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
};
# 1344 "../include/uapi/linux/bpf.h"
enum bpf_addr_space_cast {
 BPF_ADDR_SPACE_CAST = 1,
};


enum {
 BPF_ANY = 0,
 BPF_NOEXIST = 1,
 BPF_EXIST = 2,
 BPF_F_LOCK = 4,
};


enum {
 BPF_F_NO_PREALLOC = (1U << 0),






 BPF_F_NO_COMMON_LRU = (1U << 1),

 BPF_F_NUMA_NODE = (1U << 2),


 BPF_F_RDONLY = (1U << 3),
 BPF_F_WRONLY = (1U << 4),


 BPF_F_STACK_BUILD_ID = (1U << 5),


 BPF_F_ZERO_SEED = (1U << 6),


 BPF_F_RDONLY_PROG = (1U << 7),
 BPF_F_WRONLY_PROG = (1U << 8),


 BPF_F_CLONE = (1U << 9),


 BPF_F_MMAPABLE = (1U << 10),


 BPF_F_PRESERVE_ELEMS = (1U << 11),


 BPF_F_INNER_MAP = (1U << 12),


 BPF_F_LINK = (1U << 13),


 BPF_F_PATH_FD = (1U << 14),


 BPF_F_VTYPE_BTF_OBJ_FD = (1U << 15),


 BPF_F_TOKEN_FD = (1U << 16),


 BPF_F_SEGV_ON_FAULT = (1U << 17),


 BPF_F_NO_USER_CONV = (1U << 18),
};
# 1432 "../include/uapi/linux/bpf.h"
enum bpf_stats_type {

 BPF_STATS_RUN_TIME = 0,
};

enum bpf_stack_build_id_status {

 BPF_STACK_BUILD_ID_EMPTY = 0,

 BPF_STACK_BUILD_ID_VALID = 1,

 BPF_STACK_BUILD_ID_IP = 2,
};


struct bpf_stack_build_id {
 __s32 status;
 unsigned char build_id[20];
 union {
  __u64 offset;
  __u64 ip;
 };
};



union bpf_attr {
 struct {
  __u32 map_type;
  __u32 key_size;
  __u32 value_size;
  __u32 max_entries;
  __u32 map_flags;


  __u32 inner_map_fd;
  __u32 numa_node;


  char map_name[16U];
  __u32 map_ifindex;
  __u32 btf_fd;
  __u32 btf_key_type_id;
  __u32 btf_value_type_id;
  __u32 btf_vmlinux_value_type_id;
# 1489 "../include/uapi/linux/bpf.h"
  __u64 map_extra;

  __s32 value_type_btf_obj_fd;






  __s32 map_token_fd;
 };

 struct {
  __u32 map_fd;
  __u64 __attribute__((aligned(8))) key;
  union {
   __u64 __attribute__((aligned(8))) value;
   __u64 __attribute__((aligned(8))) next_key;
  };
  __u64 flags;
 };

 struct {
  __u64 __attribute__((aligned(8))) in_batch;


  __u64 __attribute__((aligned(8))) out_batch;
  __u64 __attribute__((aligned(8))) keys;
  __u64 __attribute__((aligned(8))) values;
  __u32 count;




  __u32 map_fd;
  __u64 elem_flags;
  __u64 flags;
 } batch;

 struct {
  __u32 prog_type;
  __u32 insn_cnt;
  __u64 __attribute__((aligned(8))) insns;
  __u64 __attribute__((aligned(8))) license;
  __u32 log_level;
  __u32 log_size;
  __u64 __attribute__((aligned(8))) log_buf;
  __u32 kern_version;
  __u32 prog_flags;
  char prog_name[16U];
  __u32 prog_ifindex;




  __u32 expected_attach_type;
  __u32 prog_btf_fd;
  __u32 func_info_rec_size;
  __u64 __attribute__((aligned(8))) func_info;
  __u32 func_info_cnt;
  __u32 line_info_rec_size;
  __u64 __attribute__((aligned(8))) line_info;
  __u32 line_info_cnt;
  __u32 attach_btf_id;
  union {

   __u32 attach_prog_fd;

   __u32 attach_btf_obj_fd;
  };
  __u32 core_relo_cnt;
  __u64 __attribute__((aligned(8))) fd_array;
  __u64 __attribute__((aligned(8))) core_relos;
  __u32 core_relo_rec_size;




  __u32 log_true_size;



  __s32 prog_token_fd;
 };

 struct {
  __u64 __attribute__((aligned(8))) pathname;
  __u32 bpf_fd;
  __u32 file_flags;






  __s32 path_fd;
 };

 struct {
  union {
   __u32 target_fd;
   __u32 target_ifindex;
  };
  __u32 attach_bpf_fd;
  __u32 attach_type;
  __u32 attach_flags;
  __u32 replace_bpf_fd;
  union {
   __u32 relative_fd;
   __u32 relative_id;
  };
  __u64 expected_revision;
 };

 struct {
  __u32 prog_fd;
  __u32 retval;
  __u32 data_size_in;
  __u32 data_size_out;



  __u64 __attribute__((aligned(8))) data_in;
  __u64 __attribute__((aligned(8))) data_out;
  __u32 repeat;
  __u32 duration;
  __u32 ctx_size_in;
  __u32 ctx_size_out;



  __u64 __attribute__((aligned(8))) ctx_in;
  __u64 __attribute__((aligned(8))) ctx_out;
  __u32 flags;
  __u32 cpu;
  __u32 batch_size;
 } test;

 struct {
  union {
   __u32 start_id;
   __u32 prog_id;
   __u32 map_id;
   __u32 btf_id;
   __u32 link_id;
  };
  __u32 next_id;
  __u32 open_flags;
 };

 struct {
  __u32 bpf_fd;
  __u32 info_len;
  __u64 __attribute__((aligned(8))) info;
 } info;

 struct {
  union {
   __u32 target_fd;
   __u32 target_ifindex;
  };
  __u32 attach_type;
  __u32 query_flags;
  __u32 attach_flags;
  __u64 __attribute__((aligned(8))) prog_ids;
  union {
   __u32 prog_cnt;
   __u32 count;
  };
  __u32 :32;



  __u64 __attribute__((aligned(8))) prog_attach_flags;
  __u64 __attribute__((aligned(8))) link_ids;
  __u64 __attribute__((aligned(8))) link_attach_flags;
  __u64 revision;
 } query;

 struct {
  __u64 name;
  __u32 prog_fd;
  __u32 :32;
  __u64 __attribute__((aligned(8))) cookie;
 } raw_tracepoint;

 struct {
  __u64 __attribute__((aligned(8))) btf;
  __u64 __attribute__((aligned(8))) btf_log_buf;
  __u32 btf_size;
  __u32 btf_log_size;
  __u32 btf_log_level;




  __u32 btf_log_true_size;
  __u32 btf_flags;



  __s32 btf_token_fd;
 };

 struct {
  __u32 pid;
  __u32 fd;
  __u32 flags;
  __u32 buf_len;
  __u64 __attribute__((aligned(8))) buf;




  __u32 prog_id;
  __u32 fd_type;
  __u64 probe_offset;
  __u64 probe_addr;
 } task_fd_query;

 struct {
  union {
   __u32 prog_fd;
   __u32 map_fd;
  };
  union {
   __u32 target_fd;
   __u32 target_ifindex;
  };
  __u32 attach_type;
  __u32 flags;
  union {
   __u32 target_btf_id;
   struct {
    __u64 __attribute__((aligned(8))) iter_info;
    __u32 iter_info_len;
   };
   struct {




    __u64 bpf_cookie;
   } perf_event;
   struct {
    __u32 flags;
    __u32 cnt;
    __u64 __attribute__((aligned(8))) syms;
    __u64 __attribute__((aligned(8))) addrs;
    __u64 __attribute__((aligned(8))) cookies;
   } kprobe_multi;
   struct {

    __u32 target_btf_id;




    __u64 cookie;
   } tracing;
   struct {
    __u32 pf;
    __u32 hooknum;
    __s32 priority;
    __u32 flags;
   } netfilter;
   struct {
    union {
     __u32 relative_fd;
     __u32 relative_id;
    };
    __u64 expected_revision;
   } tcx;
   struct {
    __u64 __attribute__((aligned(8))) path;
    __u64 __attribute__((aligned(8))) offsets;
    __u64 __attribute__((aligned(8))) ref_ctr_offsets;
    __u64 __attribute__((aligned(8))) cookies;
    __u32 cnt;
    __u32 flags;
    __u32 pid;
   } uprobe_multi;
   struct {
    union {
     __u32 relative_fd;
     __u32 relative_id;
    };
    __u64 expected_revision;
   } netkit;
  };
 } link_create;

 struct {
  __u32 link_fd;
  union {

   __u32 new_prog_fd;

   __u32 new_map_fd;
  };
  __u32 flags;
  union {



   __u32 old_prog_fd;



   __u32 old_map_fd;
  };
 } link_update;

 struct {
  __u32 link_fd;
 } link_detach;

 struct {
  __u32 type;
 } enable_stats;

 struct {
  __u32 link_fd;
  __u32 flags;
 } iter_create;

 struct {
  __u32 prog_fd;
  __u32 map_fd;
  __u32 flags;
 } prog_bind_map;

 struct {
  __u32 flags;
  __u32 bpffs_fd;
 } token_create;

} __attribute__((aligned(8)));
# 6021 "../include/uapi/linux/bpf.h"
enum bpf_func_id {
 BPF_FUNC_unspec = 0,
 BPF_FUNC_map_lookup_elem = 1,
 BPF_FUNC_map_update_elem = 2,
 BPF_FUNC_map_delete_elem = 3,
 BPF_FUNC_probe_read = 4,
 BPF_FUNC_ktime_get_ns = 5,
 BPF_FUNC_trace_printk = 6,
 BPF_FUNC_get_prandom_u32 = 7,
 BPF_FUNC_get_smp_processor_id = 8,
 BPF_FUNC_skb_store_bytes = 9,
 BPF_FUNC_l3_csum_replace = 10,
 BPF_FUNC_l4_csum_replace = 11,
 BPF_FUNC_tail_call = 12,
 BPF_FUNC_clone_redirect = 13,
 BPF_FUNC_get_current_pid_tgid = 14,
 BPF_FUNC_get_current_uid_gid = 15,
 BPF_FUNC_get_current_comm = 16,
 BPF_FUNC_get_cgroup_classid = 17,
 BPF_FUNC_skb_vlan_push = 18,
 BPF_FUNC_skb_vlan_pop = 19,
 BPF_FUNC_skb_get_tunnel_key = 20,
 BPF_FUNC_skb_set_tunnel_key = 21,
 BPF_FUNC_perf_event_read = 22,
 BPF_FUNC_redirect = 23,
 BPF_FUNC_get_route_realm = 24,
 BPF_FUNC_perf_event_output = 25,
 BPF_FUNC_skb_load_bytes = 26,
 BPF_FUNC_get_stackid = 27,
 BPF_FUNC_csum_diff = 28,
 BPF_FUNC_skb_get_tunnel_opt = 29,
 BPF_FUNC_skb_set_tunnel_opt = 30,
 BPF_FUNC_skb_change_proto = 31,
 BPF_FUNC_skb_change_type = 32,
 BPF_FUNC_skb_under_cgroup = 33,
 BPF_FUNC_get_hash_recalc = 34,
 BPF_FUNC_get_current_task = 35,
 BPF_FUNC_probe_write_user = 36,
 BPF_FUNC_current_task_under_cgroup = 37,
 BPF_FUNC_skb_change_tail = 38,
 BPF_FUNC_skb_pull_data = 39,
 BPF_FUNC_csum_update = 40,
 BPF_FUNC_set_hash_invalid = 41,
 BPF_FUNC_get_numa_node_id = 42,
 BPF_FUNC_skb_change_head = 43,
 BPF_FUNC_xdp_adjust_head = 44,
 BPF_FUNC_probe_read_str = 45,
 BPF_FUNC_get_socket_cookie = 46,
 BPF_FUNC_get_socket_uid = 47,
 BPF_FUNC_set_hash = 48,
 BPF_FUNC_setsockopt = 49,
 BPF_FUNC_skb_adjust_room = 50,
 BPF_FUNC_redirect_map = 51,
 BPF_FUNC_sk_redirect_map = 52,
 BPF_FUNC_sock_map_update = 53,
 BPF_FUNC_xdp_adjust_meta = 54,
 BPF_FUNC_perf_event_read_value = 55,
 BPF_FUNC_perf_prog_read_value = 56,
 BPF_FUNC_getsockopt = 57,
 BPF_FUNC_override_return = 58,
 BPF_FUNC_sock_ops_cb_flags_set = 59,
 BPF_FUNC_msg_redirect_map = 60,
 BPF_FUNC_msg_apply_bytes = 61,
 BPF_FUNC_msg_cork_bytes = 62,
 BPF_FUNC_msg_pull_data = 63,
 BPF_FUNC_bind = 64,
 BPF_FUNC_xdp_adjust_tail = 65,
 BPF_FUNC_skb_get_xfrm_state = 66,
 BPF_FUNC_get_stack = 67,
 BPF_FUNC_skb_load_bytes_relative = 68,
 BPF_FUNC_fib_lookup = 69,
 BPF_FUNC_sock_hash_update = 70,
 BPF_FUNC_msg_redirect_hash = 71,
 BPF_FUNC_sk_redirect_hash = 72,
 BPF_FUNC_lwt_push_encap = 73,
 BPF_FUNC_lwt_seg6_store_bytes = 74,
 BPF_FUNC_lwt_seg6_adjust_srh = 75,
 BPF_FUNC_lwt_seg6_action = 76,
 BPF_FUNC_rc_repeat = 77,
 BPF_FUNC_rc_keydown = 78,
 BPF_FUNC_skb_cgroup_id = 79,
 BPF_FUNC_get_current_cgroup_id = 80,
 BPF_FUNC_get_local_storage = 81,
 BPF_FUNC_sk_select_reuseport = 82,
 BPF_FUNC_skb_ancestor_cgroup_id = 83,
 BPF_FUNC_sk_lookup_tcp = 84,
 BPF_FUNC_sk_lookup_udp = 85,
 BPF_FUNC_sk_release = 86,
 BPF_FUNC_map_push_elem = 87,
 BPF_FUNC_map_pop_elem = 88,
 BPF_FUNC_map_peek_elem = 89,
 BPF_FUNC_msg_push_data = 90,
 BPF_FUNC_msg_pop_data = 91,
 BPF_FUNC_rc_pointer_rel = 92,
 BPF_FUNC_spin_lock = 93,
 BPF_FUNC_spin_unlock = 94,
 BPF_FUNC_sk_fullsock = 95,
 BPF_FUNC_tcp_sock = 96,
 BPF_FUNC_skb_ecn_set_ce = 97,
 BPF_FUNC_get_listener_sock = 98,
 BPF_FUNC_skc_lookup_tcp = 99,
 BPF_FUNC_tcp_check_syncookie = 100,
 BPF_FUNC_sysctl_get_name = 101,
 BPF_FUNC_sysctl_get_current_value = 102,
 BPF_FUNC_sysctl_get_new_value = 103,
 BPF_FUNC_sysctl_set_new_value = 104,
 BPF_FUNC_strtol = 105,
 BPF_FUNC_strtoul = 106,
 BPF_FUNC_sk_storage_get = 107,
 BPF_FUNC_sk_storage_delete = 108,
 BPF_FUNC_send_signal = 109,
 BPF_FUNC_tcp_gen_syncookie = 110,
 BPF_FUNC_skb_output = 111,
 BPF_FUNC_probe_read_user = 112,
 BPF_FUNC_probe_read_kernel = 113,
 BPF_FUNC_probe_read_user_str = 114,
 BPF_FUNC_probe_read_kernel_str = 115,
 BPF_FUNC_tcp_send_ack = 116,
 BPF_FUNC_send_signal_thread = 117,
 BPF_FUNC_jiffies64 = 118,
 BPF_FUNC_read_branch_records = 119,
 BPF_FUNC_get_ns_current_pid_tgid = 120,
 BPF_FUNC_xdp_output = 121,
 BPF_FUNC_get_netns_cookie = 122,
 BPF_FUNC_get_current_ancestor_cgroup_id = 123,
 BPF_FUNC_sk_assign = 124,
 BPF_FUNC_ktime_get_boot_ns = 125,
 BPF_FUNC_seq_printf = 126,
 BPF_FUNC_seq_write = 127,
 BPF_FUNC_sk_cgroup_id = 128,
 BPF_FUNC_sk_ancestor_cgroup_id = 129,
 BPF_FUNC_ringbuf_output = 130,
 BPF_FUNC_ringbuf_reserve = 131,
 BPF_FUNC_ringbuf_submit = 132,
 BPF_FUNC_ringbuf_discard = 133,
 BPF_FUNC_ringbuf_query = 134,
 BPF_FUNC_csum_level = 135,
 BPF_FUNC_skc_to_tcp6_sock = 136,
 BPF_FUNC_skc_to_tcp_sock = 137,
 BPF_FUNC_skc_to_tcp_timewait_sock = 138,
 BPF_FUNC_skc_to_tcp_request_sock = 139,
 BPF_FUNC_skc_to_udp6_sock = 140,
 BPF_FUNC_get_task_stack = 141,
 BPF_FUNC_load_hdr_opt = 142,
 BPF_FUNC_store_hdr_opt = 143,
 BPF_FUNC_reserve_hdr_opt = 144,
 BPF_FUNC_inode_storage_get = 145,
 BPF_FUNC_inode_storage_delete = 146,
 BPF_FUNC_d_path = 147,
 BPF_FUNC_copy_from_user = 148,
 BPF_FUNC_snprintf_btf = 149,
 BPF_FUNC_seq_printf_btf = 150,
 BPF_FUNC_skb_cgroup_classid = 151,
 BPF_FUNC_redirect_neigh = 152,
 BPF_FUNC_per_cpu_ptr = 153,
 BPF_FUNC_this_cpu_ptr = 154,
 BPF_FUNC_redirect_peer = 155,
 BPF_FUNC_task_storage_get = 156,
 BPF_FUNC_task_storage_delete = 157,
 BPF_FUNC_get_current_task_btf = 158,
 BPF_FUNC_bprm_opts_set = 159,
 BPF_FUNC_ktime_get_coarse_ns = 160,
 BPF_FUNC_ima_inode_hash = 161,
 BPF_FUNC_sock_from_file = 162,
 BPF_FUNC_check_mtu = 163,
 BPF_FUNC_for_each_map_elem = 164,
 BPF_FUNC_snprintf = 165,
 BPF_FUNC_sys_bpf = 166,
 BPF_FUNC_btf_find_by_name_kind = 167,
 BPF_FUNC_sys_close = 168,
 BPF_FUNC_timer_init = 169,
 BPF_FUNC_timer_set_callback = 170,
 BPF_FUNC_timer_start = 171,
 BPF_FUNC_timer_cancel = 172,
 BPF_FUNC_get_func_ip = 173,
 BPF_FUNC_get_attach_cookie = 174,
 BPF_FUNC_task_pt_regs = 175,
 BPF_FUNC_get_branch_snapshot = 176,
 BPF_FUNC_trace_vprintk = 177,
 BPF_FUNC_skc_to_unix_sock = 178,
 BPF_FUNC_kallsyms_lookup_name = 179,
 BPF_FUNC_find_vma = 180,
 BPF_FUNC_loop = 181,
 BPF_FUNC_strncmp = 182,
 BPF_FUNC_get_func_arg = 183,
 BPF_FUNC_get_func_ret = 184,
 BPF_FUNC_get_func_arg_cnt = 185,
 BPF_FUNC_get_retval = 186,
 BPF_FUNC_set_retval = 187,
 BPF_FUNC_xdp_get_buff_len = 188,
 BPF_FUNC_xdp_load_bytes = 189,
 BPF_FUNC_xdp_store_bytes = 190,
 BPF_FUNC_copy_from_user_task = 191,
 BPF_FUNC_skb_set_tstamp = 192,
 BPF_FUNC_ima_file_hash = 193,
 BPF_FUNC_kptr_xchg = 194,
 BPF_FUNC_map_lookup_percpu_elem = 195,
 BPF_FUNC_skc_to_mptcp_sock = 196,
 BPF_FUNC_dynptr_from_mem = 197,
 BPF_FUNC_ringbuf_reserve_dynptr = 198,
 BPF_FUNC_ringbuf_submit_dynptr = 199,
 BPF_FUNC_ringbuf_discard_dynptr = 200,
 BPF_FUNC_dynptr_read = 201,
 BPF_FUNC_dynptr_write = 202,
 BPF_FUNC_dynptr_data = 203,
 BPF_FUNC_tcp_raw_gen_syncookie_ipv4 = 204,
 BPF_FUNC_tcp_raw_gen_syncookie_ipv6 = 205,
 BPF_FUNC_tcp_raw_check_syncookie_ipv4 = 206,
 BPF_FUNC_tcp_raw_check_syncookie_ipv6 = 207,
 BPF_FUNC_ktime_get_tai_ns = 208,
 BPF_FUNC_user_ringbuf_drain = 209,
 BPF_FUNC_cgrp_storage_get = 210,
 BPF_FUNC_cgrp_storage_delete = 211,
 __BPF_FUNC_MAX_ID,
};





enum {
 BPF_F_RECOMPUTE_CSUM = (1ULL << 0),
 BPF_F_INVALIDATE_HASH = (1ULL << 1),
};




enum {
 BPF_F_HDR_FIELD_MASK = 0xfULL,
};


enum {
 BPF_F_PSEUDO_HDR = (1ULL << 4),
 BPF_F_MARK_MANGLED_0 = (1ULL << 5),
 BPF_F_MARK_ENFORCE = (1ULL << 6),
};


enum {
 BPF_F_INGRESS = (1ULL << 0),
};


enum {
 BPF_F_TUNINFO_IPV6 = (1ULL << 0),
};


enum {
 BPF_F_SKIP_FIELD_MASK = 0xffULL,
 BPF_F_USER_STACK = (1ULL << 8),

 BPF_F_FAST_STACK_CMP = (1ULL << 9),
 BPF_F_REUSE_STACKID = (1ULL << 10),

 BPF_F_USER_BUILD_ID = (1ULL << 11),
};


enum {
 BPF_F_ZERO_CSUM_TX = (1ULL << 1),
 BPF_F_DONT_FRAGMENT = (1ULL << 2),
 BPF_F_SEQ_NUMBER = (1ULL << 3),
 BPF_F_NO_TUNNEL_KEY = (1ULL << 4),
};


enum {
 BPF_F_TUNINFO_FLAGS = (1ULL << 4),
};




enum {
 BPF_F_INDEX_MASK = 0xffffffffULL,
 BPF_F_CURRENT_CPU = BPF_F_INDEX_MASK,

 BPF_F_CTXLEN_MASK = (0xfffffULL << 32),
};


enum {
 BPF_F_CURRENT_NETNS = (-1L),
};


enum {
 BPF_CSUM_LEVEL_QUERY,
 BPF_CSUM_LEVEL_INC,
 BPF_CSUM_LEVEL_DEC,
 BPF_CSUM_LEVEL_RESET,
};


enum {
 BPF_F_ADJ_ROOM_FIXED_GSO = (1ULL << 0),
 BPF_F_ADJ_ROOM_ENCAP_L3_IPV4 = (1ULL << 1),
 BPF_F_ADJ_ROOM_ENCAP_L3_IPV6 = (1ULL << 2),
 BPF_F_ADJ_ROOM_ENCAP_L4_GRE = (1ULL << 3),
 BPF_F_ADJ_ROOM_ENCAP_L4_UDP = (1ULL << 4),
 BPF_F_ADJ_ROOM_NO_CSUM_RESET = (1ULL << 5),
 BPF_F_ADJ_ROOM_ENCAP_L2_ETH = (1ULL << 6),
 BPF_F_ADJ_ROOM_DECAP_L3_IPV4 = (1ULL << 7),
 BPF_F_ADJ_ROOM_DECAP_L3_IPV6 = (1ULL << 8),
};

enum {
 BPF_ADJ_ROOM_ENCAP_L2_MASK = 0xff,
 BPF_ADJ_ROOM_ENCAP_L2_SHIFT = 56,
};






enum {
 BPF_F_SYSCTL_BASE_NAME = (1ULL << 0),
};


enum {
 BPF_LOCAL_STORAGE_GET_F_CREATE = (1ULL << 0),



 BPF_SK_STORAGE_GET_F_CREATE = BPF_LOCAL_STORAGE_GET_F_CREATE,
};


enum {
 BPF_F_GET_BRANCH_RECORDS_SIZE = (1ULL << 0),
};




enum {
 BPF_RB_NO_WAKEUP = (1ULL << 0),
 BPF_RB_FORCE_WAKEUP = (1ULL << 1),
};


enum {
 BPF_RB_AVAIL_DATA = 0,
 BPF_RB_RING_SIZE = 1,
 BPF_RB_CONS_POS = 2,
 BPF_RB_PROD_POS = 3,
};


enum {
 BPF_RINGBUF_BUSY_BIT = (1U << 31),
 BPF_RINGBUF_DISCARD_BIT = (1U << 30),
 BPF_RINGBUF_HDR_SZ = 8,
};


enum {
 BPF_SK_LOOKUP_F_REPLACE = (1ULL << 0),
 BPF_SK_LOOKUP_F_NO_REUSEPORT = (1ULL << 1),
};


enum bpf_adj_room_mode {
 BPF_ADJ_ROOM_NET,
 BPF_ADJ_ROOM_MAC,
};


enum bpf_hdr_start_off {
 BPF_HDR_START_MAC,
 BPF_HDR_START_NET,
};


enum bpf_lwt_encap_mode {
 BPF_LWT_ENCAP_SEG6,
 BPF_LWT_ENCAP_SEG6_INLINE,
 BPF_LWT_ENCAP_IP,
};


enum {
 BPF_F_BPRM_SECUREEXEC = (1ULL << 0),
};


enum {
 BPF_F_BROADCAST = (1ULL << 3),
 BPF_F_EXCLUDE_INGRESS = (1ULL << 4),
};
# 6215 "../include/uapi/linux/bpf.h"
enum {
 BPF_SKB_TSTAMP_UNSPEC = 0,
 BPF_SKB_TSTAMP_DELIVERY_MONO = 1,
 BPF_SKB_CLOCK_REALTIME = 0,
 BPF_SKB_CLOCK_MONOTONIC = 1,
 BPF_SKB_CLOCK_TAI = 2,



};




struct __sk_buff {
 __u32 len;
 __u32 pkt_type;
 __u32 mark;
 __u32 queue_mapping;
 __u32 protocol;
 __u32 vlan_present;
 __u32 vlan_tci;
 __u32 vlan_proto;
 __u32 priority;
 __u32 ingress_ifindex;
 __u32 ifindex;
 __u32 tc_index;
 __u32 cb[5];
 __u32 hash;
 __u32 tc_classid;
 __u32 data;
 __u32 data_end;
 __u32 napi_id;


 __u32 family;
 __u32 remote_ip4;
 __u32 local_ip4;
 __u32 remote_ip6[4];
 __u32 local_ip6[4];
 __u32 remote_port;
 __u32 local_port;


 __u32 data_meta;
 union { struct bpf_flow_keys * flow_keys; __u64 :64; } __attribute__((aligned(8)));
 __u64 tstamp;
 __u32 wire_len;
 __u32 gso_segs;
 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
 __u32 gso_size;
 __u8 tstamp_type;
 __u32 :24;
 __u64 hwtstamp;
};

struct bpf_tunnel_key {
 __u32 tunnel_id;
 union {
  __u32 remote_ipv4;
  __u32 remote_ipv6[4];
 };
 __u8 tunnel_tos;
 __u8 tunnel_ttl;
 union {
  __u16 tunnel_ext;
  __be16 tunnel_flags;
 };
 __u32 tunnel_label;
 union {
  __u32 local_ipv4;
  __u32 local_ipv6[4];
 };
};




struct bpf_xfrm_state {
 __u32 reqid;
 __u32 spi;
 __u16 family;
 __u16 ext;
 union {
  __u32 remote_ipv4;
  __u32 remote_ipv6[4];
 };
};
# 6311 "../include/uapi/linux/bpf.h"
enum bpf_ret_code {
 BPF_OK = 0,

 BPF_DROP = 2,

 BPF_REDIRECT = 7,
# 6325 "../include/uapi/linux/bpf.h"
 BPF_LWT_REROUTE = 128,




 BPF_FLOW_DISSECTOR_CONTINUE = 129,
};

struct bpf_sock {
 __u32 bound_dev_if;
 __u32 family;
 __u32 type;
 __u32 protocol;
 __u32 mark;
 __u32 priority;

 __u32 src_ip4;
 __u32 src_ip6[4];
 __u32 src_port;
 __be16 dst_port;
 __u16 :16;
 __u32 dst_ip4;
 __u32 dst_ip6[4];
 __u32 state;
 __s32 rx_queue_mapping;
};

struct bpf_tcp_sock {
 __u32 snd_cwnd;
 __u32 srtt_us;
 __u32 rtt_min;
 __u32 snd_ssthresh;
 __u32 rcv_nxt;
 __u32 snd_nxt;
 __u32 snd_una;
 __u32 mss_cache;
 __u32 ecn_flags;
 __u32 rate_delivered;
 __u32 rate_interval_us;
 __u32 packets_out;
 __u32 retrans_out;
 __u32 total_retrans;
 __u32 segs_in;


 __u32 data_segs_in;


 __u32 segs_out;


 __u32 data_segs_out;


 __u32 lost_out;
 __u32 sacked_out;
 __u64 bytes_received;



 __u64 bytes_acked;



 __u32 dsack_dups;


 __u32 delivered;
 __u32 delivered_ce;
 __u32 icsk_retransmits;
};

struct bpf_sock_tuple {
 union {
  struct {
   __be32 saddr;
   __be32 daddr;
   __be16 sport;
   __be16 dport;
  } ipv4;
  struct {
   __be32 saddr[4];
   __be32 daddr[4];
   __be16 sport;
   __be16 dport;
  } ipv6;
 };
};







enum tcx_action_base {
 TCX_NEXT = -1,
 TCX_PASS = 0,
 TCX_DROP = 2,
 TCX_REDIRECT = 7,
};

struct bpf_xdp_sock {
 __u32 queue_id;
};
# 6438 "../include/uapi/linux/bpf.h"
enum xdp_action {
 XDP_ABORTED = 0,
 XDP_DROP,
 XDP_PASS,
 XDP_TX,
 XDP_REDIRECT,
};




struct xdp_md {
 __u32 data;
 __u32 data_end;
 __u32 data_meta;

 __u32 ingress_ifindex;
 __u32 rx_queue_index;

 __u32 egress_ifindex;
};






struct bpf_devmap_val {
 __u32 ifindex;
 union {
  int fd;
  __u32 id;
 } bpf_prog;
};






struct bpf_cpumap_val {
 __u32 qsize;
 union {
  int fd;
  __u32 id;
 } bpf_prog;
};

enum sk_action {
 SK_DROP = 0,
 SK_PASS,
};




struct sk_msg_md {
 union { void * data; __u64 :64; } __attribute__((aligned(8)));
 union { void * data_end; __u64 :64; } __attribute__((aligned(8)));

 __u32 family;
 __u32 remote_ip4;
 __u32 local_ip4;
 __u32 remote_ip6[4];
 __u32 local_ip6[4];
 __u32 remote_port;
 __u32 local_port;
 __u32 size;

 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
};

struct sk_reuseport_md {




 union { void * data; __u64 :64; } __attribute__((aligned(8)));

 union { void * data_end; __u64 :64; } __attribute__((aligned(8)));






 __u32 len;




 __u32 eth_protocol;
 __u32 ip_protocol;
 __u32 bind_inany;
 __u32 hash;
# 6545 "../include/uapi/linux/bpf.h"
 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
 union { struct bpf_sock * migrating_sk; __u64 :64; } __attribute__((aligned(8)));
};



struct bpf_prog_info {
 __u32 type;
 __u32 id;
 __u8 tag[8];
 __u32 jited_prog_len;
 __u32 xlated_prog_len;
 __u64 __attribute__((aligned(8))) jited_prog_insns;
 __u64 __attribute__((aligned(8))) xlated_prog_insns;
 __u64 load_time;
 __u32 created_by_uid;
 __u32 nr_map_ids;
 __u64 __attribute__((aligned(8))) map_ids;
 char name[16U];
 __u32 ifindex;
 __u32 gpl_compatible:1;
 __u32 :31;
 __u64 netns_dev;
 __u64 netns_ino;
 __u32 nr_jited_ksyms;
 __u32 nr_jited_func_lens;
 __u64 __attribute__((aligned(8))) jited_ksyms;
 __u64 __attribute__((aligned(8))) jited_func_lens;
 __u32 btf_id;
 __u32 func_info_rec_size;
 __u64 __attribute__((aligned(8))) func_info;
 __u32 nr_func_info;
 __u32 nr_line_info;
 __u64 __attribute__((aligned(8))) line_info;
 __u64 __attribute__((aligned(8))) jited_line_info;
 __u32 nr_jited_line_info;
 __u32 line_info_rec_size;
 __u32 jited_line_info_rec_size;
 __u32 nr_prog_tags;
 __u64 __attribute__((aligned(8))) prog_tags;
 __u64 run_time_ns;
 __u64 run_cnt;
 __u64 recursion_misses;
 __u32 verified_insns;
 __u32 attach_btf_obj_id;
 __u32 attach_btf_id;
} __attribute__((aligned(8)));

struct bpf_map_info {
 __u32 type;
 __u32 id;
 __u32 key_size;
 __u32 value_size;
 __u32 max_entries;
 __u32 map_flags;
 char name[16U];
 __u32 ifindex;
 __u32 btf_vmlinux_value_type_id;
 __u64 netns_dev;
 __u64 netns_ino;
 __u32 btf_id;
 __u32 btf_key_type_id;
 __u32 btf_value_type_id;
 __u32 btf_vmlinux_id;
 __u64 map_extra;
} __attribute__((aligned(8)));

struct bpf_btf_info {
 __u64 __attribute__((aligned(8))) btf;
 __u32 btf_size;
 __u32 id;
 __u64 __attribute__((aligned(8))) name;
 __u32 name_len;
 __u32 kernel_btf;
} __attribute__((aligned(8)));

struct bpf_link_info {
 __u32 type;
 __u32 id;
 __u32 prog_id;
 union {
  struct {
   __u64 __attribute__((aligned(8))) tp_name;
   __u32 tp_name_len;
  } raw_tracepoint;
  struct {
   __u32 attach_type;
   __u32 target_obj_id;
   __u32 target_btf_id;
  } tracing;
  struct {
   __u64 cgroup_id;
   __u32 attach_type;
  } cgroup;
  struct {
   __u64 __attribute__((aligned(8))) target_name;
   __u32 target_name_len;





   union {
    struct {
     __u32 map_id;
    } map;
   };
   union {
    struct {
     __u64 cgroup_id;
     __u32 order;
    } cgroup;
    struct {
     __u32 tid;
     __u32 pid;
    } task;
   };
  } iter;
  struct {
   __u32 netns_ino;
   __u32 attach_type;
  } netns;
  struct {
   __u32 ifindex;
  } xdp;
  struct {
   __u32 map_id;
  } struct_ops;
  struct {
   __u32 pf;
   __u32 hooknum;
   __s32 priority;
   __u32 flags;
  } netfilter;
  struct {
   __u64 __attribute__((aligned(8))) addrs;
   __u32 count;
   __u32 flags;
   __u64 missed;
   __u64 __attribute__((aligned(8))) cookies;
  } kprobe_multi;
  struct {
   __u64 __attribute__((aligned(8))) path;
   __u64 __attribute__((aligned(8))) offsets;
   __u64 __attribute__((aligned(8))) ref_ctr_offsets;
   __u64 __attribute__((aligned(8))) cookies;
   __u32 path_size;
   __u32 count;
   __u32 flags;
   __u32 pid;
  } uprobe_multi;
  struct {
   __u32 type;
   __u32 :32;
   union {
    struct {
     __u64 __attribute__((aligned(8))) file_name;
     __u32 name_len;
     __u32 offset;
     __u64 cookie;
    } uprobe;
    struct {
     __u64 __attribute__((aligned(8))) func_name;
     __u32 name_len;
     __u32 offset;
     __u64 addr;
     __u64 missed;
     __u64 cookie;
    } kprobe;
    struct {
     __u64 __attribute__((aligned(8))) tp_name;
     __u32 name_len;
     __u32 :32;
     __u64 cookie;
    } tracepoint;
    struct {
     __u64 config;
     __u32 type;
     __u32 :32;
     __u64 cookie;
    } event;
   };
  } perf_event;
  struct {
   __u32 ifindex;
   __u32 attach_type;
  } tcx;
  struct {
   __u32 ifindex;
   __u32 attach_type;
  } netkit;
  struct {
   __u32 map_id;
   __u32 attach_type;
  } sockmap;
 };
} __attribute__((aligned(8)));





struct bpf_sock_addr {
 __u32 user_family;
 __u32 user_ip4;


 __u32 user_ip6[4];


 __u32 user_port;


 __u32 family;
 __u32 type;
 __u32 protocol;
 __u32 msg_src_ip4;


 __u32 msg_src_ip6[4];


 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
};







struct bpf_sock_ops {
 __u32 op;
 union {
  __u32 args[4];
  __u32 reply;
  __u32 replylong[4];
 };
 __u32 family;
 __u32 remote_ip4;
 __u32 local_ip4;
 __u32 remote_ip6[4];
 __u32 local_ip6[4];
 __u32 remote_port;
 __u32 local_port;
 __u32 is_fullsock;



 __u32 snd_cwnd;
 __u32 srtt_us;
 __u32 bpf_sock_ops_cb_flags;
 __u32 state;
 __u32 rtt_min;
 __u32 snd_ssthresh;
 __u32 rcv_nxt;
 __u32 snd_nxt;
 __u32 snd_una;
 __u32 mss_cache;
 __u32 ecn_flags;
 __u32 rate_delivered;
 __u32 rate_interval_us;
 __u32 packets_out;
 __u32 retrans_out;
 __u32 total_retrans;
 __u32 segs_in;
 __u32 data_segs_in;
 __u32 segs_out;
 __u32 data_segs_out;
 __u32 lost_out;
 __u32 sacked_out;
 __u32 sk_txhash;
 __u64 bytes_received;
 __u64 bytes_acked;
 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
# 6834 "../include/uapi/linux/bpf.h"
 union { void * skb_data; __u64 :64; } __attribute__((aligned(8)));
 union { void * skb_data_end; __u64 :64; } __attribute__((aligned(8)));
 __u32 skb_len;



 __u32 skb_tcp_flags;
# 6850 "../include/uapi/linux/bpf.h"
 __u64 skb_hwtstamp;
};


enum {
 BPF_SOCK_OPS_RTO_CB_FLAG = (1<<0),
 BPF_SOCK_OPS_RETRANS_CB_FLAG = (1<<1),
 BPF_SOCK_OPS_STATE_CB_FLAG = (1<<2),
 BPF_SOCK_OPS_RTT_CB_FLAG = (1<<3),
# 6877 "../include/uapi/linux/bpf.h"
 BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG = (1<<4),
# 6886 "../include/uapi/linux/bpf.h"
 BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG = (1<<5),
# 6901 "../include/uapi/linux/bpf.h"
 BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG = (1<<6),

 BPF_SOCK_OPS_ALL_CB_FLAGS = 0x7F,
};




enum {
 BPF_SOCK_OPS_VOID,
 BPF_SOCK_OPS_TIMEOUT_INIT,


 BPF_SOCK_OPS_RWND_INIT,



 BPF_SOCK_OPS_TCP_CONNECT_CB,


 BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB,



 BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB,



 BPF_SOCK_OPS_NEEDS_ECN,


 BPF_SOCK_OPS_BASE_RTT,






 BPF_SOCK_OPS_RTO_CB,




 BPF_SOCK_OPS_RETRANS_CB,





 BPF_SOCK_OPS_STATE_CB,



 BPF_SOCK_OPS_TCP_LISTEN_CB,


 BPF_SOCK_OPS_RTT_CB,



 BPF_SOCK_OPS_PARSE_HDR_OPT_CB,
# 6975 "../include/uapi/linux/bpf.h"
 BPF_SOCK_OPS_HDR_OPT_LEN_CB,
# 6992 "../include/uapi/linux/bpf.h"
 BPF_SOCK_OPS_WRITE_HDR_OPT_CB,
# 7018 "../include/uapi/linux/bpf.h"
};






enum {
 BPF_TCP_ESTABLISHED = 1,
 BPF_TCP_SYN_SENT,
 BPF_TCP_SYN_RECV,
 BPF_TCP_FIN_WAIT1,
 BPF_TCP_FIN_WAIT2,
 BPF_TCP_TIME_WAIT,
 BPF_TCP_CLOSE,
 BPF_TCP_CLOSE_WAIT,
 BPF_TCP_LAST_ACK,
 BPF_TCP_LISTEN,
 BPF_TCP_CLOSING,
 BPF_TCP_NEW_SYN_RECV,
 BPF_TCP_BOUND_INACTIVE,

 BPF_TCP_MAX_STATES
};

enum {
 TCP_BPF_IW = 1001,
 TCP_BPF_SNDCWND_CLAMP = 1002,
 TCP_BPF_DELACK_MAX = 1003,
 TCP_BPF_RTO_MIN = 1004,
# 7080 "../include/uapi/linux/bpf.h"
 TCP_BPF_SYN = 1005,
 TCP_BPF_SYN_IP = 1006,
 TCP_BPF_SYN_MAC = 1007,
};

enum {
 BPF_LOAD_HDR_OPT_TCP_SYN = (1ULL << 0),
};




enum {
 BPF_WRITE_HDR_TCP_CURRENT_MSS = 1,






 BPF_WRITE_HDR_TCP_SYNACK_COOKIE = 2,


};

struct bpf_perf_event_value {
 __u64 counter;
 __u64 enabled;
 __u64 running;
};

enum {
 BPF_DEVCG_ACC_MKNOD = (1ULL << 0),
 BPF_DEVCG_ACC_READ = (1ULL << 1),
 BPF_DEVCG_ACC_WRITE = (1ULL << 2),
};

enum {
 BPF_DEVCG_DEV_BLOCK = (1ULL << 0),
 BPF_DEVCG_DEV_CHAR = (1ULL << 1),
};

struct bpf_cgroup_dev_ctx {

 __u32 access_type;
 __u32 major;
 __u32 minor;
};

struct bpf_raw_tracepoint_args {
 __u64 args[0];
};




enum {
 BPF_FIB_LOOKUP_DIRECT = (1U << 0),
 BPF_FIB_LOOKUP_OUTPUT = (1U << 1),
 BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
 BPF_FIB_LOOKUP_TBID = (1U << 3),
 BPF_FIB_LOOKUP_SRC = (1U << 4),
 BPF_FIB_LOOKUP_MARK = (1U << 5),
};

enum {
 BPF_FIB_LKUP_RET_SUCCESS,
 BPF_FIB_LKUP_RET_BLACKHOLE,
 BPF_FIB_LKUP_RET_UNREACHABLE,
 BPF_FIB_LKUP_RET_PROHIBIT,
 BPF_FIB_LKUP_RET_NOT_FWDED,
 BPF_FIB_LKUP_RET_FWD_DISABLED,
 BPF_FIB_LKUP_RET_UNSUPP_LWT,
 BPF_FIB_LKUP_RET_NO_NEIGH,
 BPF_FIB_LKUP_RET_FRAG_NEEDED,
 BPF_FIB_LKUP_RET_NO_SRC_ADDR,
};

struct bpf_fib_lookup {



 __u8 family;


 __u8 l4_protocol;
 __be16 sport;
 __be16 dport;

 union {

  __u16 tot_len;


  __u16 mtu_result;
 } __attribute__((packed, aligned(2)));



 __u32 ifindex;

 union {

  __u8 tos;
  __be32 flowinfo;


  __u32 rt_metric;
 };




 union {
  __be32 ipv4_src;
  __u32 ipv6_src[4];
 };





 union {
  __be32 ipv4_dst;
  __u32 ipv6_dst[4];
 };

 union {
  struct {

   __be16 h_vlan_proto;
   __be16 h_vlan_TCI;
  };




  __u32 tbid;
 };

 union {

  struct {
   __u32 mark;

  };


  struct {
   __u8 smac[6];
   __u8 dmac[6];
  };
 };
};

struct bpf_redir_neigh {

 __u32 nh_family;

 union {
  __be32 ipv4_nh;
  __u32 ipv6_nh[4];
 };
};


enum bpf_check_mtu_flags {
 BPF_MTU_CHK_SEGS = (1U << 0),
};

enum bpf_check_mtu_ret {
 BPF_MTU_CHK_RET_SUCCESS,
 BPF_MTU_CHK_RET_FRAG_NEEDED,
 BPF_MTU_CHK_RET_SEGS_TOOBIG,
};

enum bpf_task_fd_type {
 BPF_FD_TYPE_RAW_TRACEPOINT,
 BPF_FD_TYPE_TRACEPOINT,
 BPF_FD_TYPE_KPROBE,
 BPF_FD_TYPE_KRETPROBE,
 BPF_FD_TYPE_UPROBE,
 BPF_FD_TYPE_URETPROBE,
};

enum {
 BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG = (1U << 0),
 BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL = (1U << 1),
 BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP = (1U << 2),
};

struct bpf_flow_keys {
 __u16 nhoff;
 __u16 thoff;
 __u16 addr_proto;
 __u8 is_frag;
 __u8 is_first_frag;
 __u8 is_encap;
 __u8 ip_proto;
 __be16 n_proto;
 __be16 sport;
 __be16 dport;
 union {
  struct {
   __be32 ipv4_src;
   __be32 ipv4_dst;
  };
  struct {
   __u32 ipv6_src[4];
   __u32 ipv6_dst[4];
  };
 };
 __u32 flags;
 __be32 flow_label;
};

struct bpf_func_info {
 __u32 insn_off;
 __u32 type_id;
};




struct bpf_line_info {
 __u32 insn_off;
 __u32 file_name_off;
 __u32 line_off;
 __u32 line_col;
};

struct bpf_spin_lock {
 __u32 val;
};

struct bpf_timer {
 __u64 __opaque[2];
} __attribute__((aligned(8)));

struct bpf_wq {
 __u64 __opaque[2];
} __attribute__((aligned(8)));

struct bpf_dynptr {
 __u64 __opaque[2];
} __attribute__((aligned(8)));

struct bpf_list_head {
 __u64 __opaque[2];
} __attribute__((aligned(8)));

struct bpf_list_node {
 __u64 __opaque[3];
} __attribute__((aligned(8)));

struct bpf_rb_root {
 __u64 __opaque[2];
} __attribute__((aligned(8)));

struct bpf_rb_node {
 __u64 __opaque[4];
} __attribute__((aligned(8)));

struct bpf_refcount {
 __u32 __opaque[1];
} __attribute__((aligned(4)));

struct bpf_sysctl {
 __u32 write;


 __u32 file_pos;


};

struct bpf_sockopt {
 union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
 union { void * optval; __u64 :64; } __attribute__((aligned(8)));
 union { void * optval_end; __u64 :64; } __attribute__((aligned(8)));

 __s32 level;
 __s32 optname;
 __s32 optlen;
 __s32 retval;
};

struct bpf_pidns_info {
 __u32 pid;
 __u32 tgid;
};


struct bpf_sk_lookup {
 union {
  union { struct bpf_sock * sk; __u64 :64; } __attribute__((aligned(8)));
  __u64 cookie;
 };

 __u32 family;
 __u32 protocol;
 __u32 remote_ip4;
 __u32 remote_ip6[4];
 __be16 remote_port;
 __u16 :16;
 __u32 local_ip4;
 __u32 local_ip6[4];
 __u32 local_port;
 __u32 ingress_ifindex;
};
# 7399 "../include/uapi/linux/bpf.h"
struct btf_ptr {
 void *ptr;
 __u32 type_id;
 __u32 flags;
};
# 7414 "../include/uapi/linux/bpf.h"
enum {
 BTF_F_COMPACT = (1ULL << 0),
 BTF_F_NONAME = (1ULL << 1),
 BTF_F_PTR_RAW = (1ULL << 2),
 BTF_F_ZERO = (1ULL << 3),
};





enum bpf_core_relo_kind {
 BPF_CORE_FIELD_BYTE_OFFSET = 0,
 BPF_CORE_FIELD_BYTE_SIZE = 1,
 BPF_CORE_FIELD_EXISTS = 2,
 BPF_CORE_FIELD_SIGNED = 3,
 BPF_CORE_FIELD_LSHIFT_U64 = 4,
 BPF_CORE_FIELD_RSHIFT_U64 = 5,
 BPF_CORE_TYPE_ID_LOCAL = 6,
 BPF_CORE_TYPE_ID_TARGET = 7,
 BPF_CORE_TYPE_EXISTS = 8,
 BPF_CORE_TYPE_SIZE = 9,
 BPF_CORE_ENUMVAL_EXISTS = 10,
 BPF_CORE_ENUMVAL_VALUE = 11,
 BPF_CORE_TYPE_MATCHES = 12,
};
# 7489 "../include/uapi/linux/bpf.h"
struct bpf_core_relo {
 __u32 insn_off;
 __u32 type_id;
 __u32 access_str_off;
 enum bpf_core_relo_kind kind;
};







enum {
 BPF_F_TIMER_ABS = (1ULL << 0),
 BPF_F_TIMER_CPU_PIN = (1ULL << 1),
};


struct bpf_iter_num {



 __u64 __opaque[1];
} __attribute__((aligned(8)));
# 8 "../include/linux/bpf.h" 2
# 1 "../include/uapi/linux/filter.h" 1
# 24 "../include/uapi/linux/filter.h"
struct sock_filter {
 __u16 code;
 __u8 jt;
 __u8 jf;
 __u32 k;
};

struct sock_fprog {
 unsigned short len;
 struct sock_filter *filter;
};
# 9 "../include/linux/bpf.h" 2
# 21 "../include/linux/bpf.h"
# 1 "../include/linux/kallsyms.h" 1
# 16 "../include/linux/kallsyms.h"
# 1 "./arch/hexagon/include/generated/asm/sections.h" 1
# 17 "../include/linux/kallsyms.h" 2







struct cred;
struct module;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_kernel_text(unsigned long addr)
{
 if (__is_kernel_text(addr))
  return 1;
 return in_gate_area_no_mm(addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_kernel(unsigned long addr)
{
 if (__is_kernel(addr))
  return 1;
 return in_gate_area_no_mm(addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int is_ksym_addr(unsigned long addr)
{
 if (1)
  return is_kernel(addr);

 return is_kernel_text(addr) || is_kernel_inittext(addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *dereference_symbol_descriptor(void *ptr)
{
# 65 "../include/linux/kallsyms.h"
 return ptr;
}


extern bool kallsyms_show_value(const struct cred *cred);


unsigned long kallsyms_sym_address(int idx);
int kallsyms_on_each_symbol(int (*fn)(void *, const char *, unsigned long),
       void *data);
int kallsyms_on_each_match_symbol(int (*fn)(void *, unsigned long),
      const char *name, void *data);


unsigned long kallsyms_lookup_name(const char *name);

extern int kallsyms_lookup_size_offset(unsigned long addr,
      unsigned long *symbolsize,
      unsigned long *offset);


const char *kallsyms_lookup(unsigned long addr,
       unsigned long *symbolsize,
       unsigned long *offset,
       char **modname, char *namebuf);


extern int sprint_symbol(char *buffer, unsigned long address);
extern int sprint_symbol_build_id(char *buffer, unsigned long address);
extern int sprint_symbol_no_offset(char *buffer, unsigned long address);
extern int sprint_backtrace(char *buffer, unsigned long address);
extern int sprint_backtrace_build_id(char *buffer, unsigned long address);

int lookup_symbol_name(unsigned long addr, char *symname);
# 170 "../include/linux/kallsyms.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void print_ip_sym(const char *loglvl, unsigned long ip)
{
 ({ do {} while (0); _printk("%s[<%px>] %pS\n", loglvl, (void *) ip, (void *) ip); });
}
# 22 "../include/linux/bpf.h" 2





# 1 "../include/linux/bpfptr.h" 1








typedef sockptr_t bpfptr_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpfptr_is_kernel(bpfptr_t bpfptr)
{
 return bpfptr.is_kernel;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bpfptr_t KERNEL_BPFPTR(void *p)
{
 return (bpfptr_t) { .kernel = p, .is_kernel = true };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bpfptr_t USER_BPFPTR(void *p)
{
 return (bpfptr_t) { .user = p };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bpfptr_t make_bpfptr(u64 addr, bool is_kernel)
{
 if (is_kernel)
  return KERNEL_BPFPTR((void*) (uintptr_t) addr);
 else
  return USER_BPFPTR(( { ({ u64 __dummy; typeof((addr)) __dummy2; (void)(&__dummy == &__dummy2); 1; }); (void *)(uintptr_t)(addr); } ));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpfptr_is_null(bpfptr_t bpfptr)
{
 if (bpfptr_is_kernel(bpfptr))
  return !bpfptr.kernel;
 return !bpfptr.user;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpfptr_add(bpfptr_t *bpfptr, size_t val)
{
 if (bpfptr_is_kernel(*bpfptr))
  bpfptr->kernel += val;
 else
  bpfptr->user += val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_from_bpfptr_offset(void *dst, bpfptr_t src,
       size_t offset, size_t size)
{
 if (!bpfptr_is_kernel(src))
  return copy_from_user(dst, src.user + offset, size);
 return copy_from_kernel_nofault(dst, src.kernel + offset, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size)
{
 return copy_from_bpfptr_offset(dst, src, 0, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int copy_to_bpfptr_offset(bpfptr_t dst, size_t offset,
     const void *src, size_t size)
{
 return copy_to_sockptr_offset((sockptr_t) dst, offset, src, size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *kvmemdup_bpfptr_noprof(bpfptr_t src, size_t len)
{
 void *p = __kvmalloc_node_noprof((len), ((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))) | (( gfp_t)((((1UL))) << (___GFP_HARDWALL_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT))), (-1));

 if (!p)
  return ERR_PTR(-12);
 if (copy_from_bpfptr(p, src, len)) {
  kvfree(p);
  return ERR_PTR(-14);
 }
 return p;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count)
{
 if (bpfptr_is_kernel(src))
  return strncpy_from_kernel_nofault(dst, src.kernel, count);
 return strncpy_from_user(dst, src.user, count);
}
# 28 "../include/linux/bpf.h" 2
# 1 "../include/linux/btf.h" 1








# 1 "../include/linux/bsearch.h" 1






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__))
void *__inline_bsearch(const void *key, const void *base, size_t num, size_t size, cmp_func_t cmp)
{
 const char *pivot;
 int result;

 while (num > 0) {
  pivot = base + (num >> 1) * size;
  result = cmp(key, pivot);

  if (result == 0)
   return (void *)pivot;

  if (result > 0) {
   base = pivot + size;
   num--;
  }
  num >>= 1;
 }

 return ((void *)0);
}

extern void *bsearch(const void *key, const void *base, size_t num, size_t size, cmp_func_t cmp);
# 10 "../include/linux/btf.h" 2
# 1 "../include/linux/btf_ids.h" 1







struct btf_id_set {
 u32 cnt;
 u32 ids[];
};




struct btf_id_set8 {
 u32 cnt;
 u32 flags;
 struct {
  u32 id;
  u32 flags;
 } pairs[];
};
# 260 "../include/linux/btf_ids.h"
enum {

BTF_SOCK_TYPE_INET,
BTF_SOCK_TYPE_INET_CONN,
BTF_SOCK_TYPE_INET_REQ,
BTF_SOCK_TYPE_INET_TW,
BTF_SOCK_TYPE_REQ,
BTF_SOCK_TYPE_SOCK,
BTF_SOCK_TYPE_SOCK_COMMON,
BTF_SOCK_TYPE_TCP,
BTF_SOCK_TYPE_TCP_REQ,
BTF_SOCK_TYPE_TCP_TW,
BTF_SOCK_TYPE_TCP6,
BTF_SOCK_TYPE_UDP,
BTF_SOCK_TYPE_UDP6,
BTF_SOCK_TYPE_UNIX,
BTF_SOCK_TYPE_MPTCP,
BTF_SOCK_TYPE_SOCKET,

MAX_BTF_SOCK_TYPE,
};

extern u32 btf_sock_ids[];







enum {

BTF_TRACING_TYPE_TASK,
BTF_TRACING_TYPE_FILE,
BTF_TRACING_TYPE_VMA,

MAX_BTF_TRACING_TYPE,
};

extern u32 btf_tracing_ids[];
extern u32 bpf_cgroup_btf_id[];
extern u32 bpf_local_storage_map_btf_id[];
extern u32 btf_bpf_map_id[];
# 11 "../include/linux/btf.h" 2
# 1 "../include/uapi/linux/btf.h" 1
# 11 "../include/uapi/linux/btf.h"
struct btf_header {
 __u16 magic;
 __u8 version;
 __u8 flags;
 __u32 hdr_len;


 __u32 type_off;
 __u32 type_len;
 __u32 str_off;
 __u32 str_len;
};
# 31 "../include/uapi/linux/btf.h"
struct btf_type {
 __u32 name_off;
# 41 "../include/uapi/linux/btf.h"
 __u32 info;







 union {
  __u32 size;
  __u32 type;
 };
};





enum {
 BTF_KIND_UNKN = 0,
 BTF_KIND_INT = 1,
 BTF_KIND_PTR = 2,
 BTF_KIND_ARRAY = 3,
 BTF_KIND_STRUCT = 4,
 BTF_KIND_UNION = 5,
 BTF_KIND_ENUM = 6,
 BTF_KIND_FWD = 7,
 BTF_KIND_TYPEDEF = 8,
 BTF_KIND_VOLATILE = 9,
 BTF_KIND_CONST = 10,
 BTF_KIND_RESTRICT = 11,
 BTF_KIND_FUNC = 12,
 BTF_KIND_FUNC_PROTO = 13,
 BTF_KIND_VAR = 14,
 BTF_KIND_DATASEC = 15,
 BTF_KIND_FLOAT = 16,
 BTF_KIND_DECL_TAG = 17,
 BTF_KIND_TYPE_TAG = 18,
 BTF_KIND_ENUM64 = 19,

 NR_BTF_KINDS,
 BTF_KIND_MAX = NR_BTF_KINDS - 1,
};
# 105 "../include/uapi/linux/btf.h"
struct btf_enum {
 __u32 name_off;
 __s32 val;
};


struct btf_array {
 __u32 type;
 __u32 index_type;
 __u32 nelems;
};






struct btf_member {
 __u32 name_off;
 __u32 type;






 __u32 offset;
};
# 145 "../include/uapi/linux/btf.h"
struct btf_param {
 __u32 name_off;
 __u32 type;
};

enum {
 BTF_VAR_STATIC = 0,
 BTF_VAR_GLOBAL_ALLOCATED = 1,
 BTF_VAR_GLOBAL_EXTERN = 2,
};

enum btf_func_linkage {
 BTF_FUNC_STATIC = 0,
 BTF_FUNC_GLOBAL = 1,
 BTF_FUNC_EXTERN = 2,
};




struct btf_var {
 __u32 linkage;
};





struct btf_var_secinfo {
 __u32 type;
 __u32 offset;
 __u32 size;
};
# 186 "../include/uapi/linux/btf.h"
struct btf_decl_tag {
       __s32 component_idx;
};





struct btf_enum64 {
 __u32 name_off;
 __u32 val_lo32;
 __u32 val_hi32;
};
# 12 "../include/linux/btf.h" 2
# 107 "../include/linux/btf.h"
struct btf;
struct btf_member;
struct btf_type;
union bpf_attr;
struct btf_show;
struct btf_id_set;
struct bpf_prog;

typedef int (*btf_kfunc_filter_t)(const struct bpf_prog *prog, u32 kfunc_id);

struct btf_kfunc_id_set {
 struct module *owner;
 struct btf_id_set8 *set;
 btf_kfunc_filter_t filter;
};

struct btf_id_dtor_kfunc {
 u32 btf_id;
 u32 kfunc_btf_id;
};

struct btf_struct_meta {
 u32 btf_id;
 struct btf_record *record;
};

struct btf_struct_metas {
 u32 cnt;
 struct btf_struct_meta types[];
};

extern const struct file_operations btf_fops;

const char *btf_get_name(const struct btf *btf);
void btf_get(struct btf *btf);
void btf_put(struct btf *btf);
const struct btf_header *btf_header(const struct btf *btf);
int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz);
struct btf *btf_get_by_fd(int fd);
int btf_get_info_by_fd(const struct btf *btf,
         const union bpf_attr *attr,
         union bpf_attr *uattr);
# 170 "../include/linux/btf.h"
const struct btf_type *btf_type_id_size(const struct btf *btf,
     u32 *type_id,
     u32 *ret_size);
# 191 "../include/linux/btf.h"
void btf_type_seq_show(const struct btf *btf, u32 type_id, void *obj,
         struct seq_file *m);
int btf_type_seq_show_flags(const struct btf *btf, u32 type_id, void *obj,
       struct seq_file *m, u64 flags);
# 209 "../include/linux/btf.h"
int btf_type_snprintf_show(const struct btf *btf, u32 type_id, void *obj,
      char *buf, int len, u64 flags);

int btf_get_fd_by_id(u32 id);
u32 btf_obj_id(const struct btf *btf);
bool btf_is_kernel(const struct btf *btf);
bool btf_is_module(const struct btf *btf);
bool btf_is_vmlinux(const struct btf *btf);
struct module *btf_try_get_module(const struct btf *btf);
u32 btf_nr_types(const struct btf *btf);
struct btf *btf_base_btf(const struct btf *btf);
bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
      const struct btf_member *m,
      u32 expected_offset, u32 expected_size);
struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t,
        u32 field_mask, u32 value_size);
int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec);
bool btf_type_is_void(const struct btf_type *t);
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind);
s32 bpf_find_btf_id(const char *name, u32 kind, struct btf **btf_p);
const struct btf_type *btf_type_skip_modifiers(const struct btf *btf,
            u32 id, u32 *res_id);
const struct btf_type *btf_type_resolve_ptr(const struct btf *btf,
         u32 id, u32 *res_id);
const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
       u32 id, u32 *res_id);
const struct btf_type *
btf_resolve_size(const struct btf *btf, const struct btf_type *type,
   u32 *type_size);
const char *btf_type_str(const struct btf_type *t);
# 250 "../include/linux/btf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_ptr(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_PTR;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_int(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_INT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_small_int(const struct btf_type *t)
{
 return btf_type_is_int(t) && t->size <= sizeof(u64);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 btf_int_encoding(const struct btf_type *t)
{
 return (((*(u32 *)(t + 1)) & 0x0f000000) >> 24);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_signed_int(const struct btf_type *t)
{
 return btf_type_is_int(t) && (btf_int_encoding(t) & (1 << 0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_enum(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_ENUM;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_any_enum(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_ENUM ||
        (((t->info) >> 24) & 0x1f) == BTF_KIND_ENUM64;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_kind_core_compat(const struct btf_type *t1,
     const struct btf_type *t2)
{
 return (((t1->info) >> 24) & 0x1f) == (((t2->info) >> 24) & 0x1f) ||
        (btf_is_any_enum(t1) && btf_is_any_enum(t2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool str_is_empty(const char *s)
{
 return !s || !s[0];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 btf_kind(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_enum(const struct btf_type *t)
{
 return btf_kind(t) == BTF_KIND_ENUM;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_enum64(const struct btf_type *t)
{
 return btf_kind(t) == BTF_KIND_ENUM64;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 btf_enum64_value(const struct btf_enum64 *e)
{
 return ((u64)e->val_hi32 << 32) | e->val_lo32;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_composite(const struct btf_type *t)
{
 u16 kind = btf_kind(t);

 return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_array(const struct btf_type *t)
{
 return btf_kind(t) == BTF_KIND_ARRAY;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_int(const struct btf_type *t)
{
 return btf_kind(t) == BTF_KIND_INT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_is_ptr(const struct btf_type *t)
{
 return btf_kind(t) == BTF_KIND_PTR;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 btf_int_offset(const struct btf_type *t)
{
 return (((*(u32 *)(t + 1)) & 0x00ff0000) >> 16);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 btf_int_bits(const struct btf_type *t)
{
 return ((*(__u32 *)(t + 1)) & 0x000000ff);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_scalar(const struct btf_type *t)
{
 return btf_type_is_int(t) || btf_type_is_enum(t);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_typedef(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_TYPEDEF;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_volatile(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_VOLATILE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_func(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_FUNC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_func_proto(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_FUNC_PROTO;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_var(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_VAR;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_type_tag(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_TYPE_TAG;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_struct(const struct btf_type *t)
{
 u8 kind = (((t->info) >> 24) & 0x1f);

 return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __btf_type_is_struct(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_STRUCT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_array(const struct btf_type *t)
{
 return (((t->info) >> 24) & 0x1f) == BTF_KIND_ARRAY;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 btf_type_vlen(const struct btf_type *t)
{
 return ((t->info) & 0xffff);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 btf_vlen(const struct btf_type *t)
{
 return btf_type_vlen(t);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 btf_func_linkage(const struct btf_type *t)
{
 return ((t->info) & 0xffff);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_kflag(const struct btf_type *t)
{
 return ((t->info) >> 31);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __btf_member_bit_offset(const struct btf_type *struct_type,
       const struct btf_member *member)
{
 return btf_type_kflag(struct_type) ? ((member->offset) & 0xffffff)
        : member->offset;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __btf_member_bitfield_size(const struct btf_type *struct_type,
          const struct btf_member *member)
{
 return btf_type_kflag(struct_type) ? ((member->offset) >> 24)
        : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_member *btf_members(const struct btf_type *t)
{
 return (struct btf_member *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 btf_member_bit_offset(const struct btf_type *t, u32 member_idx)
{
 const struct btf_member *m = btf_members(t) + member_idx;

 return __btf_member_bit_offset(t, m);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 btf_member_bitfield_size(const struct btf_type *t, u32 member_idx)
{
 const struct btf_member *m = btf_members(t) + member_idx;

 return __btf_member_bitfield_size(t, m);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct btf_member *btf_type_member(const struct btf_type *t)
{
 return (const struct btf_member *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_array *btf_array(const struct btf_type *t)
{
 return (struct btf_array *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_enum *btf_enum(const struct btf_type *t)
{
 return (struct btf_enum *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_enum64 *btf_enum64(const struct btf_type *t)
{
 return (struct btf_enum64 *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct btf_var_secinfo *btf_type_var_secinfo(
  const struct btf_type *t)
{
 return (const struct btf_var_secinfo *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_param *btf_params(const struct btf_type *t)
{
 return (struct btf_param *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
{
 return (struct btf_decl_tag *)(t + 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int btf_id_cmp_func(const void *a, const void *b)
{
 const int *pa = a, *pb = b;

 return *pa - *pb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_id_set_contains(const struct btf_id_set *set, u32 id)
{
 return bsearch(&id, set->ids, set->cnt, sizeof(u32), btf_id_cmp_func) != ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id)
{
 return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func);
}

bool btf_param_match_suffix(const struct btf *btf,
       const struct btf_param *arg,
       const char *suffix);
int btf_ctx_arg_offset(const struct btf *btf, const struct btf_type *func_proto,
         u32 arg_no);

struct bpf_verifier_log;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id)
{
 return ((void *)0);
}


enum btf_field_iter_kind {
 BTF_FIELD_ITER_IDS,
 BTF_FIELD_ITER_STRS,
};

struct btf_field_desc {

 int t_off_cnt, t_offs[2];

 int m_sz;

 int m_off_cnt, m_offs[1];
};

struct btf_field_iter {
 struct btf_field_desc desc;
 void *p;
 int m_idx;
 int off_idx;
 int vlen;
};


const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **map_ids);
int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
   enum btf_field_iter_kind iter_kind);
__u32 *btf_field_iter_next(struct btf_field_iter *it);

const char *btf_name_by_offset(const struct btf *btf, u32 offset);
const char *btf_str_by_offset(const struct btf *btf, u32 offset);
struct btf *btf_parse_vmlinux(void);
struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
          const struct bpf_prog *prog);
u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
    const struct bpf_prog *prog);
int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
         const struct btf_kfunc_id_set *s);
int register_btf_fmodret_id_set(const struct btf_kfunc_id_set *kset);
s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id);
int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_cnt,
    struct module *owner);
struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf, u32 btf_id);
bool btf_is_projection_of(const char *pname, const char *tname);
bool btf_is_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
      const struct btf_type *t, enum bpf_prog_type prog_type,
      int arg);
int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
bool btf_types_are_same(const struct btf *btf1, u32 id1,
   const struct btf *btf2, u32 id2);
# 659 "../include/linux/btf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type *t)
{
 if (!btf_type_is_ptr(t))
  return false;

 t = btf_type_skip_modifiers(btf, t->type, ((void *)0));

 return btf_type_is_struct(t);
}
# 29 "../include/linux/bpf.h" 2
# 1 "../include/linux/rcupdate_trace.h" 1
# 14 "../include/linux/rcupdate_trace.h"
extern struct lockdep_map rcu_trace_lock_map;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rcu_read_lock_trace_held(void)
{
 return lock_is_held(&rcu_trace_lock_map);
}
# 34 "../include/linux/rcupdate_trace.h"
void rcu_read_unlock_trace_special(struct task_struct *t);
# 48 "../include/linux/rcupdate_trace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_lock_trace(void)
{
 struct task_struct *t = (__current_thread_info->task);

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_300(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_nesting) == sizeof(char) || sizeof(t->trc_reader_nesting) == sizeof(short) || sizeof(t->trc_reader_nesting) == sizeof(int) || sizeof(t->trc_reader_nesting) == sizeof(long)) || sizeof(t->trc_reader_nesting) == sizeof(long long))) __compiletime_assert_300(); } while (0); do { *(volatile typeof(t->trc_reader_nesting) *)&(t->trc_reader_nesting) = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_299(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_nesting) == sizeof(char) || sizeof(t->trc_reader_nesting) == sizeof(short) || sizeof(t->trc_reader_nesting) == sizeof(int) || sizeof(t->trc_reader_nesting) == sizeof(long)) || sizeof(t->trc_reader_nesting) == sizeof(long long))) __compiletime_assert_299(); } while (0); (*(const volatile typeof( _Generic((t->trc_reader_nesting), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (t->trc_reader_nesting))) *)&(t->trc_reader_nesting)); }) + 1); } while (0); } while (0);
 __asm__ __volatile__("": : :"memory");
 if (0 &&
     t->trc_reader_special.b.need_mb)
  __asm__ __volatile__("": : :"memory");
 rcu_lock_acquire(&rcu_trace_lock_map);
}
# 69 "../include/linux/rcupdate_trace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcu_read_unlock_trace(void)
{
 int nesting;
 struct task_struct *t = (__current_thread_info->task);

 rcu_lock_release(&rcu_trace_lock_map);
 nesting = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_301(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_nesting) == sizeof(char) || sizeof(t->trc_reader_nesting) == sizeof(short) || sizeof(t->trc_reader_nesting) == sizeof(int) || sizeof(t->trc_reader_nesting) == sizeof(long)) || sizeof(t->trc_reader_nesting) == sizeof(long long))) __compiletime_assert_301(); } while (0); (*(const volatile typeof( _Generic((t->trc_reader_nesting), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (t->trc_reader_nesting))) *)&(t->trc_reader_nesting)); }) - 1;
 __asm__ __volatile__("": : :"memory");

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_302(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_nesting) == sizeof(char) || sizeof(t->trc_reader_nesting) == sizeof(short) || sizeof(t->trc_reader_nesting) == sizeof(int) || sizeof(t->trc_reader_nesting) == sizeof(long)) || sizeof(t->trc_reader_nesting) == sizeof(long long))) __compiletime_assert_302(); } while (0); do { *(volatile typeof(t->trc_reader_nesting) *)&(t->trc_reader_nesting) = ((-((int)(~0U >> 1)) - 1) + nesting); } while (0); } while (0);
 if (__builtin_expect(!!(!({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_303(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_special.s) == sizeof(char) || sizeof(t->trc_reader_special.s) == sizeof(short) || sizeof(t->trc_reader_special.s) == sizeof(int) || sizeof(t->trc_reader_special.s) == sizeof(long)) || sizeof(t->trc_reader_special.s) == sizeof(long long))) __compiletime_assert_303(); } while (0); (*(const volatile typeof( _Generic((t->trc_reader_special.s), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (t->trc_reader_special.s))) *)&(t->trc_reader_special.s)); })), 1) || nesting) {
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_304(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(t->trc_reader_nesting) == sizeof(char) || sizeof(t->trc_reader_nesting) == sizeof(short) || sizeof(t->trc_reader_nesting) == sizeof(int) || sizeof(t->trc_reader_nesting) == sizeof(long)) || sizeof(t->trc_reader_nesting) == sizeof(long long))) __compiletime_assert_304(); } while (0); do { *(volatile typeof(t->trc_reader_nesting) *)&(t->trc_reader_nesting) = (nesting); } while (0); } while (0);
  return;
 }
 ({ bool __ret_do_once = !!(nesting != 0); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/rcupdate_trace.h", 83, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 rcu_read_unlock_trace_special(t);
}

void call_rcu_tasks_trace(struct callback_head *rhp, rcu_callback_t func);
void synchronize_rcu_tasks_trace(void);
void rcu_barrier_tasks_trace(void);
struct task_struct *get_rcu_tasks_trace_gp_kthread(void);
# 30 "../include/linux/bpf.h" 2
# 1 "../include/linux/static_call.h" 1
# 135 "../include/linux/static_call.h"
# 1 "../include/linux/cpu.h" 1
# 17 "../include/linux/cpu.h"
# 1 "../include/linux/node.h" 1
# 29 "../include/linux/node.h"
struct access_coordinate {
 unsigned int read_bandwidth;
 unsigned int write_bandwidth;
 unsigned int read_latency;
 unsigned int write_latency;
};







enum access_coordinate_class {
 ACCESS_COORDINATE_LOCAL,
 ACCESS_COORDINATE_CPU,
 ACCESS_COORDINATE_MAX
};

enum cache_indexing {
 NODE_CACHE_DIRECT_MAP,
 NODE_CACHE_INDEXED,
 NODE_CACHE_OTHER,
};

enum cache_write_policy {
 NODE_CACHE_WRITE_BACK,
 NODE_CACHE_WRITE_THROUGH,
 NODE_CACHE_WRITE_OTHER,
};
# 69 "../include/linux/node.h"
struct node_cache_attrs {
 enum cache_indexing indexing;
 enum cache_write_policy write_policy;
 u64 size;
 u16 line_size;
 u8 level;
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_add_cache(unsigned int nid,
      struct node_cache_attrs *cache_attrs)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_set_perf_attrs(unsigned int nid,
           struct access_coordinate *coord,
           enum access_coordinate_class access)
{
}


struct node {
 struct device dev;
 struct list_head access_list;




};

struct memory_block;
extern struct node *node_devices[];






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void register_memory_blocks_under_node(int nid, unsigned long start_pfn,
           unsigned long end_pfn,
           enum meminit_context context)
{
}


extern void unregister_node(struct node *node);
# 153 "../include/linux/node.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void node_dev_init(void)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __register_one_node(int nid)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int register_one_node(int nid)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unregister_one_node(int nid)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int register_cpu_under_node(unsigned int cpu, unsigned int nid)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unregister_cpu_under_node(unsigned int cpu, unsigned int nid)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
{
}
# 18 "../include/linux/cpu.h" 2

# 1 "../include/linux/cpuhotplug.h" 1
# 57 "../include/linux/cpuhotplug.h"
enum cpuhp_state {
 CPUHP_INVALID = -1,


 CPUHP_OFFLINE = 0,
 CPUHP_CREATE_THREADS,
 CPUHP_PERF_PREPARE,
 CPUHP_PERF_X86_PREPARE,
 CPUHP_PERF_X86_AMD_UNCORE_PREP,
 CPUHP_PERF_POWER,
 CPUHP_PERF_SUPERH,
 CPUHP_X86_HPET_DEAD,
 CPUHP_X86_MCE_DEAD,
 CPUHP_VIRT_NET_DEAD,
 CPUHP_IBMVNIC_DEAD,
 CPUHP_SLUB_DEAD,
 CPUHP_DEBUG_OBJ_DEAD,
 CPUHP_MM_WRITEBACK_DEAD,
 CPUHP_MM_VMSTAT_DEAD,
 CPUHP_SOFTIRQ_DEAD,
 CPUHP_NET_MVNETA_DEAD,
 CPUHP_CPUIDLE_DEAD,
 CPUHP_ARM64_FPSIMD_DEAD,
 CPUHP_ARM_OMAP_WAKE_DEAD,
 CPUHP_IRQ_POLL_DEAD,
 CPUHP_BLOCK_SOFTIRQ_DEAD,
 CPUHP_BIO_DEAD,
 CPUHP_ACPI_CPUDRV_DEAD,
 CPUHP_S390_PFAULT_DEAD,
 CPUHP_BLK_MQ_DEAD,
 CPUHP_FS_BUFF_DEAD,
 CPUHP_PRINTK_DEAD,
 CPUHP_MM_MEMCQ_DEAD,
 CPUHP_PERCPU_CNT_DEAD,
 CPUHP_RADIX_DEAD,
 CPUHP_PAGE_ALLOC,
 CPUHP_NET_DEV_DEAD,
 CPUHP_PCI_XGENE_DEAD,
 CPUHP_IOMMU_IOVA_DEAD,
 CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
 CPUHP_PADATA_DEAD,
 CPUHP_AP_DTPM_CPU_DEAD,
 CPUHP_RANDOM_PREPARE,
 CPUHP_WORKQUEUE_PREP,
 CPUHP_POWER_NUMA_PREPARE,
 CPUHP_HRTIMERS_PREPARE,
 CPUHP_PROFILE_PREPARE,
 CPUHP_X2APIC_PREPARE,
 CPUHP_SMPCFD_PREPARE,
 CPUHP_RELAY_PREPARE,
 CPUHP_MD_RAID5_PREPARE,
 CPUHP_RCUTREE_PREP,
 CPUHP_CPUIDLE_COUPLED_PREPARE,
 CPUHP_POWERPC_PMAC_PREPARE,
 CPUHP_POWERPC_MMU_CTX_PREPARE,
 CPUHP_XEN_PREPARE,
 CPUHP_XEN_EVTCHN_PREPARE,
 CPUHP_ARM_SHMOBILE_SCU_PREPARE,
 CPUHP_SH_SH3X_PREPARE,
 CPUHP_TOPOLOGY_PREPARE,
 CPUHP_NET_IUCV_PREPARE,
 CPUHP_ARM_BL_PREPARE,
 CPUHP_TRACE_RB_PREPARE,
 CPUHP_MM_ZS_PREPARE,
 CPUHP_MM_ZSWP_POOL_PREPARE,
 CPUHP_KVM_PPC_BOOK3S_PREPARE,
 CPUHP_ZCOMP_PREPARE,
 CPUHP_TIMERS_PREPARE,
 CPUHP_TMIGR_PREPARE,
 CPUHP_MIPS_SOC_PREPARE,
 CPUHP_BP_PREPARE_DYN,
 CPUHP_BP_PREPARE_DYN_END = CPUHP_BP_PREPARE_DYN + 20,
 CPUHP_BP_KICK_AP,
 CPUHP_BRINGUP_CPU,





 CPUHP_AP_IDLE_DEAD,
 CPUHP_AP_OFFLINE,
 CPUHP_AP_CACHECTRL_STARTING,
 CPUHP_AP_SCHED_STARTING,
 CPUHP_AP_RCUTREE_DYING,
 CPUHP_AP_CPU_PM_STARTING,
 CPUHP_AP_IRQ_GIC_STARTING,
 CPUHP_AP_IRQ_HIP04_STARTING,
 CPUHP_AP_IRQ_APPLE_AIC_STARTING,
 CPUHP_AP_IRQ_ARMADA_XP_STARTING,
 CPUHP_AP_IRQ_BCM2836_STARTING,
 CPUHP_AP_IRQ_MIPS_GIC_STARTING,
 CPUHP_AP_IRQ_LOONGARCH_STARTING,
 CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
 CPUHP_AP_IRQ_RISCV_IMSIC_STARTING,
 CPUHP_AP_ARM_MVEBU_COHERENCY,
 CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
 CPUHP_AP_PERF_X86_STARTING,
 CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
 CPUHP_AP_PERF_X86_CSTATE_STARTING,
 CPUHP_AP_PERF_XTENSA_STARTING,
 CPUHP_AP_ARM_VFP_STARTING,
 CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
 CPUHP_AP_PERF_ARM_HW_BREAKPOINT_STARTING,
 CPUHP_AP_PERF_ARM_ACPI_STARTING,
 CPUHP_AP_PERF_ARM_STARTING,
 CPUHP_AP_PERF_RISCV_STARTING,
 CPUHP_AP_ARM_L2X0_STARTING,
 CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
 CPUHP_AP_ARM_ARCH_TIMER_STARTING,
 CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
 CPUHP_AP_ARM_GLOBAL_TIMER_STARTING,
 CPUHP_AP_JCORE_TIMER_STARTING,
 CPUHP_AP_ARM_TWD_STARTING,
 CPUHP_AP_QCOM_TIMER_STARTING,
 CPUHP_AP_TEGRA_TIMER_STARTING,
 CPUHP_AP_ARMADA_TIMER_STARTING,
 CPUHP_AP_MIPS_GIC_TIMER_STARTING,
 CPUHP_AP_ARC_TIMER_STARTING,
 CPUHP_AP_REALTEK_TIMER_STARTING,
 CPUHP_AP_RISCV_TIMER_STARTING,
 CPUHP_AP_CLINT_TIMER_STARTING,
 CPUHP_AP_CSKY_TIMER_STARTING,
 CPUHP_AP_TI_GP_TIMER_STARTING,
 CPUHP_AP_HYPERV_TIMER_STARTING,

 CPUHP_AP_DUMMY_TIMER_STARTING,
 CPUHP_AP_ARM_XEN_STARTING,
 CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
 CPUHP_AP_ARM_CORESIGHT_STARTING,
 CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
 CPUHP_AP_ARM64_ISNDEP_STARTING,
 CPUHP_AP_SMPCFD_DYING,
 CPUHP_AP_HRTIMERS_DYING,
 CPUHP_AP_TICK_DYING,
 CPUHP_AP_X86_TBOOT_DYING,
 CPUHP_AP_ARM_CACHE_B15_RAC_DYING,
 CPUHP_AP_ONLINE,
 CPUHP_TEARDOWN_CPU,


 CPUHP_AP_ONLINE_IDLE,
 CPUHP_AP_HYPERV_ONLINE,
 CPUHP_AP_KVM_ONLINE,
 CPUHP_AP_SCHED_WAIT_EMPTY,
 CPUHP_AP_SMPBOOT_THREADS,
 CPUHP_AP_IRQ_AFFINITY_ONLINE,
 CPUHP_AP_BLK_MQ_ONLINE,
 CPUHP_AP_ARM_MVEBU_SYNC_CLOCKS,
 CPUHP_AP_X86_INTEL_EPB_ONLINE,
 CPUHP_AP_PERF_ONLINE,
 CPUHP_AP_PERF_X86_ONLINE,
 CPUHP_AP_PERF_X86_UNCORE_ONLINE,
 CPUHP_AP_PERF_X86_AMD_UNCORE_ONLINE,
 CPUHP_AP_PERF_X86_AMD_POWER_ONLINE,
 CPUHP_AP_PERF_X86_RAPL_ONLINE,
 CPUHP_AP_PERF_X86_CSTATE_ONLINE,
 CPUHP_AP_PERF_S390_CF_ONLINE,
 CPUHP_AP_PERF_S390_SF_ONLINE,
 CPUHP_AP_PERF_ARM_CCI_ONLINE,
 CPUHP_AP_PERF_ARM_CCN_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_CPA_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_DDRC_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_HHA_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_L3_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_PA_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_SLLC_ONLINE,
 CPUHP_AP_PERF_ARM_HISI_PCIE_PMU_ONLINE,
 CPUHP_AP_PERF_ARM_HNS3_PMU_ONLINE,
 CPUHP_AP_PERF_ARM_L2X0_ONLINE,
 CPUHP_AP_PERF_ARM_QCOM_L2_ONLINE,
 CPUHP_AP_PERF_ARM_QCOM_L3_ONLINE,
 CPUHP_AP_PERF_ARM_APM_XGENE_ONLINE,
 CPUHP_AP_PERF_ARM_CAVIUM_TX2_UNCORE_ONLINE,
 CPUHP_AP_PERF_ARM_MARVELL_CN10K_DDR_ONLINE,
 CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_CORE_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_TRACE_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_HV_24x7_ONLINE,
 CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE,
 CPUHP_AP_PERF_CSKY_ONLINE,
 CPUHP_AP_TMIGR_ONLINE,
 CPUHP_AP_WATCHDOG_ONLINE,
 CPUHP_AP_WORKQUEUE_ONLINE,
 CPUHP_AP_RANDOM_ONLINE,
 CPUHP_AP_RCUTREE_ONLINE,
 CPUHP_AP_BASE_CACHEINFO_ONLINE,
 CPUHP_AP_ONLINE_DYN,
 CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 40,
 CPUHP_AP_X86_HPET_ONLINE,
 CPUHP_AP_X86_KVM_CLK_ONLINE,
 CPUHP_AP_ACTIVE,
 CPUHP_ONLINE,
};

int __cpuhp_setup_state(enum cpuhp_state state, const char *name, bool invoke,
   int (*startup)(unsigned int cpu),
   int (*teardown)(unsigned int cpu), bool multi_instance);

int __cpuhp_setup_state_cpuslocked(enum cpuhp_state state, const char *name,
       bool invoke,
       int (*startup)(unsigned int cpu),
       int (*teardown)(unsigned int cpu),
       bool multi_instance);
# 272 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_setup_state(enum cpuhp_state state,
        const char *name,
        int (*startup)(unsigned int cpu),
        int (*teardown)(unsigned int cpu))
{
 return __cpuhp_setup_state(state, name, true, startup, teardown, false);
}
# 292 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_setup_state_cpuslocked(enum cpuhp_state state,
            const char *name,
            int (*startup)(unsigned int cpu),
            int (*teardown)(unsigned int cpu))
{
 return __cpuhp_setup_state_cpuslocked(state, name, true, startup,
           teardown, false);
}
# 312 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_setup_state_nocalls(enum cpuhp_state state,
         const char *name,
         int (*startup)(unsigned int cpu),
         int (*teardown)(unsigned int cpu))
{
 return __cpuhp_setup_state(state, name, false, startup, teardown,
       false);
}
# 334 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_setup_state_nocalls_cpuslocked(enum cpuhp_state state,
           const char *name,
           int (*startup)(unsigned int cpu),
           int (*teardown)(unsigned int cpu))
{
 return __cpuhp_setup_state_cpuslocked(state, name, false, startup,
         teardown, false);
}
# 355 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_setup_state_multi(enum cpuhp_state state,
       const char *name,
       int (*startup)(unsigned int cpu,
        struct hlist_node *node),
       int (*teardown)(unsigned int cpu,
         struct hlist_node *node))
{
 return __cpuhp_setup_state(state, name, false,
       (void *) startup,
       (void *) teardown, true);
}

int __cpuhp_state_add_instance(enum cpuhp_state state, struct hlist_node *node,
          bool invoke);
int __cpuhp_state_add_instance_cpuslocked(enum cpuhp_state state,
       struct hlist_node *node, bool invoke);
# 383 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_state_add_instance(enum cpuhp_state state,
        struct hlist_node *node)
{
 return __cpuhp_state_add_instance(state, node, true);
}
# 399 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_state_add_instance_nocalls(enum cpuhp_state state,
         struct hlist_node *node)
{
 return __cpuhp_state_add_instance(state, node, false);
}
# 416 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
cpuhp_state_add_instance_nocalls_cpuslocked(enum cpuhp_state state,
         struct hlist_node *node)
{
 return __cpuhp_state_add_instance_cpuslocked(state, node, false);
}

void __cpuhp_remove_state(enum cpuhp_state state, bool invoke);
void __cpuhp_remove_state_cpuslocked(enum cpuhp_state state, bool invoke);
# 433 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_remove_state(enum cpuhp_state state)
{
 __cpuhp_remove_state(state, true);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_remove_state_nocalls(enum cpuhp_state state)
{
 __cpuhp_remove_state(state, false);
}
# 456 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_remove_state_nocalls_cpuslocked(enum cpuhp_state state)
{
 __cpuhp_remove_state_cpuslocked(state, false);
}
# 469 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_remove_multi_state(enum cpuhp_state state)
{
 __cpuhp_remove_state(state, false);
}

int __cpuhp_state_remove_instance(enum cpuhp_state state,
      struct hlist_node *node, bool invoke);
# 486 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_state_remove_instance(enum cpuhp_state state,
           struct hlist_node *node)
{
 return __cpuhp_state_remove_instance(state, node, true);
}
# 500 "../include/linux/cpuhotplug.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_state_remove_instance_nocalls(enum cpuhp_state state,
            struct hlist_node *node)
{
 return __cpuhp_state_remove_instance(state, node, false);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_online_idle(enum cpuhp_state state) { }


struct task_struct;

void cpuhp_ap_sync_alive(void);
void arch_cpuhp_sync_state_poll(void);
void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
bool arch_cpuhp_init_parallel_bringup(void);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_ap_report_dead(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
# 20 "../include/linux/cpu.h" 2
# 1 "../include/linux/cpuhplock.h" 1
# 13 "../include/linux/cpuhplock.h"
struct device;

extern int lockdep_is_cpus_held(void);
# 34 "../include/linux/cpuhplock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpus_write_lock(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpus_write_unlock(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpus_read_lock(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpus_read_unlock(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpus_read_trylock(void) { return true; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lockdep_assert_cpus_held(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_hotplug_disable_offlining(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_hotplug_disable(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_hotplug_enable(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int remove_cpu(unsigned int cpu) { return -1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void smp_shutdown_nonboot_cpus(unsigned int primary_cpu) { }


typedef struct { void *lock; ; } class_cpus_read_lock_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_cpus_read_lock_destructor(class_cpus_read_lock_t *_T) { if (_T->lock) { cpus_read_unlock(); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_cpus_read_lock_lock_ptr(class_cpus_read_lock_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_cpus_read_lock_t class_cpus_read_lock_constructor(void) { class_cpus_read_lock_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; cpus_read_lock(); return _t; }
# 21 "../include/linux/cpu.h" 2
# 1 "../include/linux/cpu_smt.h" 1




enum cpuhp_smt_control {
 CPU_SMT_ENABLED,
 CPU_SMT_DISABLED,
 CPU_SMT_FORCE_DISABLED,
 CPU_SMT_NOT_SUPPORTED,
 CPU_SMT_NOT_IMPLEMENTED,
};
# 25 "../include/linux/cpu_smt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_smt_disable(bool force) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_smt_set_num_threads(unsigned int num_threads,
        unsigned int max_threads) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cpu_smt_possible(void) { return false; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_smt_enable(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) { return 0; }
# 22 "../include/linux/cpu.h" 2

struct device;
struct device_node;
struct attribute_group;

struct cpu {
 int node_id;
 int hotpluggable;
 struct device dev;
};

extern void boot_cpu_init(void);
extern void boot_cpu_hotplug_init(void);
extern void cpu_init(void);
extern void trap_init(void);

extern int register_cpu(struct cpu *cpu, int num);
extern struct device *get_cpu_device(unsigned cpu);
extern bool cpu_is_hotpluggable(unsigned cpu);
extern bool arch_match_cpu_phys_id(int cpu, u64 phys_id);
extern bool arch_find_n_match_cpu_physical_id(struct device_node *cpun,
           int cpu, unsigned int *thread);

extern int cpu_add_dev_attr(struct device_attribute *attr);
extern void cpu_remove_dev_attr(struct device_attribute *attr);

extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);

extern ssize_t cpu_show_meltdown(struct device *dev,
     struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v1(struct device *dev,
       struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v2(struct device *dev,
       struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
       struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_l1tf(struct device *dev,
        struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_mds(struct device *dev,
       struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
     struct device_attribute *attr,
     char *buf);
extern ssize_t cpu_show_itlb_multihit(struct device *dev,
          struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_mmio_stale_data(struct device *dev,
     struct device_attribute *attr,
     char *buf);
extern ssize_t cpu_show_retbleed(struct device *dev,
     struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
          struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_gds(struct device *dev,
       struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
            struct device_attribute *attr, char *buf);

extern __attribute__((__format__(printf, 4, 5)))
struct device *cpu_device_create(struct device *parent, void *drvdata,
     const struct attribute_group **groups,
     const char *fmt, ...);
extern bool arch_cpu_is_hotpluggable(int cpu);
extern int arch_register_cpu(int cpu);
extern void arch_unregister_cpu(int cpu);







extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_cpu_devices; extern __attribute__((section(".data" ""))) __typeof__(struct cpu) cpu_devices;
# 122 "../include/linux/cpu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_maps_update_begin(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpu_maps_update_done(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int add_cpu(unsigned int cpu) { return 0;}


extern const struct bus_type cpu_subsys;
# 154 "../include/linux/cpu.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void thaw_secondary_cpus(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int suspend_disable_secondary_cpus(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void suspend_enable_secondary_cpus(void) { }


void __attribute__((__noreturn__)) cpu_startup_entry(enum cpuhp_state state);

void cpu_idle_poll_ctrl(bool enable);

bool cpu_in_idle(unsigned long pc);

void arch_cpu_idle(void);
void arch_cpu_idle_prepare(void);
void arch_cpu_idle_enter(void);
void arch_cpu_idle_exit(void);
void arch_tick_broadcast_enter(void);
void arch_tick_broadcast_exit(void);
void __attribute__((__noreturn__)) arch_cpu_idle_dead(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void arch_cpu_finalize_init(void) { }


void play_idle_precise(u64 duration_ns, u64 latency_ns);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void play_idle(unsigned long duration_us)
{
 play_idle_precise(duration_us * 1000L, ((u64)~0ULL));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cpuhp_report_idle_dead(void) { }



extern bool cpu_mitigations_off(void);
extern bool cpu_mitigations_auto_nosmt(void);
# 136 "../include/linux/static_call.h" 2
# 284 "../include/linux/static_call.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int static_call_init(void) { return 0; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long __static_call_return0(void)
{
 return 0;
}
# 306 "../include/linux/static_call.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __static_call_nop(void) { }
# 330 "../include/linux/static_call.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void __static_call_update(struct static_call_key *key, void *tramp, void *func)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_305(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(key->func) == sizeof(char) || sizeof(key->func) == sizeof(short) || sizeof(key->func) == sizeof(int) || sizeof(key->func) == sizeof(long)) || sizeof(key->func) == sizeof(long long))) __compiletime_assert_305(); } while (0); do { *(volatile typeof(key->func) *)&(key->func) = (func); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int static_call_text_reserved(void *start, void *end)
{
 return 0;
}
# 31 "../include/linux/bpf.h" 2
# 1 "../include/linux/memcontrol.h" 1
# 13 "../include/linux/memcontrol.h"
# 1 "../include/linux/cgroup.h" 1
# 15 "../include/linux/cgroup.h"
# 1 "../include/uapi/linux/cgroupstats.h" 1
# 20 "../include/uapi/linux/cgroupstats.h"
# 1 "../include/uapi/linux/taskstats.h" 1
# 41 "../include/uapi/linux/taskstats.h"
struct taskstats {





 __u16 version;
 __u32 ac_exitcode;





 __u8 ac_flag;
 __u8 ac_nice;
# 73 "../include/uapi/linux/taskstats.h"
 __u64 cpu_count __attribute__((aligned(8)));
 __u64 cpu_delay_total;






 __u64 blkio_count;
 __u64 blkio_delay_total;


 __u64 swapin_count;
 __u64 swapin_delay_total;







 __u64 cpu_run_real_total;







 __u64 cpu_run_virtual_total;




 char ac_comm[32];
 __u8 ac_sched __attribute__((aligned(8)));

 __u8 ac_pad[3];
 __u32 ac_uid __attribute__((aligned(8)));

 __u32 ac_gid;
 __u32 ac_pid;
 __u32 ac_ppid;

 __u32 ac_btime;
 __u64 ac_etime __attribute__((aligned(8)));

 __u64 ac_utime;
 __u64 ac_stime;
 __u64 ac_minflt;
 __u64 ac_majflt;
# 133 "../include/uapi/linux/taskstats.h"
 __u64 coremem;



 __u64 virtmem;




 __u64 hiwater_rss;
 __u64 hiwater_vm;


 __u64 read_char;
 __u64 write_char;
 __u64 read_syscalls;
 __u64 write_syscalls;




 __u64 read_bytes;
 __u64 write_bytes;
 __u64 cancelled_write_bytes;

 __u64 nvcsw;
 __u64 nivcsw;


 __u64 ac_utimescaled;
 __u64 ac_stimescaled;
 __u64 cpu_scaled_run_real_total;


 __u64 freepages_count;
 __u64 freepages_delay_total;


 __u64 thrashing_count;
 __u64 thrashing_delay_total;


 __u64 ac_btime64;


 __u64 compact_count;
 __u64 compact_delay_total;


 __u32 ac_tgid;



 __u64 ac_tgetime __attribute__((aligned(8)));







 __u64 ac_exe_dev;
 __u64 ac_exe_inode;



 __u64 wpcopy_count;
 __u64 wpcopy_delay_total;


 __u64 irq_count;
 __u64 irq_delay_total;
};
# 214 "../include/uapi/linux/taskstats.h"
enum {
 TASKSTATS_CMD_UNSPEC = 0,
 TASKSTATS_CMD_GET,
 TASKSTATS_CMD_NEW,
 __TASKSTATS_CMD_MAX,
};



enum {
 TASKSTATS_TYPE_UNSPEC = 0,
 TASKSTATS_TYPE_PID,
 TASKSTATS_TYPE_TGID,
 TASKSTATS_TYPE_STATS,
 TASKSTATS_TYPE_AGGR_PID,
 TASKSTATS_TYPE_AGGR_TGID,
 TASKSTATS_TYPE_NULL,
 __TASKSTATS_TYPE_MAX,
};



enum {
 TASKSTATS_CMD_ATTR_UNSPEC = 0,
 TASKSTATS_CMD_ATTR_PID,
 TASKSTATS_CMD_ATTR_TGID,
 TASKSTATS_CMD_ATTR_REGISTER_CPUMASK,
 TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK,
 __TASKSTATS_CMD_ATTR_MAX,
};
# 21 "../include/uapi/linux/cgroupstats.h" 2
# 30 "../include/uapi/linux/cgroupstats.h"
struct cgroupstats {
 __u64 nr_sleeping;
 __u64 nr_running;
 __u64 nr_stopped;
 __u64 nr_uninterruptible;

 __u64 nr_io_wait;
};







enum {
 CGROUPSTATS_CMD_UNSPEC = __TASKSTATS_CMD_MAX,
 CGROUPSTATS_CMD_GET,
 CGROUPSTATS_CMD_NEW,
 __CGROUPSTATS_CMD_MAX,
};



enum {
 CGROUPSTATS_TYPE_UNSPEC = 0,
 CGROUPSTATS_TYPE_CGROUP_STATS,
 __CGROUPSTATS_TYPE_MAX,
};



enum {
 CGROUPSTATS_CMD_ATTR_UNSPEC = 0,
 CGROUPSTATS_CMD_ATTR_FD,
 __CGROUPSTATS_CMD_ATTR_MAX,
};
# 16 "../include/linux/cgroup.h" 2

# 1 "../include/linux/seq_file.h" 1






# 1 "../include/linux/string_helpers.h" 1





# 1 "../include/linux/ctype.h" 1
# 21 "../include/linux/ctype.h"
extern const unsigned char _ctype[];
# 43 "../include/linux/ctype.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int isdigit(int c)
{
 return '0' <= c && c <= '9';
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char __tolower(unsigned char c)
{
 if ((((_ctype[(int)(unsigned char)(c)])&(0x01)) != 0))
  c -= 'A'-'a';
 return c;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned char __toupper(unsigned char c)
{
 if ((((_ctype[(int)(unsigned char)(c)])&(0x02)) != 0))
  c -= 'a'-'A';
 return c;
}
# 70 "../include/linux/ctype.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) char _tolower(const char c)
{
 return c | 0x20;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int isodigit(const char c)
{
 return c >= '0' && c <= '7';
}
# 7 "../include/linux/string_helpers.h" 2
# 1 "../include/linux/string_choices.h" 1






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_enable_disable(bool v)
{
 return v ? "enable" : "disable";
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_enabled_disabled(bool v)
{
 return v ? "enabled" : "disabled";
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_hi_lo(bool v)
{
 return v ? "hi" : "lo";
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_high_low(bool v)
{
 return v ? "high" : "low";
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_read_write(bool v)
{
 return v ? "read" : "write";
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_on_off(bool v)
{
 return v ? "on" : "off";
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_yes_no(bool v)
{
 return v ? "yes" : "no";
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *str_plural(size_t num)
{
 return num == 1 ? "" : "s";
}
# 8 "../include/linux/string_helpers.h" 2



struct device;
struct file;
struct task_struct;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool string_is_terminated(const char *s, int len)
{
 return memchr(s, '\0', len) ? true : false;
}


enum string_size_units {
 STRING_UNITS_10,
 STRING_UNITS_2,
 STRING_UNITS_MASK = ((((1UL))) << (0)),


 STRING_UNITS_NO_SPACE = ((((1UL))) << (30)),
 STRING_UNITS_NO_BYTES = ((((1UL))) << (31)),
};

int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
      char *buf, int len);

int parse_int_array_user(const char *from, size_t count, int **array);
# 45 "../include/linux/string_helpers.h"
int string_unescape(char *src, char *dst, size_t size, unsigned int flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_unescape_inplace(char *buf, unsigned int flags)
{
 return string_unescape(buf, buf, 0, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_unescape_any(char *src, char *dst, size_t size)
{
 return string_unescape(src, dst, size, (((((1UL))) << (0)) | ((((1UL))) << (1)) | ((((1UL))) << (2)) | ((((1UL))) << (3))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_unescape_any_inplace(char *buf)
{
 return string_unescape_any(buf, buf, 0);
}
# 77 "../include/linux/string_helpers.h"
int string_escape_mem(const char *src, size_t isz, char *dst, size_t osz,
  unsigned int flags, const char *only);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_escape_mem_any_np(const char *src, size_t isz,
  char *dst, size_t osz, const char *only)
{
 return string_escape_mem(src, isz, dst, osz, ((((((1UL))) << (0)) | ((((1UL))) << (3)) | ((((1UL))) << (1)) | ((((1UL))) << (2))) | ((((1UL))) << (4))), only);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_escape_str(const char *src, char *dst, size_t sz,
  unsigned int flags, const char *only)
{
 return string_escape_mem(src, strlen(src), dst, sz, flags, only);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int string_escape_str_any_np(const char *src, char *dst,
  size_t sz, const char *only)
{
 return string_escape_str(src, dst, sz, ((((((1UL))) << (0)) | ((((1UL))) << (3)) | ((((1UL))) << (1)) | ((((1UL))) << (2))) | ((((1UL))) << (4))), only);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void string_upper(char *dst, const char *src)
{
 do {
  *dst++ = __toupper(*src);
 } while (*src++);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void string_lower(char *dst, const char *src)
{
 do {
  *dst++ = __tolower(*src);
 } while (*src++);
}

char *kstrdup_quotable(const char *src, gfp_t gfp);
char *kstrdup_quotable_cmdline(struct task_struct *task, gfp_t gfp);
char *kstrdup_quotable_file(struct file *file, gfp_t gfp);

char *kstrdup_and_replace(const char *src, char old, char new, gfp_t gfp);

char **kasprintf_strarray(gfp_t gfp, const char *prefix, size_t n);
void kfree_strarray(char **array, size_t n);

char **devm_kasprintf_strarray(struct device *dev, const char *prefix, size_t n);
# 8 "../include/linux/seq_file.h" 2






struct seq_operations;

struct seq_file {
 char *buf;
 size_t size;
 size_t from;
 size_t count;
 size_t pad_until;
 loff_t index;
 loff_t read_pos;
 struct mutex lock;
 const struct seq_operations *op;
 int poll_event;
 const struct file *file;
 void *private;
};

struct seq_operations {
 void * (*start) (struct seq_file *m, loff_t *pos);
 void (*stop) (struct seq_file *m, void *v);
 void * (*next) (struct seq_file *m, void *v, loff_t *pos);
 int (*show) (struct seq_file *m, void *v);
};
# 50 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool seq_has_overflowed(struct seq_file *m)
{
 return m->count == m->size;
}
# 63 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t seq_get_buf(struct seq_file *m, char **bufp)
{
 do { if (__builtin_expect(!!(m->count > m->size), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/seq_file.h", 65, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
 if (m->count < m->size)
  *bufp = m->buf + m->count;
 else
  *bufp = ((void *)0);

 return m->size - m->count;
}
# 83 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seq_commit(struct seq_file *m, int num)
{
 if (num < 0) {
  m->count = m->size;
 } else {
  do { if (__builtin_expect(!!(m->count + num > m->size), 0)) do { ({ do {} while (0); _printk("BUG: failure at %s:%d/%s()!\n", "include/linux/seq_file.h", 88, __func__); }); do { } while (0); panic("BUG!"); } while (0); } while (0);
  m->count += num;
 }
}
# 101 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seq_setwidth(struct seq_file *m, size_t size)
{
 m->pad_until = m->count + size;
}
void seq_pad(struct seq_file *m, char c);

char *mangle_path(char *s, const char *p, const char *esc);
int seq_open(struct file *, const struct seq_operations *);
ssize_t seq_read(struct file *, char *, size_t, loff_t *);
ssize_t seq_read_iter(struct kiocb *iocb, struct iov_iter *iter);
loff_t seq_lseek(struct file *, loff_t, int);
int seq_release(struct inode *, struct file *);
int seq_write(struct seq_file *seq, const void *data, size_t len);

__attribute__((__format__(printf, 2, 0)))
void seq_vprintf(struct seq_file *m, const char *fmt, va_list args);
__attribute__((__format__(printf, 2, 3)))
void seq_printf(struct seq_file *m, const char *fmt, ...);
void seq_putc(struct seq_file *m, char c);
void __seq_puts(struct seq_file *m, const char *s);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void seq_puts(struct seq_file *m, const char *s)
{
 if (!__builtin_constant_p(*s))
  __seq_puts(m, s);
 else if (s[0] && !s[1])
  seq_putc(m, s[0]);
 else
  seq_write(m, s, __builtin_strlen(s));
}

void seq_put_decimal_ull_width(struct seq_file *m, const char *delimiter,
          unsigned long long num, unsigned int width);
void seq_put_decimal_ull(struct seq_file *m, const char *delimiter,
    unsigned long long num);
void seq_put_decimal_ll(struct seq_file *m, const char *delimiter, long long num);
void seq_put_hex_ll(struct seq_file *m, const char *delimiter,
      unsigned long long v, unsigned int width);

void seq_escape_mem(struct seq_file *m, const char *src, size_t len,
      unsigned int flags, const char *esc);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seq_escape_str(struct seq_file *m, const char *src,
      unsigned int flags, const char *esc)
{
 seq_escape_mem(m, src, strlen(src), flags, esc);
}
# 160 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seq_escape(struct seq_file *m, const char *s, const char *esc)
{
 seq_escape_str(m, s, ((((1UL))) << (3)), esc);
}

void seq_hex_dump(struct seq_file *m, const char *prefix_str, int prefix_type,
    int rowsize, int groupsize, const void *buf, size_t len,
    bool ascii);

int seq_path(struct seq_file *, const struct path *, const char *);
int seq_file_path(struct seq_file *, struct file *, const char *);
int seq_dentry(struct seq_file *, struct dentry *, const char *);
int seq_path_root(struct seq_file *m, const struct path *path,
    const struct path *root, const char *esc);

void *single_start(struct seq_file *, loff_t *);
int single_open(struct file *, int (*)(struct seq_file *, void *), void *);
int single_open_size(struct file *, int (*)(struct seq_file *, void *), void *, size_t);
int single_release(struct inode *, struct file *);
void *__seq_open_private(struct file *, const struct seq_operations *, int);
int seq_open_private(struct file *, const struct seq_operations *, int);
int seq_release_private(struct inode *, struct file *);


void seq_bprintf(struct seq_file *m, const char *f, const u32 *binary);
# 248 "../include/linux/seq_file.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_namespace *seq_user_ns(struct seq_file *seq)
{



 extern struct user_namespace init_user_ns;
 return &init_user_ns;

}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void seq_show_option(struct seq_file *m, const char *name,
       const char *value)
{
 seq_putc(m, ',');
 seq_escape(m, name, ",= \t\n\\");
 if (value) {
  seq_putc(m, '=');
  seq_escape(m, value, ", \t\n\\");
 }
}
# 299 "../include/linux/seq_file.h"
extern struct list_head *seq_list_start(struct list_head *head,
  loff_t pos);
extern struct list_head *seq_list_start_head(struct list_head *head,
  loff_t pos);
extern struct list_head *seq_list_next(void *v, struct list_head *head,
  loff_t *ppos);

extern struct list_head *seq_list_start_rcu(struct list_head *head, loff_t pos);
extern struct list_head *seq_list_start_head_rcu(struct list_head *head, loff_t pos);
extern struct list_head *seq_list_next_rcu(void *v, struct list_head *head, loff_t *ppos);





extern struct hlist_node *seq_hlist_start(struct hlist_head *head,
       loff_t pos);
extern struct hlist_node *seq_hlist_start_head(struct hlist_head *head,
            loff_t pos);
extern struct hlist_node *seq_hlist_next(void *v, struct hlist_head *head,
      loff_t *ppos);

extern struct hlist_node *seq_hlist_start_rcu(struct hlist_head *head,
           loff_t pos);
extern struct hlist_node *seq_hlist_start_head_rcu(struct hlist_head *head,
         loff_t pos);
extern struct hlist_node *seq_hlist_next_rcu(void *v,
         struct hlist_head *head,
         loff_t *ppos);


extern struct hlist_node *seq_hlist_start_percpu(struct hlist_head *head, int *cpu, loff_t pos);

extern struct hlist_node *seq_hlist_next_percpu(void *v, struct hlist_head *head, int *cpu, loff_t *pos);

void seq_file_init(void);
# 18 "../include/linux/cgroup.h" 2



# 1 "../include/linux/ns_common.h" 1






struct proc_ns_operations;

struct ns_common {
 struct dentry *stashed;
 const struct proc_ns_operations *ops;
 unsigned int inum;
 refcount_t count;
};
# 22 "../include/linux/cgroup.h" 2
# 1 "../include/linux/nsproxy.h" 1








struct mnt_namespace;
struct uts_namespace;
struct ipc_namespace;
struct pid_namespace;
struct cgroup_namespace;
struct fs_struct;
# 32 "../include/linux/nsproxy.h"
struct nsproxy {
 refcount_t count;
 struct uts_namespace *uts_ns;
 struct ipc_namespace *ipc_ns;
 struct mnt_namespace *mnt_ns;
 struct pid_namespace *pid_ns_for_children;
 struct net *net_ns;
 struct time_namespace *time_ns;
 struct time_namespace *time_ns_for_children;
 struct cgroup_namespace *cgroup_ns;
};
extern struct nsproxy init_nsproxy;
# 65 "../include/linux/nsproxy.h"
struct nsset {
 unsigned flags;
 struct nsproxy *nsproxy;
 struct fs_struct *fs;
 const struct cred *cred;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cred *nsset_cred(struct nsset *set)
{
 if (set->flags & 0x10000000)
  return (struct cred *)set->cred;

 return ((void *)0);
}
# 106 "../include/linux/nsproxy.h"
int copy_namespaces(unsigned long flags, struct task_struct *tsk);
void exit_task_namespaces(struct task_struct *tsk);
void switch_task_namespaces(struct task_struct *tsk, struct nsproxy *new);
int exec_task_namespaces(void);
void free_nsproxy(struct nsproxy *ns);
int unshare_nsproxy_namespaces(unsigned long, struct nsproxy **,
 struct cred *, struct fs_struct *);
int __attribute__((__section__(".init.text"))) __attribute__((__cold__)) nsproxy_cache_init(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_nsproxy(struct nsproxy *ns)
{
 if (refcount_dec_and_test(&ns->count))
  free_nsproxy(ns);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void get_nsproxy(struct nsproxy *ns)
{
 refcount_inc(&ns->count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_put_nsproxy(void *p) { struct nsproxy * _T = *(struct nsproxy * *)p; if (_T) put_nsproxy(_T); }
# 23 "../include/linux/cgroup.h" 2
# 1 "../include/linux/user_namespace.h" 1
# 17 "../include/linux/user_namespace.h"
struct uid_gid_extent {
 u32 first;
 u32 lower_first;
 u32 count;
};

struct uid_gid_map {
 u32 nr_extents;
 union {
  struct uid_gid_extent extent[5];
  struct {
   struct uid_gid_extent *forward;
   struct uid_gid_extent *reverse;
  };
 };
};





struct ucounts;

enum ucount_type {
 UCOUNT_USER_NAMESPACES,
 UCOUNT_PID_NAMESPACES,
 UCOUNT_UTS_NAMESPACES,
 UCOUNT_IPC_NAMESPACES,
 UCOUNT_NET_NAMESPACES,
 UCOUNT_MNT_NAMESPACES,
 UCOUNT_CGROUP_NAMESPACES,
 UCOUNT_TIME_NAMESPACES,





 UCOUNT_FANOTIFY_GROUPS,
 UCOUNT_FANOTIFY_MARKS,

 UCOUNT_COUNTS,
};

enum rlimit_type {
 UCOUNT_RLIMIT_NPROC,
 UCOUNT_RLIMIT_MSGQUEUE,
 UCOUNT_RLIMIT_SIGPENDING,
 UCOUNT_RLIMIT_MEMLOCK,
 UCOUNT_RLIMIT_COUNTS,
};





struct user_namespace {
 struct uid_gid_map uid_map;
 struct uid_gid_map gid_map;
 struct uid_gid_map projid_map;
 struct user_namespace *parent;
 int level;
 kuid_t owner;
 kgid_t group;
 struct ns_common ns;
 unsigned long flags;


 bool parent_could_setfcap;







 struct list_head keyring_name_list;
 struct key *user_keyring_register;
 struct rw_semaphore keyring_sem;




 struct key *persistent_keyring_register;

 struct work_struct work;

 struct ctl_table_set set;
 struct ctl_table_header *sysctls;

 struct ucounts *ucounts;
 long ucount_max[UCOUNT_COUNTS];
 long rlimit_max[UCOUNT_RLIMIT_COUNTS];




} ;

struct ucounts {
 struct hlist_node node;
 struct user_namespace *ns;
 kuid_t uid;
 atomic_t count;
 atomic_long_t ucount[UCOUNT_COUNTS];
 atomic_long_t rlimit[UCOUNT_RLIMIT_COUNTS];
};

extern struct user_namespace init_user_ns;
extern struct ucounts init_ucounts;

bool setup_userns_sysctls(struct user_namespace *ns);
void retire_userns_sysctls(struct user_namespace *ns);
struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
struct ucounts * __attribute__((__warn_unused_result__)) get_ucounts(struct ucounts *ucounts);
void put_ucounts(struct ucounts *ucounts);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long get_rlimit_value(struct ucounts *ucounts, enum rlimit_type type)
{
 return atomic_long_read(&ucounts->rlimit[type]);
}

long inc_rlimit_ucounts(struct ucounts *ucounts, enum rlimit_type type, long v);
bool dec_rlimit_ucounts(struct ucounts *ucounts, enum rlimit_type type, long v);
long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type);
void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum rlimit_type type);
bool is_rlimit_overlimit(struct ucounts *ucounts, enum rlimit_type type, unsigned long max);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long get_userns_rlimit_max(struct user_namespace *ns, enum rlimit_type type)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_306(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(ns->rlimit_max[type]) == sizeof(char) || sizeof(ns->rlimit_max[type]) == sizeof(short) || sizeof(ns->rlimit_max[type]) == sizeof(int) || sizeof(ns->rlimit_max[type]) == sizeof(long)) || sizeof(ns->rlimit_max[type]) == sizeof(long long))) __compiletime_assert_306(); } while (0); (*(const volatile typeof( _Generic((ns->rlimit_max[type]), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (ns->rlimit_max[type]))) *)&(ns->rlimit_max[type])); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_userns_rlimit_max(struct user_namespace *ns,
  enum rlimit_type type, unsigned long max)
{
 ns->rlimit_max[type] = max <= ((long)(~0UL >> 1)) ? max : ((long)(~0UL >> 1));
}
# 192 "../include/linux/user_namespace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_namespace *get_user_ns(struct user_namespace *ns)
{
 return &init_user_ns;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int create_user_ns(struct cred *new)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unshare_userns(unsigned long unshare_flags,
     struct cred **new_cred)
{
 if (unshare_flags & 0x10000000)
  return -22;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_user_ns(struct user_namespace *ns)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool userns_may_setgroups(const struct user_namespace *ns)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool in_userns(const struct user_namespace *ancestor,
        const struct user_namespace *child)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool current_in_userns(const struct user_namespace *target_ns)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ns_common *ns_get_owner(struct ns_common *ns)
{
 return ERR_PTR(-1);
}
# 24 "../include/linux/cgroup.h" 2

# 1 "../include/linux/kernel_stat.h" 1







# 1 "../include/linux/interrupt.h" 1
# 22 "../include/linux/interrupt.h"
# 1 "./arch/hexagon/include/generated/asm/sections.h" 1
# 23 "../include/linux/interrupt.h" 2
# 99 "../include/linux/interrupt.h"
enum {
 IRQC_IS_HARDIRQ = 0,
 IRQC_IS_NESTED,
};

typedef irqreturn_t (*irq_handler_t)(int, void *);
# 122 "../include/linux/interrupt.h"
struct irqaction {
 irq_handler_t handler;
 void *dev_id;
 void *percpu_dev_id;
 struct irqaction *next;
 irq_handler_t thread_fn;
 struct task_struct *thread;
 struct irqaction *secondary;
 unsigned int irq;
 unsigned int flags;
 unsigned long thread_flags;
 unsigned long thread_mask;
 const char *name;
 struct proc_dir_entry *dir;
} ;

extern irqreturn_t no_action(int cpl, void *dev_id);
# 150 "../include/linux/interrupt.h"
extern int __attribute__((__warn_unused_result__))
request_threaded_irq(unsigned int irq, irq_handler_t handler,
       irq_handler_t thread_fn,
       unsigned long flags, const char *name, void *dev);
# 168 "../include/linux/interrupt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__))
request_irq(unsigned int irq, irq_handler_t handler, unsigned long flags,
     const char *name, void *dev)
{
 return request_threaded_irq(irq, handler, ((void *)0), flags | 0x00200000, name, dev);
}

extern int __attribute__((__warn_unused_result__))
request_any_context_irq(unsigned int irq, irq_handler_t handler,
   unsigned long flags, const char *name, void *dev_id);

extern int __attribute__((__warn_unused_result__))
__request_percpu_irq(unsigned int irq, irq_handler_t handler,
       unsigned long flags, const char *devname,
       void *percpu_dev_id);

extern int __attribute__((__warn_unused_result__))
request_nmi(unsigned int irq, irq_handler_t handler, unsigned long flags,
     const char *name, void *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__))
request_percpu_irq(unsigned int irq, irq_handler_t handler,
     const char *devname, void *percpu_dev_id)
{
 return __request_percpu_irq(irq, handler, 0,
        devname, percpu_dev_id);
}

extern int __attribute__((__warn_unused_result__))
request_percpu_nmi(unsigned int irq, irq_handler_t handler,
     const char *devname, void *dev);

extern const void *free_irq(unsigned int, void *);
extern void free_percpu_irq(unsigned int, void *);

extern const void *free_nmi(unsigned int irq, void *dev_id);
extern void free_percpu_nmi(unsigned int irq, void *percpu_dev_id);

struct device;

extern int __attribute__((__warn_unused_result__))
devm_request_threaded_irq(struct device *dev, unsigned int irq,
     irq_handler_t handler, irq_handler_t thread_fn,
     unsigned long irqflags, const char *devname,
     void *dev_id);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __attribute__((__warn_unused_result__))
devm_request_irq(struct device *dev, unsigned int irq, irq_handler_t handler,
   unsigned long irqflags, const char *devname, void *dev_id)
{
 return devm_request_threaded_irq(dev, irq, handler, ((void *)0), irqflags,
      devname, dev_id);
}

extern int __attribute__((__warn_unused_result__))
devm_request_any_context_irq(struct device *dev, unsigned int irq,
   irq_handler_t handler, unsigned long irqflags,
   const char *devname, void *dev_id);

extern void devm_free_irq(struct device *dev, unsigned int irq, void *dev_id);

bool irq_has_action(unsigned int irq);
extern void disable_irq_nosync(unsigned int irq);
extern bool disable_hardirq(unsigned int irq);
extern void disable_irq(unsigned int irq);
extern void disable_percpu_irq(unsigned int irq);
extern void enable_irq(unsigned int irq);
extern void enable_percpu_irq(unsigned int irq, unsigned int type);
extern bool irq_percpu_is_enabled(unsigned int irq);
extern void irq_wake_thread(unsigned int irq, void *dev_id);

typedef struct { int *lock; ; } class_disable_irq_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_disable_irq_destructor(class_disable_irq_t *_T) { if (_T->lock) { enable_irq(*_T->lock); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_disable_irq_lock_ptr(class_disable_irq_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_disable_irq_t class_disable_irq_constructor(int *l) { class_disable_irq_t _t = { .lock = l }, *_T = &_t; disable_irq(*_T->lock); return _t; }


extern void disable_nmi_nosync(unsigned int irq);
extern void disable_percpu_nmi(unsigned int irq);
extern void enable_nmi(unsigned int irq);
extern void enable_percpu_nmi(unsigned int irq, unsigned int type);
extern int prepare_percpu_nmi(unsigned int irq);
extern void teardown_percpu_nmi(unsigned int irq);

extern int irq_inject_interrupt(unsigned int irq);


extern void suspend_device_irqs(void);
extern void resume_device_irqs(void);
extern void rearm_wake_irq(unsigned int irq);
# 268 "../include/linux/interrupt.h"
struct irq_affinity_notify {
 unsigned int irq;
 struct kref kref;
 struct work_struct work;
 void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
 void (*release)(struct kref *ref);
};
# 292 "../include/linux/interrupt.h"
struct irq_affinity {
 unsigned int pre_vectors;
 unsigned int post_vectors;
 unsigned int nr_sets;
 unsigned int set_size[4];
 void (*calc_sets)(struct irq_affinity *, unsigned int nvecs);
 void *priv;
};






struct irq_affinity_desc {
 struct cpumask mask;
 unsigned int is_managed : 1;
};
# 375 "../include/linux/interrupt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_set_affinity(unsigned int irq, const struct cpumask *m)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_force_affinity(unsigned int irq, const struct cpumask *cpumask)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_can_set_affinity(unsigned int irq)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_select_affinity(unsigned int irq) { return 0; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_update_affinity_hint(unsigned int irq,
        const struct cpumask *m)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_set_affinity_and_hint(unsigned int irq,
         const struct cpumask *m)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_set_affinity_hint(unsigned int irq,
     const struct cpumask *m)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int irq_update_affinity_desc(unsigned int irq,
        struct irq_affinity_desc *affinity)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct irq_affinity_desc *
irq_create_affinity_masks(unsigned int nvec, struct irq_affinity *affd)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int
irq_calc_affinity_vectors(unsigned int minvec, unsigned int maxvec,
     const struct irq_affinity *affd)
{
 return maxvec;
}
# 448 "../include/linux/interrupt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void disable_irq_nosync_lockdep(unsigned int irq)
{
 disable_irq_nosync(irq);

 do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void disable_irq_nosync_lockdep_irqsave(unsigned int irq, unsigned long *flags)
{
 disable_irq_nosync(irq);

 do { do { ({ unsigned long __dummy; typeof(*flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); *flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(*flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(*flags); })) trace_hardirqs_off(); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void disable_irq_lockdep(unsigned int irq)
{
 disable_irq(irq);

 do { bool was_disabled = (arch_irqs_disabled()); arch_local_irq_disable(); if (!was_disabled) trace_hardirqs_off(); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void enable_irq_lockdep(unsigned int irq)
{

 do { trace_hardirqs_on(); arch_local_irq_enable(); } while (0);

 enable_irq(irq);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void enable_irq_lockdep_irqrestore(unsigned int irq, unsigned long *flags)
{

 do { if (!({ ({ unsigned long __dummy; typeof(*flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(*flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(*flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(*flags); } while (0); } while (0);

 enable_irq(irq);
}


extern int irq_set_irq_wake(unsigned int irq, unsigned int on);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int enable_irq_wake(unsigned int irq)
{
 return irq_set_irq_wake(irq, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int disable_irq_wake(unsigned int irq)
{
 return irq_set_irq_wake(irq, 0);
}




enum irqchip_irq_state {
 IRQCHIP_STATE_PENDING,
 IRQCHIP_STATE_ACTIVE,
 IRQCHIP_STATE_MASKED,
 IRQCHIP_STATE_LINE_LEVEL,
};

extern int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
     bool *state);
extern int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
     bool state);
# 555 "../include/linux/interrupt.h"
enum
{
 HI_SOFTIRQ=0,
 TIMER_SOFTIRQ,
 NET_TX_SOFTIRQ,
 NET_RX_SOFTIRQ,
 BLOCK_SOFTIRQ,
 IRQ_POLL_SOFTIRQ,
 TASKLET_SOFTIRQ,
 SCHED_SOFTIRQ,
 HRTIMER_SOFTIRQ,
 RCU_SOFTIRQ,

 NR_SOFTIRQS
};
# 589 "../include/linux/interrupt.h"
extern const char * const softirq_to_name[NR_SOFTIRQS];





struct softirq_action
{
 void (*action)(struct softirq_action *);
};

           void do_softirq(void);
           void __do_softirq(void);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void do_softirq_post_smp_call_flush(unsigned int unused)
{
 do_softirq();
}


extern void open_softirq(int nr, void (*action)(struct softirq_action *));
extern void softirq_init(void);
extern void __raise_softirq_irqoff(unsigned int nr);

extern void raise_softirq_irqoff(unsigned int nr);
extern void raise_softirq(unsigned int nr);

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_ksoftirqd; extern __attribute__((section(".data" ""))) __typeof__(struct task_struct *) ksoftirqd;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *this_cpu_ksoftirqd(void)
{
 return ({ typeof(ksoftirqd) pscr_ret__; do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(ksoftirqd)) { case 1: pscr_ret__ = ({ typeof(ksoftirqd) __ret; if ((sizeof(ksoftirqd) == sizeof(char) || sizeof(ksoftirqd) == sizeof(short) || sizeof(ksoftirqd) == sizeof(int) || sizeof(ksoftirqd) == sizeof(long))) __ret = ({ typeof(ksoftirqd) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_307(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long long))) __compiletime_assert_307(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned 
short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(ksoftirqd) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(ksoftirqd) __ret; if ((sizeof(ksoftirqd) == sizeof(char) || sizeof(ksoftirqd) == sizeof(short) || sizeof(ksoftirqd) == sizeof(int) || sizeof(ksoftirqd) == sizeof(long))) __ret = ({ typeof(ksoftirqd) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_308(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(char) || sizeof(*({ 
(void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long long))) __compiletime_assert_308(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(ksoftirqd) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = 
arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(ksoftirqd) __ret; if ((sizeof(ksoftirqd) == sizeof(char) || sizeof(ksoftirqd) == sizeof(short) || sizeof(ksoftirqd) == sizeof(int) || sizeof(ksoftirqd) == sizeof(long))) __ret = ({ typeof(ksoftirqd) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_309(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long long))) __compiletime_assert_309(); } while 
(0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(ksoftirqd) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(ksoftirqd) __ret; if ((sizeof(ksoftirqd) == sizeof(char) || sizeof(ksoftirqd) == sizeof(short) || sizeof(ksoftirqd) == sizeof(int) || sizeof(ksoftirqd) == sizeof(long))) __ret = ({ typeof(ksoftirqd) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void 
__compiletime_assert_310(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })) == sizeof(long long))) __compiletime_assert_310(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 
0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(ksoftirqd) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(ksoftirqd)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(ksoftirqd))) *)(&(ksoftirqd)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; });
}
# 649 "../include/linux/interrupt.h"
struct tasklet_struct
{
 struct tasklet_struct *next;
 unsigned long state;
 atomic_t count;
 bool use_callback;
 union {
  void (*func)(unsigned long data);
  void (*callback)(struct tasklet_struct *t);
 };
 unsigned long data;
};
# 691 "../include/linux/interrupt.h"
enum
{
 TASKLET_STATE_SCHED,
 TASKLET_STATE_RUN
};
# 708 "../include/linux/interrupt.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int tasklet_trylock(struct tasklet_struct *t) { return 1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_unlock(struct tasklet_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_unlock_wait(struct tasklet_struct *t) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_unlock_spin_wait(struct tasklet_struct *t) { }


extern void __tasklet_schedule(struct tasklet_struct *t);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_schedule(struct tasklet_struct *t)
{
 if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
  __tasklet_schedule(t);
}

extern void __tasklet_hi_schedule(struct tasklet_struct *t);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_hi_schedule(struct tasklet_struct *t)
{
 if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
  __tasklet_hi_schedule(t);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_disable_nosync(struct tasklet_struct *t)
{
 atomic_inc(&t->count);
 __asm__ __volatile__("": : :"memory");
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_disable_in_atomic(struct tasklet_struct *t)
{
 tasklet_disable_nosync(t);
 tasklet_unlock_spin_wait(t);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_disable(struct tasklet_struct *t)
{
 tasklet_disable_nosync(t);
 tasklet_unlock_wait(t);
 __asm__ __volatile__("": : :"memory");
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tasklet_enable(struct tasklet_struct *t)
{
 __asm__ __volatile__("": : :"memory");
 atomic_dec(&t->count);
}

extern void tasklet_kill(struct tasklet_struct *t);
extern void tasklet_init(struct tasklet_struct *t,
    void (*func)(unsigned long), unsigned long data);
extern void tasklet_setup(struct tasklet_struct *t,
     void (*callback)(struct tasklet_struct *));
# 808 "../include/linux/interrupt.h"
extern unsigned long probe_irq_on(void);
extern int probe_irq_off(unsigned long);
extern unsigned int probe_irq_mask(unsigned long);




extern void init_irq_proc(void);
# 828 "../include/linux/interrupt.h"
struct seq_file;
int show_interrupts(struct seq_file *p, void *v);
int arch_show_interrupts(struct seq_file *p, int prec);

extern int early_irq_init(void);
extern int arch_probe_nr_irqs(void);
extern int arch_early_irq_init(void);
# 9 "../include/linux/kernel_stat.h" 2
# 19 "../include/linux/kernel_stat.h"
enum cpu_usage_stat {
 CPUTIME_USER,
 CPUTIME_NICE,
 CPUTIME_SYSTEM,
 CPUTIME_SOFTIRQ,
 CPUTIME_IRQ,
 CPUTIME_IDLE,
 CPUTIME_IOWAIT,
 CPUTIME_STEAL,
 CPUTIME_GUEST,
 CPUTIME_GUEST_NICE,



 NR_STATS,
};

struct kernel_cpustat {
 u64 cpustat[NR_STATS];
};

struct kernel_stat {
 unsigned long irqs_sum;
 unsigned int softirqs[NR_SOFTIRQS];
};

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_kstat; extern __attribute__((section(".data" ""))) __typeof__(struct kernel_stat) kstat;
extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_kernel_cpustat; extern __attribute__((section(".data" ""))) __typeof__(struct kernel_cpustat) kernel_cpustat;







extern unsigned long long nr_context_switches_cpu(int cpu);
extern unsigned long long nr_context_switches(void);

extern unsigned int kstat_irqs_cpu(unsigned int irq, int cpu);
extern void kstat_incr_irq_this_cpu(unsigned int irq);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kstat_incr_softirqs_this_cpu(unsigned int irq)
{
 ({ __this_cpu_preempt_check("add"); do { do { const void *__vpp_verify = (typeof((&(kstat.softirqs[irq])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(kstat.softirqs[irq])) { case 1: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(kstat.softirqs[irq])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat.softirqs[irq]))) *)(&(kstat.softirqs[irq])); }); }) += 1; } while (0);break; case 2: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(kstat.softirqs[irq])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat.softirqs[irq]))) *)(&(kstat.softirqs[irq])); }); }) += 1; } while (0);break; case 4: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(kstat.softirqs[irq])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat.softirqs[irq]))) *)(&(kstat.softirqs[irq])); }); }) += 1; } while (0);break; case 8: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(kstat.softirqs[irq])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat.softirqs[irq]))) *)(&(kstat.softirqs[irq])); }); }) += 1; } while (0);break; default: __bad_size_call_parameter();break; } } while (0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int kstat_softirqs_cpu(unsigned int irq, int cpu)
{
       return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(kstat)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat))) *)(&(kstat)); }); })).softirqs[irq];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int kstat_cpu_softirqs_sum(int cpu)
{
 int i;
 unsigned int sum = 0;

 for (i = 0; i < NR_SOFTIRQS; i++)
  sum += kstat_softirqs_cpu(i, cpu);

 return sum;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kstat_snapshot_irqs(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int kstat_get_irq_since_snapshot(unsigned int irq) { return 0; }





extern unsigned int kstat_irqs_usr(unsigned int irq);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long kstat_cpu_irqs_sum(unsigned int cpu)
{
 return (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(kstat)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kstat))) *)(&(kstat)); }); })).irqs_sum;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 kcpustat_field(struct kernel_cpustat *kcpustat,
     enum cpu_usage_stat usage, int cpu)
{
 return kcpustat->cpustat[usage];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
{
 *dst = (*({ (void)(cpu); ({ do { const void *__vpp_verify = (typeof((&(kernel_cpustat)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(kernel_cpustat))) *)(&(kernel_cpustat)); }); }));
}



extern void account_user_time(struct task_struct *, u64);
extern void account_guest_time(struct task_struct *, u64);
extern void account_system_time(struct task_struct *, int, u64);
extern void account_system_index_time(struct task_struct *, u64,
          enum cpu_usage_stat);
extern void account_steal_time(u64);
extern void account_idle_time(u64);
extern u64 get_idle_time(struct kernel_cpustat *kcs, int cpu);







extern void account_process_tick(struct task_struct *, int user);


extern void account_idle_ticks(unsigned long ticks);
# 26 "../include/linux/cgroup.h" 2

# 1 "../include/linux/cgroup-defs.h" 1
# 20 "../include/linux/cgroup-defs.h"
# 1 "../include/linux/u64_stats_sync.h" 1
# 64 "../include/linux/u64_stats_sync.h"
struct u64_stats_sync {

 seqcount_t seq;

};
# 114 "../include/linux/u64_stats_sync.h"
typedef struct {
 u64 v;
} u64_stats_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 u64_stats_read(const u64_stats_t *p)
{
 return p->v;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_set(u64_stats_t *p, u64 val)
{
 p->v = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_add(u64_stats_t *p, unsigned long val)
{
 p->v += val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_inc(u64_stats_t *p)
{
 p->v++;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __u64_stats_update_begin(struct u64_stats_sync *syncp)
{
 do { if (0) __asm__ __volatile__("": : :"memory"); else do { ({ bool __ret_do_once = !!(0 && (debug_locks && !({ typeof(lockdep_recursion) pscr_ret__; do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(lockdep_recursion)) { case 1: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_311(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_311(); } while (0); (*(const volatile 
typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ 
typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_312(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_312(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long 
long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_313(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 
0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_313(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned 
long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(lockdep_recursion) __ret; if ((sizeof(lockdep_recursion) == sizeof(char) || sizeof(lockdep_recursion) == sizeof(short) || sizeof(lockdep_recursion) == sizeof(int) || sizeof(lockdep_recursion) == sizeof(long))) __ret = ({ typeof(lockdep_recursion) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_314(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == 
sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })) == sizeof(long long))) __compiletime_assert_314(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(lockdep_recursion) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(lockdep_recursion)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(lockdep_recursion))) *)(&(lockdep_recursion)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } 
while (0); ___ret; }); __ret; }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; })) && (preempt_count() == 0 && ({ typeof(hardirqs_enabled) pscr_ret__; do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(hardirqs_enabled)) { case 1: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_315(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_315(); } while (0); (*(const volatile typeof( 
_Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 2: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; 
__asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_316(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_316(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void 
*__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 4: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_317(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_317(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == 
&__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; case 8: pscr_ret__ = ({ typeof(hardirqs_enabled) __ret; if ((sizeof(hardirqs_enabled) == sizeof(char) || sizeof(hardirqs_enabled) == sizeof(short) || sizeof(hardirqs_enabled) == sizeof(int) || sizeof(hardirqs_enabled) == sizeof(long))) __ret = ({ typeof(hardirqs_enabled) ___ret; __asm__ __volatile__("": : :"memory"); ___ret = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_318(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(char) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(short) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(int) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long)) || sizeof(*({ (void)(0); ({ do { const void *__vpp_verify = 
(typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })) == sizeof(long long))) __compiletime_assert_318(); } while (0); (*(const volatile typeof( _Generic((*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); })))) *)&(*({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }))); }); __asm__ __volatile__("": : :"memory"); ___ret; }); else __ret = ({ typeof(hardirqs_enabled) ___ret; unsigned long ___flags; do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); ___flags = arch_local_irq_save(); } while (0); ___ret = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(hardirqs_enabled)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(hardirqs_enabled))) *)(&(hardirqs_enabled)); }); }); }); do { ({ unsigned long __dummy; typeof(___flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(___flags); } while (0); ___ret; }); __ret; }); break; default: __bad_size_call_parameter(); break; } 
pscr_ret__; }))); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/u64_stats_sync.h", 146, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }); } while (0); } while (0);
 do { _Generic(*(&syncp->seq), seqcount_t: __seqprop_assert, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_assert, seqcount_spinlock_t: __seqprop_spinlock_assert, seqcount_rwlock_t: __seqprop_rwlock_assert, seqcount_mutex_t: __seqprop_mutex_assert)(&syncp->seq); if (_Generic(*(&syncp->seq), seqcount_t: __seqprop_preemptible, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_preemptible, seqcount_spinlock_t: __seqprop_spinlock_preemptible, seqcount_rwlock_t: __seqprop_rwlock_preemptible, seqcount_mutex_t: __seqprop_mutex_preemptible)(&syncp->seq)) __asm__ __volatile__("": : :"memory"); do_write_seqcount_begin(_Generic(*(&syncp->seq), seqcount_t: __seqprop_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_ptr, seqcount_spinlock_t: __seqprop_spinlock_ptr, seqcount_rwlock_t: __seqprop_rwlock_ptr, seqcount_mutex_t: __seqprop_mutex_ptr)(&syncp->seq)); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __u64_stats_update_end(struct u64_stats_sync *syncp)
{
 do { do_write_seqcount_end(_Generic(*(&syncp->seq), seqcount_t: __seqprop_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_ptr, seqcount_spinlock_t: __seqprop_spinlock_ptr, seqcount_rwlock_t: __seqprop_rwlock_ptr, seqcount_mutex_t: __seqprop_mutex_ptr)(&syncp->seq)); if (_Generic(*(&syncp->seq), seqcount_t: __seqprop_preemptible, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_preemptible, seqcount_spinlock_t: __seqprop_spinlock_preemptible, seqcount_rwlock_t: __seqprop_rwlock_preemptible, seqcount_mutex_t: __seqprop_mutex_preemptible)(&syncp->seq)) __asm__ __volatile__("": : :"memory"); } while (0);
 preempt_enable_nested();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long __u64_stats_irqsave(void)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __u64_stats_irqrestore(unsigned long flags)
{
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
{
 return ({ seqcount_lockdep_reader_access(_Generic(*(&syncp->seq), seqcount_t: __seqprop_const_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_const_ptr, seqcount_spinlock_t: __seqprop_spinlock_const_ptr, seqcount_rwlock_t: __seqprop_rwlock_const_ptr, seqcount_mutex_t: __seqprop_mutex_const_ptr)(&syncp->seq)); ({ unsigned _seq = ({ unsigned __seq; while ((__seq = _Generic(*(&syncp->seq), seqcount_t: __seqprop_sequence, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_sequence, seqcount_spinlock_t: __seqprop_spinlock_sequence, seqcount_rwlock_t: __seqprop_rwlock_sequence, seqcount_mutex_t: __seqprop_mutex_sequence)(&syncp->seq)) & 1) __vmyield(); kcsan_atomic_next(1000); __seq; }); __asm__ __volatile__("": : :"memory"); _seq; }); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
        unsigned int start)
{
 return do_read_seqcount_retry(_Generic(*(&syncp->seq), seqcount_t: __seqprop_const_ptr, seqcount_raw_spinlock_t: __seqprop_raw_spinlock_const_ptr, seqcount_spinlock_t: __seqprop_spinlock_const_ptr, seqcount_rwlock_t: __seqprop_rwlock_const_ptr, seqcount_mutex_t: __seqprop_mutex_const_ptr)(&syncp->seq), start);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_update_begin(struct u64_stats_sync *syncp)
{
 __u64_stats_update_begin(syncp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_update_end(struct u64_stats_sync *syncp)
{
 __u64_stats_update_end(syncp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long u64_stats_update_begin_irqsave(struct u64_stats_sync *syncp)
{
 unsigned long flags = __u64_stats_irqsave();

 __u64_stats_update_begin(syncp);
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void u64_stats_update_end_irqrestore(struct u64_stats_sync *syncp,
         unsigned long flags)
{
 __u64_stats_update_end(syncp);
 __u64_stats_irqrestore(flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
{
 return __u64_stats_fetch_begin(syncp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
      unsigned int start)
{
 return __u64_stats_fetch_retry(syncp, start);
}
# 21 "../include/linux/cgroup-defs.h" 2

# 1 "../include/linux/bpf-cgroup-defs.h" 1
# 81 "../include/linux/bpf-cgroup-defs.h"
struct cgroup_bpf {};
# 23 "../include/linux/cgroup-defs.h" 2
# 1 "../include/linux/psi_types.h" 1




# 1 "../include/linux/kthread.h" 1







struct mm_struct;

__attribute__((__format__(printf, 4, 5)))
struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
        void *data,
        int node,
        const char namefmt[], ...);
# 31 "../include/linux/kthread.h"
struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
       void *data,
       unsigned int cpu,
       const char *namefmt);

void get_kthread_comm(char *buf, size_t buf_size, struct task_struct *tsk);
bool set_kthread_struct(struct task_struct *p);

void kthread_set_per_cpu(struct task_struct *k, int cpu);
bool kthread_is_per_cpu(struct task_struct *k);
# 72 "../include/linux/kthread.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct task_struct *
kthread_run_on_cpu(int (*threadfn)(void *data), void *data,
   unsigned int cpu, const char *namefmt)
{
 struct task_struct *p;

 p = kthread_create_on_cpu(threadfn, data, cpu, namefmt);
 if (!IS_ERR(p))
  wake_up_process(p);

 return p;
}

void free_kthread_struct(struct task_struct *k);
void kthread_bind(struct task_struct *k, unsigned int cpu);
void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
int kthread_stop(struct task_struct *k);
int kthread_stop_put(struct task_struct *k);
bool kthread_should_stop(void);
bool kthread_should_park(void);
bool kthread_should_stop_or_park(void);
bool kthread_freezable_should_stop(bool *was_frozen);
void *kthread_func(struct task_struct *k);
void *kthread_data(struct task_struct *k);
void *kthread_probe_data(struct task_struct *k);
int kthread_park(struct task_struct *k);
void kthread_unpark(struct task_struct *k);
void kthread_parkme(void);
void kthread_exit(long result) __attribute__((__noreturn__));
void kthread_complete_and_exit(struct completion *, long) __attribute__((__noreturn__));

int kthreadd(void *unused);
extern struct task_struct *kthreadd_task;
extern int tsk_fork_get_node(struct task_struct *tsk);
# 115 "../include/linux/kthread.h"
struct kthread_work;
typedef void (*kthread_work_func_t)(struct kthread_work *work);
void kthread_delayed_work_timer_fn(struct timer_list *t);

enum {
 KTW_FREEZABLE = 1 << 0,
};

struct kthread_worker {
 unsigned int flags;
 raw_spinlock_t lock;
 struct list_head work_list;
 struct list_head delayed_work_list;
 struct task_struct *task;
 struct kthread_work *current_work;
};

struct kthread_work {
 struct list_head node;
 kthread_work_func_t func;
 struct kthread_worker *worker;

 int canceling;
};

struct kthread_delayed_work {
 struct kthread_work work;
 struct timer_list timer;
};
# 163 "../include/linux/kthread.h"
extern void __kthread_init_worker(struct kthread_worker *worker,
   const char *name, struct lock_class_key *key);
# 187 "../include/linux/kthread.h"
int kthread_worker_fn(void *worker_ptr);

__attribute__((__format__(printf, 2, 3)))
struct kthread_worker *
kthread_create_worker(unsigned int flags, const char namefmt[], ...);

__attribute__((__format__(printf, 3, 4))) struct kthread_worker *
kthread_create_worker_on_cpu(int cpu, unsigned int flags,
        const char namefmt[], ...);

bool kthread_queue_work(struct kthread_worker *worker,
   struct kthread_work *work);

bool kthread_queue_delayed_work(struct kthread_worker *worker,
    struct kthread_delayed_work *dwork,
    unsigned long delay);

bool kthread_mod_delayed_work(struct kthread_worker *worker,
         struct kthread_delayed_work *dwork,
         unsigned long delay);

void kthread_flush_work(struct kthread_work *work);
void kthread_flush_worker(struct kthread_worker *worker);

bool kthread_cancel_work_sync(struct kthread_work *work);
bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *work);

void kthread_destroy_worker(struct kthread_worker *worker);

void kthread_use_mm(struct mm_struct *mm);
void kthread_unuse_mm(struct mm_struct *mm);

struct cgroup_subsys_state;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void kthread_associate_blkcg(struct cgroup_subsys_state *css) { }
# 6 "../include/linux/psi_types.h" 2








enum psi_task_count {
 NR_IOWAIT,
 NR_MEMSTALL,
 NR_RUNNING,
# 27 "../include/linux/psi_types.h"
 NR_MEMSTALL_RUNNING,
 NR_PSI_TASK_COUNTS = 4,
};
# 41 "../include/linux/psi_types.h"
enum psi_res {
 PSI_IO,
 PSI_MEM,
 PSI_CPU,



 NR_PSI_RESOURCES,
};







enum psi_states {
 PSI_IO_SOME,
 PSI_IO_FULL,
 PSI_MEM_SOME,
 PSI_MEM_FULL,
 PSI_CPU_SOME,
 PSI_CPU_FULL,




 PSI_NONIDLE,
 NR_PSI_STATES,
};







enum psi_aggregators {
 PSI_AVGS = 0,
 PSI_POLL,
 NR_PSI_AGGREGATORS,
};

struct psi_group_cpu {



 seqcount_t seq ;


 unsigned int tasks[NR_PSI_TASK_COUNTS];


 u32 state_mask;


 u32 times[NR_PSI_STATES];


 u64 state_start;




 u32 times_prev[NR_PSI_AGGREGATORS][NR_PSI_STATES]
                               ;
};


struct psi_window {

 u64 size;


 u64 start_time;


 u64 start_value;


 u64 prev_growth;
};

struct psi_trigger {

 enum psi_states state;


 u64 threshold;


 struct list_head node;


 struct psi_group *group;


 wait_queue_head_t event_wait;


 struct kernfs_open_file *of;


 int event;


 struct psi_window win;





 u64 last_event_time;


 bool pending_event;


 enum psi_aggregators aggregator;
};

struct psi_group {
 struct psi_group *parent;
 bool enabled;


 struct mutex avgs_lock;


 struct psi_group_cpu *pcpu;


 u64 avg_total[NR_PSI_STATES - 1];
 u64 avg_last_update;
 u64 avg_next_update;


 struct delayed_work avgs_work;


 struct list_head avg_triggers;
 u32 avg_nr_triggers[NR_PSI_STATES - 1];


 u64 total[NR_PSI_AGGREGATORS][NR_PSI_STATES - 1];
 unsigned long avg[NR_PSI_STATES - 1][3];


 struct task_struct *rtpoll_task;
 struct timer_list rtpoll_timer;
 wait_queue_head_t rtpoll_wait;
 atomic_t rtpoll_wakeup;
 atomic_t rtpoll_scheduled;


 struct mutex rtpoll_trigger_lock;


 struct list_head rtpoll_triggers;
 u32 rtpoll_nr_triggers[NR_PSI_STATES - 1];
 u32 rtpoll_states;
 u64 rtpoll_min_period;


 u64 rtpoll_total[NR_PSI_STATES - 1];
 u64 rtpoll_next_update;
 u64 rtpoll_until;
};
# 24 "../include/linux/cgroup-defs.h" 2
# 806 "../include/linux/cgroup-defs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_threadgroup_change_begin(struct task_struct *tsk)
{
 do { do { } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_threadgroup_change_end(struct task_struct *tsk) {}
# 873 "../include/linux/cgroup-defs.h"
struct sock_cgroup_data {
};
# 28 "../include/linux/cgroup.h" 2

struct kernel_clone_args;
# 636 "../include/linux/cgroup.h"
struct cgroup_subsys_state;
struct cgroup;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 cgroup_id(const struct cgroup *cgrp) { return 1; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void css_get(struct cgroup_subsys_state *css) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void css_put(struct cgroup_subsys_state *css) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_lock(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_unlock(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cgroup_attach_task_all(struct task_struct *from,
      struct task_struct *t) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cgroupstats_build(struct cgroupstats *stats,
        struct dentry *dentry) { return -22; }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_fork(struct task_struct *p) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cgroup_can_fork(struct task_struct *p,
      struct kernel_clone_args *kargs) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_cancel_fork(struct task_struct *p,
          struct kernel_clone_args *kargs) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_post_fork(struct task_struct *p,
        struct kernel_clone_args *kargs) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_exit(struct task_struct *p) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_release(struct task_struct *p) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_free(struct task_struct *p) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cgroup_init_early(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cgroup_init(void) { return 0; }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_init_kthreadd(void) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_kthread_ready(void) {}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cgroup *cgroup_parent(struct cgroup *cgrp)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cgroup_psi_enabled(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_under_cgroup_hierarchy(struct task_struct *task,
            struct cgroup *ancestor)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
{}
# 737 "../include/linux/cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_account_cputime(struct task_struct *task,
       u64 delta_exec) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_account_cputime_field(struct task_struct *task,
      enum cpu_usage_stat index,
      u64 delta_exec) {}
# 762 "../include/linux/cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_sk_alloc(struct sock_cgroup_data *skcd) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_sk_clone(struct sock_cgroup_data *skcd) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_sk_free(struct sock_cgroup_data *skcd) {}



struct cgroup_namespace {
 struct ns_common ns;
 struct user_namespace *user_ns;
 struct ucounts *ucounts;
 struct css_set *root_cset;
};

extern struct cgroup_namespace init_cgroup_ns;
# 790 "../include/linux/cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void free_cgroup_ns(struct cgroup_namespace *ns) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct cgroup_namespace *
copy_cgroup_ns(unsigned long flags, struct user_namespace *user_ns,
        struct cgroup_namespace *old_ns)
{
 return old_ns;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void get_cgroup_ns(struct cgroup_namespace *ns)
{
 if (ns)
  refcount_inc(&ns->ns.count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_cgroup_ns(struct cgroup_namespace *ns)
{
 if (ns && refcount_dec_and_test(&ns->ns.count))
  free_cgroup_ns(ns);
}
# 828 "../include/linux/cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_enter_frozen(void) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_leave_frozen(bool always_leave) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool cgroup_task_frozen(struct task_struct *task)
{
 return false;
}
# 850 "../include/linux/cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_bpf_get(struct cgroup *cgrp) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_bpf_put(struct cgroup *cgrp) {}



struct cgroup *task_get_cgroup1(struct task_struct *tsk, int hierarchy_id);
# 14 "../include/linux/memcontrol.h" 2




# 1 "../include/linux/page_counter.h" 1
# 10 "../include/linux/page_counter.h"
struct page_counter {




 atomic_long_t usage;
                          ;


 unsigned long emin;
 atomic_long_t min_usage;
 atomic_long_t children_min_usage;


 unsigned long elow;
 atomic_long_t low_usage;
 atomic_long_t children_low_usage;

 unsigned long watermark;
 unsigned long failcnt;


                          ;

 unsigned long min;
 unsigned long low;
 unsigned long high;
 unsigned long max;
 struct page_counter *parent;
} ;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_counter_init(struct page_counter *counter,
         struct page_counter *parent)
{
 atomic_long_set(&counter->usage, 0);
 counter->max = ((long)(~0UL >> 1));
 counter->parent = parent;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long page_counter_read(struct page_counter *counter)
{
 return atomic_long_read(&counter->usage);
}

void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages);
void page_counter_charge(struct page_counter *counter, unsigned long nr_pages);
bool page_counter_try_charge(struct page_counter *counter,
        unsigned long nr_pages,
        struct page_counter **fail);
void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages);
void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages);
void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_counter_set_high(struct page_counter *counter,
      unsigned long nr_pages)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_319(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(counter->high) == sizeof(char) || sizeof(counter->high) == sizeof(short) || sizeof(counter->high) == sizeof(int) || sizeof(counter->high) == sizeof(long)) || sizeof(counter->high) == sizeof(long long))) __compiletime_assert_319(); } while (0); do { *(volatile typeof(counter->high) *)&(counter->high) = (nr_pages); } while (0); } while (0);
}

int page_counter_set_max(struct page_counter *counter, unsigned long nr_pages);
int page_counter_memparse(const char *buf, const char *max,
     unsigned long *nr_pages);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void page_counter_reset_watermark(struct page_counter *counter)
{
 counter->watermark = page_counter_read(counter);
}

void page_counter_calculate_protection(struct page_counter *root,
           struct page_counter *counter,
           bool recursive_protection);
# 19 "../include/linux/memcontrol.h" 2
# 1 "../include/linux/vmpressure.h" 1
# 11 "../include/linux/vmpressure.h"
# 1 "../include/linux/eventfd.h" 1
# 17 "../include/linux/eventfd.h"
# 1 "../include/uapi/linux/eventfd.h" 1
# 18 "../include/linux/eventfd.h" 2
# 29 "../include/linux/eventfd.h"
struct eventfd_ctx;
struct file;
# 55 "../include/linux/eventfd.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct eventfd_ctx *eventfd_ctx_fdget(int fd)
{
 return ERR_PTR(-38);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void eventfd_signal_mask(struct eventfd_ctx *ctx, __poll_t mask)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void eventfd_ctx_put(struct eventfd_ctx *ctx)
{

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx,
      wait_queue_entry_t *wait, __u64 *cnt)
{
 return -38;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool eventfd_signal_allowed(void)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
{

}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void eventfd_signal(struct eventfd_ctx *ctx)
{
 eventfd_signal_mask(ctx, 0);
}
# 12 "../include/linux/vmpressure.h" 2

struct vmpressure {
 unsigned long scanned;
 unsigned long reclaimed;

 unsigned long tree_scanned;
 unsigned long tree_reclaimed;

 spinlock_t sr_lock;


 struct list_head events;

 struct mutex events_lock;

 struct work_struct work;
};

struct mem_cgroup;
# 47 "../include/linux/vmpressure.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
         unsigned long scanned, unsigned long reclaimed) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void vmpressure_prio(gfp_t gfp, struct mem_cgroup *memcg,
       int prio) {}
# 20 "../include/linux/memcontrol.h" 2



# 1 "../include/linux/writeback.h" 1
# 11 "../include/linux/writeback.h"
# 1 "../include/linux/flex_proportions.h" 1
# 28 "../include/linux/flex_proportions.h"
struct fprop_global {

 struct percpu_counter events;

 unsigned int period;

 seqcount_t sequence;
};

int fprop_global_init(struct fprop_global *p, gfp_t gfp);
void fprop_global_destroy(struct fprop_global *p);
bool fprop_new_period(struct fprop_global *p, int periods);




struct fprop_local_percpu {

 struct percpu_counter events;

 unsigned int period;
 raw_spinlock_t lock;
};

int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp);
void fprop_local_destroy_percpu(struct fprop_local_percpu *pl);
void __fprop_add_percpu(struct fprop_global *p, struct fprop_local_percpu *pl,
  long nr);
void __fprop_add_percpu_max(struct fprop_global *p,
  struct fprop_local_percpu *pl, int max_frac, long nr);
void fprop_fraction_percpu(struct fprop_global *p,
 struct fprop_local_percpu *pl, unsigned long *numerator,
 unsigned long *denominator);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl)
{
 unsigned long flags;

 do { do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); flags = arch_local_irq_save(); } while (0); if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_off(); } while (0);
 __fprop_add_percpu(p, pl, 1);
 do { if (!({ ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); arch_irqs_disabled_flags(flags); })) trace_hardirqs_on(); do { ({ unsigned long __dummy; typeof(flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(flags); } while (0); } while (0);
}
# 12 "../include/linux/writeback.h" 2
# 1 "../include/linux/backing-dev-defs.h" 1
# 17 "../include/linux/backing-dev-defs.h"
struct page;
struct device;
struct dentry;




enum wb_state {
 WB_registered,
 WB_writeback_running,
 WB_has_dirty_io,
 WB_start_all,
};

enum wb_stat_item {
 WB_RECLAIMABLE,
 WB_WRITEBACK,
 WB_DIRTIED,
 WB_WRITTEN,
 NR_WB_STAT_ITEMS
};






enum wb_reason {
 WB_REASON_BACKGROUND,
 WB_REASON_VMSCAN,
 WB_REASON_SYNC,
 WB_REASON_PERIODIC,
 WB_REASON_LAPTOP_TIMER,
 WB_REASON_FS_FREE_SPACE,






 WB_REASON_FORKER_THREAD,
 WB_REASON_FOREIGN_FLUSH,

 WB_REASON_MAX,
};

struct wb_completion {
 atomic_t cnt;
 wait_queue_head_t *waitq;
};
# 105 "../include/linux/backing-dev-defs.h"
struct bdi_writeback {
 struct backing_dev_info *bdi;

 unsigned long state;
 unsigned long last_old_flush;

 struct list_head b_dirty;
 struct list_head b_io;
 struct list_head b_more_io;
 struct list_head b_dirty_time;
 spinlock_t list_lock;

 atomic_t writeback_inodes;
 struct percpu_counter stat[NR_WB_STAT_ITEMS];

 unsigned long bw_time_stamp;
 unsigned long dirtied_stamp;
 unsigned long written_stamp;
 unsigned long write_bandwidth;
 unsigned long avg_write_bandwidth;







 unsigned long dirty_ratelimit;
 unsigned long balanced_dirty_ratelimit;

 struct fprop_local_percpu completions;
 int dirty_exceeded;
 enum wb_reason start_all_reason;

 spinlock_t work_lock;
 struct list_head work_list;
 struct delayed_work dwork;
 struct delayed_work bw_dwork;

 struct list_head bdi_node;
# 161 "../include/linux/backing-dev-defs.h"
};

struct backing_dev_info {
 u64 id;
 struct rb_node rb_node;
 struct list_head bdi_list;
 unsigned long ra_pages;
 unsigned long io_pages;

 struct kref refcnt;
 unsigned int capabilities;
 unsigned int min_ratio;
 unsigned int max_ratio, max_prop_frac;





 atomic_long_t tot_write_bandwidth;




 unsigned long last_bdp_sleep;

 struct bdi_writeback wb;
 struct list_head wb_list;





 wait_queue_head_t wb_waitq;

 struct device *dev;
 char dev_name[64];
 struct device *owner;

 struct timer_list laptop_mode_wb_timer;


 struct dentry *debug_dir;

};

struct wb_lock_cookie {
 bool locked;
 unsigned long flags;
};
# 275 "../include/linux/backing-dev-defs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool wb_tryget(struct bdi_writeback *wb)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wb_get(struct bdi_writeback *wb)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wb_put(struct bdi_writeback *wb)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wb_put_many(struct bdi_writeback *wb, unsigned long nr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool wb_dying(struct bdi_writeback *wb)
{
 return false;
}
# 13 "../include/linux/writeback.h" 2
# 1 "../include/linux/blk_types.h" 1
# 15 "../include/linux/blk_types.h"
struct bio_set;
struct bio;
struct bio_integrity_payload;
struct page;
struct io_context;
struct cgroup_subsys_state;
typedef void (bio_end_io_t) (struct bio *);
struct bio_crypt_ctx;
# 41 "../include/linux/blk_types.h"
struct block_device {
 sector_t bd_start_sect;
 sector_t bd_nr_sectors;
 struct gendisk * bd_disk;
 struct request_queue * bd_queue;
 struct disk_stats *bd_stats;
 unsigned long bd_stamp;
 atomic_t __bd_flags;
# 57 "../include/linux/blk_types.h"
 dev_t bd_dev;
 struct address_space *bd_mapping;

 atomic_t bd_openers;
 spinlock_t bd_size_lock;
 void * bd_claiming;
 void * bd_holder;
 const struct blk_holder_ops *bd_holder_ops;
 struct mutex bd_holder_lock;
 int bd_holders;
 struct kobject *bd_holder_dir;

 atomic_t bd_fsfreeze_count;
 struct mutex bd_fsfreeze_mutex;

 struct partition_meta_info *bd_meta_info;
 int bd_writers;




 struct device bd_device;
} ;
# 93 "../include/linux/blk_types.h"
typedef u8 blk_status_t;
typedef u16 blk_short_t;
# 182 "../include/linux/blk_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool blk_path_error(blk_status_t error)
{
 switch (error) {
 case (( blk_status_t)1):
 case (( blk_status_t)3):
 case (( blk_status_t)5):
 case (( blk_status_t)6):
 case (( blk_status_t)7):
 case (( blk_status_t)8):
  return false;
 }


 return true;
}

struct bio_issue {
 u64 value;
};

typedef __u32 blk_opf_t;

typedef unsigned int blk_qc_t;






struct bio {
 struct bio *bi_next;
 struct block_device *bi_bdev;
 blk_opf_t bi_opf;


 unsigned short bi_flags;
 unsigned short bi_ioprio;
 enum rw_hint bi_write_hint;
 blk_status_t bi_status;
 atomic_t __bi_remaining;

 struct bvec_iter bi_iter;

 union {

  blk_qc_t bi_cookie;

  unsigned int __bi_nr_segments;
 };
 bio_end_io_t *bi_end_io;
 void *bi_private;
# 251 "../include/linux/blk_types.h"
 union {



 };

 unsigned short bi_vcnt;





 unsigned short bi_max_vecs;

 atomic_t __bi_cnt;

 struct bio_vec *bi_io_vec;

 struct bio_set *bi_pool;






 struct bio_vec bi_inline_vecs[];
};







enum {
 BIO_PAGE_PINNED,
 BIO_CLONED,
 BIO_BOUNCED,
 BIO_QUIET,
 BIO_CHAIN,
 BIO_REFFED,
 BIO_BPS_THROTTLED,

 BIO_TRACE_COMPLETION,

 BIO_CGROUP_ACCT,
 BIO_QOS_THROTTLED,
 BIO_QOS_MERGED,
 BIO_REMAPPED,
 BIO_ZONE_WRITE_PLUGGING,
 BIO_EMULATES_ZONE_APPEND,
 BIO_FLAG_LAST
};

typedef __u32 blk_mq_req_flags_t;
# 324 "../include/linux/blk_types.h"
enum req_op {

 REQ_OP_READ = ( blk_opf_t)0,

 REQ_OP_WRITE = ( blk_opf_t)1,

 REQ_OP_FLUSH = ( blk_opf_t)2,

 REQ_OP_DISCARD = ( blk_opf_t)3,

 REQ_OP_SECURE_ERASE = ( blk_opf_t)5,

 REQ_OP_ZONE_APPEND = ( blk_opf_t)7,

 REQ_OP_WRITE_ZEROES = ( blk_opf_t)9,

 REQ_OP_ZONE_OPEN = ( blk_opf_t)10,

 REQ_OP_ZONE_CLOSE = ( blk_opf_t)11,

 REQ_OP_ZONE_FINISH = ( blk_opf_t)12,

 REQ_OP_ZONE_RESET = ( blk_opf_t)13,

 REQ_OP_ZONE_RESET_ALL = ( blk_opf_t)15,


 REQ_OP_DRV_IN = ( blk_opf_t)34,
 REQ_OP_DRV_OUT = ( blk_opf_t)35,

 REQ_OP_LAST = ( blk_opf_t)36,
};


enum req_flag_bits {
 __REQ_FAILFAST_DEV =
  8,
 __REQ_FAILFAST_TRANSPORT,
 __REQ_FAILFAST_DRIVER,
 __REQ_SYNC,
 __REQ_META,
 __REQ_PRIO,
 __REQ_NOMERGE,
 __REQ_IDLE,
 __REQ_INTEGRITY,
 __REQ_FUA,
 __REQ_PREFLUSH,
 __REQ_RAHEAD,
 __REQ_BACKGROUND,
 __REQ_NOWAIT,
 __REQ_POLLED,
 __REQ_ALLOC_CACHE,
 __REQ_SWAP,
 __REQ_DRV,
 __REQ_FS_PRIVATE,
 __REQ_ATOMIC,




 __REQ_NOUNMAP,

 __REQ_NR_BITS,
};
# 421 "../include/linux/blk_types.h"
enum stat_group {
 STAT_READ,
 STAT_WRITE,
 STAT_DISCARD,
 STAT_FLUSH,

 NR_STAT_GROUPS
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum req_op bio_op(const struct bio *bio)
{
 return bio->bi_opf & ( blk_opf_t)((1 << 8) - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool op_is_write(blk_opf_t op)
{
 return !!(op & ( blk_opf_t)1);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool op_is_flush(blk_opf_t op)
{
 return op & (( blk_opf_t)(1ULL << __REQ_FUA) | ( blk_opf_t)(1ULL << __REQ_PREFLUSH));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool op_is_sync(blk_opf_t op)
{
 return (op & ( blk_opf_t)((1 << 8) - 1)) == REQ_OP_READ ||
  (op & (( blk_opf_t)(1ULL << __REQ_SYNC) | ( blk_opf_t)(1ULL << __REQ_FUA) | ( blk_opf_t)(1ULL << __REQ_PREFLUSH)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool op_is_discard(blk_opf_t op)
{
 return (op & ( blk_opf_t)((1 << 8) - 1)) == REQ_OP_DISCARD;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool op_is_zone_mgmt(enum req_op op)
{
 switch (op & ( blk_opf_t)((1 << 8) - 1)) {
 case REQ_OP_ZONE_RESET:
 case REQ_OP_ZONE_OPEN:
 case REQ_OP_ZONE_CLOSE:
 case REQ_OP_ZONE_FINISH:
  return true;
 default:
  return false;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int op_stat_group(enum req_op op)
{
 if (op_is_discard(op))
  return STAT_DISCARD;
 return op_is_write(op);
}

struct blk_rq_stat {
 u64 mean;
 u64 min;
 u64 max;
 u32 nr_samples;
 u64 batch;
};
# 14 "../include/linux/writeback.h" 2
# 1 "../include/linux/pagevec.h" 1
# 17 "../include/linux/pagevec.h"
struct folio;
# 28 "../include/linux/pagevec.h"
struct folio_batch {
 unsigned char nr;
 unsigned char i;
 bool percpu_pvec_drained;
 struct folio *folios[31];
};







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_batch_init(struct folio_batch *fbatch)
{
 fbatch->nr = 0;
 fbatch->i = 0;
 fbatch->percpu_pvec_drained = false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_batch_reinit(struct folio_batch *fbatch)
{
 fbatch->nr = 0;
 fbatch->i = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int folio_batch_count(struct folio_batch *fbatch)
{
 return fbatch->nr;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int folio_batch_space(struct folio_batch *fbatch)
{
 return 31 - fbatch->nr;
}
# 74 "../include/linux/pagevec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned folio_batch_add(struct folio_batch *fbatch,
  struct folio *folio)
{
 fbatch->folios[fbatch->nr++] = folio;
 return folio_batch_space(fbatch);
}
# 89 "../include/linux/pagevec.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct folio *folio_batch_next(struct folio_batch *fbatch)
{
 if (fbatch->i == fbatch->nr)
  return ((void *)0);
 return fbatch->folios[fbatch->i++];
}

void __folio_batch_release(struct folio_batch *pvec);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_batch_release(struct folio_batch *fbatch)
{
 if (folio_batch_count(fbatch))
  __folio_batch_release(fbatch);
}

void folio_batch_remove_exceptionals(struct folio_batch *fbatch);
# 15 "../include/linux/writeback.h" 2

struct bio;

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_dirty_throttle_leaks; extern __attribute__((section(".data" ""))) __typeof__(int) dirty_throttle_leaks;
# 28 "../include/linux/writeback.h"
struct backing_dev_info;




enum writeback_sync_modes {
 WB_SYNC_NONE,
 WB_SYNC_ALL,
};






struct writeback_control {

 long nr_to_write;

 long pages_skipped;






 loff_t range_start;
 loff_t range_end;

 enum writeback_sync_modes sync_mode;

 unsigned for_kupdate:1;
 unsigned for_background:1;
 unsigned tagged_writepages:1;
 unsigned for_reclaim:1;
 unsigned range_cyclic:1;
 unsigned for_sync:1;
 unsigned unpinned_netfs_wb:1;







 unsigned no_cgroup_owner:1;






 struct swap_iocb **swap_plug;


 struct folio_batch fbatch;
 unsigned long index;
 int saved_err;
# 99 "../include/linux/writeback.h"
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) blk_opf_t wbc_to_write_flags(struct writeback_control *wbc)
{
 blk_opf_t flags = 0;

 if (wbc->sync_mode == WB_SYNC_ALL)
  flags |= ( blk_opf_t)(1ULL << __REQ_SYNC);
 else if (wbc->for_kupdate || wbc->for_background)
  flags |= ( blk_opf_t)(1ULL << __REQ_BACKGROUND);

 return flags;
}
# 127 "../include/linux/writeback.h"
struct wb_domain {
 spinlock_t lock;
# 147 "../include/linux/writeback.h"
 struct fprop_global completions;
 struct timer_list period_timer;
 unsigned long period_time;
# 161 "../include/linux/writeback.h"
 unsigned long dirty_limit_tstamp;
 unsigned long dirty_limit;
};
# 177 "../include/linux/writeback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wb_domain_size_changed(struct wb_domain *dom)
{
 spin_lock(&dom->lock);
 dom->dirty_limit_tstamp = jiffies;
 dom->dirty_limit = 0;
 spin_unlock(&dom->lock);
}




struct bdi_writeback;
void writeback_inodes_sb(struct super_block *, enum wb_reason reason);
void writeback_inodes_sb_nr(struct super_block *, unsigned long nr,
       enum wb_reason reason);
void try_to_writeback_inodes_sb(struct super_block *sb, enum wb_reason reason);
void sync_inodes_sb(struct super_block *);
void wakeup_flusher_threads(enum wb_reason reason);
void wakeup_flusher_threads_bdi(struct backing_dev_info *bdi,
    enum wb_reason reason);
void inode_wait_for_writeback(struct inode *inode);
void inode_io_list_del(struct inode *inode);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wait_on_inode(struct inode *inode)
{
 wait_on_bit(&inode->i_state, 3, 0x00000002);
}
# 294 "../include/linux/writeback.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_attach_wb(struct inode *inode, struct folio *folio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inode_detach_wb(struct inode *inode)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
            struct inode *inode)

{
 spin_unlock(&inode->i_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
            struct inode *inode)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wbc_detach_inode(struct writeback_control *wbc)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void wbc_account_cgroup_owner(struct writeback_control *wbc,
         struct page *page, size_t bytes)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void cgroup_writeback_umount(void)
{
}






void laptop_io_completion(struct backing_dev_info *info);
void laptop_sync_completion(void);
void laptop_mode_timer_fn(struct timer_list *t);
bool node_dirty_ok(struct pglist_data *pgdat);
int wb_domain_init(struct wb_domain *dom, gfp_t gfp);




extern struct wb_domain global_wb_domain;


extern unsigned int dirty_writeback_interval;
extern unsigned int dirty_expire_interval;
extern unsigned int dirtytime_expire_interval;
extern int laptop_mode;

int dirtytime_interval_handler(const struct ctl_table *table, int write,
  void *buffer, size_t *lenp, loff_t *ppos);

void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh);
unsigned long cgwb_calc_thresh(struct bdi_writeback *wb);

void wb_update_bandwidth(struct bdi_writeback *wb);




void balance_dirty_pages_ratelimited(struct address_space *mapping);
int balance_dirty_pages_ratelimited_flags(struct address_space *mapping,
  unsigned int flags);

bool wb_over_bg_thresh(struct bdi_writeback *wb);

struct folio *writeback_iter(struct address_space *mapping,
  struct writeback_control *wbc, struct folio *folio, int *error);

typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc,
    void *data);

int write_cache_pages(struct address_space *mapping,
        struct writeback_control *wbc, writepage_t writepage,
        void *data);
int do_writepages(struct address_space *mapping, struct writeback_control *wbc);
void writeback_set_ratelimit(void);
void tag_pages_for_writeback(struct address_space *mapping,
        unsigned long start, unsigned long end);

bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
bool folio_redirty_for_writepage(struct writeback_control *, struct folio *);
bool redirty_page_for_writepage(struct writeback_control *, struct page *);

void sb_mark_inode_writeback(struct inode *inode);
void sb_clear_inode_writeback(struct inode *inode);
# 24 "../include/linux/memcontrol.h" 2



struct mem_cgroup;
struct obj_cgroup;
struct page;
struct mm_struct;
struct kmem_cache;


enum memcg_stat_item {
 MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
 MEMCG_SOCK,
 MEMCG_PERCPU_B,
 MEMCG_VMALLOC,
 MEMCG_KMEM,
 MEMCG_ZSWAP_B,
 MEMCG_ZSWAPPED,
 MEMCG_NR_STAT,
};

enum memcg_memory_event {
 MEMCG_LOW,
 MEMCG_HIGH,
 MEMCG_MAX,
 MEMCG_OOM,
 MEMCG_OOM_KILL,
 MEMCG_OOM_GROUP_KILL,
 MEMCG_SWAP_HIGH,
 MEMCG_SWAP_MAX,
 MEMCG_SWAP_FAIL,
 MEMCG_NR_MEMORY_EVENTS,
};

struct mem_cgroup_reclaim_cookie {
 pg_data_t *pgdat;
 unsigned int generation;
};
# 347 "../include/linux/memcontrol.h"
enum objext_flags {

 OBJEXTS_ALLOC_FAIL = (1UL << 0),

 __NR_OBJEXTS_FLAGS = ((1UL << 0) << 1),
};
# 1070 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *folio_memcg(struct folio *folio)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
{
 ({ bool __ret_do_once = !!(!rcu_read_lock_held()); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/memcontrol.h", 1077, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *folio_memcg_check(struct folio *folio)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *page_memcg_check(struct page *page)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_memcg_kmem(struct folio *folio)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool PageMemcgKmem(struct page *page)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_is_root(struct mem_cgroup *memcg)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_disabled(void)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcg_memory_event(struct mem_cgroup *memcg,
          enum memcg_memory_event event)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcg_memory_event_mm(struct mm_struct *mm,
      enum memcg_memory_event event)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_protection(struct mem_cgroup *root,
      struct mem_cgroup *memcg,
      unsigned long *min,
      unsigned long *low)
{
 *min = *low = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_calculate_protection(struct mem_cgroup *root,
         struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_unprotected(struct mem_cgroup *target,
       struct mem_cgroup *memcg)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_below_low(struct mem_cgroup *target,
     struct mem_cgroup *memcg)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_below_min(struct mem_cgroup *target,
     struct mem_cgroup *memcg)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_commit_charge(struct folio *folio,
  struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mem_cgroup_charge(struct folio *folio,
  struct mm_struct *mm, gfp_t gfp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg,
  gfp_t gfp, long nr_pages)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mem_cgroup_swapin_charge_folio(struct folio *folio,
   struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_uncharge(struct folio *folio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_uncharge_folios(struct folio_batch *folios)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_cancel_charge(struct mem_cgroup *memcg,
  unsigned int nr_pages)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_replace_folio(struct folio *old,
  struct folio *new)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_migrate(struct folio *old, struct folio *new)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
            struct pglist_data *pgdat)
{
 return &pgdat->__lruvec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *folio_lruvec(struct folio *folio)
{
 struct pglist_data *pgdat = folio_pgdat(folio);
 return &pgdat->__lruvec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mm_match_cgroup(struct mm_struct *mm,
  struct mem_cgroup *memcg)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *get_mem_cgroup_from_current(void)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void obj_cgroup_put(struct obj_cgroup *objcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_tryget(struct mem_cgroup *memcg)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_put(struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *folio_lruvec_lock(struct folio *folio)
{
 struct pglist_data *pgdat = folio_pgdat(folio);

 spin_lock(&pgdat->__lruvec.lru_lock);
 return &pgdat->__lruvec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
{
 struct pglist_data *pgdat = folio_pgdat(folio);

 spin_lock_irq(&pgdat->__lruvec.lru_lock);
 return &pgdat->__lruvec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
  unsigned long *flagsp)
{
 struct pglist_data *pgdat = folio_pgdat(folio);

 do { do { ({ unsigned long __dummy; typeof(*flagsp) __dummy2; (void)(&__dummy == &__dummy2); 1; }); *flagsp = _raw_spin_lock_irqsave(spinlock_check(&pgdat->__lruvec.lru_lock)); } while (0); } while (0);
 return &pgdat->__lruvec;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *
mem_cgroup_iter(struct mem_cgroup *root,
  struct mem_cgroup *prev,
  struct mem_cgroup_reclaim_cookie *reclaim)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_iter_break(struct mem_cgroup *root,
      struct mem_cgroup *prev)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
  int (*fn)(struct task_struct *, void *), void *arg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
{
 ({ bool __ret_do_once = !!(id); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/memcontrol.h", 1317, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 return ((void *)0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long mem_cgroup_ino(struct mem_cgroup *memcg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino)
{
 return ((void *)0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_online(struct mem_cgroup *memcg)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
  enum lru_list lru, int zone_idx)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long mem_cgroup_size(struct mem_cgroup *memcg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_handle_over_high(gfp_t gfp_mask)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_get_oom_group(
 struct task_struct *victim, struct mem_cgroup *oom_domain)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mod_memcg_state(struct mem_cgroup *memcg,
         enum memcg_stat_item idx,
         int nr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mod_memcg_state(struct mem_cgroup *memcg,
       enum memcg_stat_item idx,
       int nr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mod_memcg_page_state(struct page *page,
     enum memcg_stat_item idx, int val)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long lruvec_page_state(struct lruvec *lruvec,
           enum node_stat_item idx)
{
 return global_node_page_state(idx);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long lruvec_page_state_local(struct lruvec *lruvec,
          enum node_stat_item idx)
{
 return global_node_page_state(idx);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
        int val)
{
 struct page *page = virt_to_head_page(p);

 __mod_node_page_state(page_pgdat(page), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
      int val)
{
 struct page *page = virt_to_head_page(p);

 __mod_node_page_state(page_pgdat(page), idx, val);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void count_memcg_events(struct mem_cgroup *memcg,
          enum vm_event_item idx,
          unsigned long count)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __count_memcg_events(struct mem_cgroup *memcg,
     enum vm_event_item idx,
     unsigned long count)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void count_memcg_folio_events(struct folio *folio,
  enum vm_event_item idx, unsigned long nr)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void split_page_memcg(struct page *head, int old_order, int new_order)
{
}






struct slabobj_ext {
} __attribute__((__aligned__(8)));

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
{
 __mod_lruvec_kmem_state(p, idx, 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dec_lruvec_kmem_state(void *p, enum node_stat_item idx)
{
 __mod_lruvec_kmem_state(p, idx, -1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *parent_lruvec(struct lruvec *lruvec)
{
 struct mem_cgroup *memcg;

 memcg = lruvec_memcg(lruvec);
 if (!memcg)
  return ((void *)0);
 memcg = parent_mem_cgroup(memcg);
 if (!memcg)
  return ((void *)0);
 return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unlock_page_lruvec(struct lruvec *lruvec)
{
 spin_unlock(&lruvec->lru_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unlock_page_lruvec_irq(struct lruvec *lruvec)
{
 spin_unlock_irq(&lruvec->lru_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
  unsigned long flags)
{
 spin_unlock_irqrestore(&lruvec->lru_lock, flags);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool folio_matches_lruvec(struct folio *folio,
  struct lruvec *lruvec)
{
 return lruvec_pgdat(lruvec) == folio_pgdat(folio) &&
        lruvec_memcg(lruvec) == folio_memcg(folio);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
  struct lruvec *locked_lruvec)
{
 if (locked_lruvec) {
  if (folio_matches_lruvec(folio, locked_lruvec))
   return locked_lruvec;

  unlock_page_lruvec_irq(locked_lruvec);
 }

 return folio_lruvec_lock_irq(folio);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_lruvec_relock_irqsave(struct folio *folio,
  struct lruvec **lruvecp, unsigned long *flags)
{
 if (*lruvecp) {
  if (folio_matches_lruvec(folio, *lruvecp))
   return;

  unlock_page_lruvec_irqrestore(*lruvecp, *flags);
 }

 *lruvecp = folio_lruvec_lock_irqsave(folio, flags);
}
# 1590 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_wb_stats(struct bdi_writeback *wb,
           unsigned long *pfilepages,
           unsigned long *pheadroom,
           unsigned long *pdirty,
           unsigned long *pwriteback)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_track_foreign_dirty(struct folio *folio,
        struct bdi_writeback *wb)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_flush_foreign(struct bdi_writeback *wb)
{
}



struct sock;
bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
        gfp_t gfp_mask);
void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages);
# 1642 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_sk_alloc(struct sock *sk) { };
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_sk_free(struct sock *sk) { };
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void set_shrinker_bit(struct mem_cgroup *memcg,
        int nid, int shrinker_id)
{
}
# 1738 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_kmem_disabled(void)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int memcg_kmem_charge_page(struct page *page, gfp_t gfp,
      int order)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void memcg_kmem_uncharge_page(struct page *page, int order)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __memcg_kmem_charge_page(struct page *page, gfp_t gfp,
        int order)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __memcg_kmem_uncharge_page(struct page *page, int order)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct obj_cgroup *get_obj_cgroup_from_folio(struct folio *folio)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool memcg_bpf_enabled(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool memcg_kmem_online(void)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int memcg_kmem_id(struct mem_cgroup *memcg)
{
 return -1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void count_objcg_event(struct obj_cgroup *objcg,
         enum vm_event_item idx)
{
}
# 1806 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
{
 return true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void obj_cgroup_charge_zswap(struct obj_cgroup *objcg,
        size_t size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg,
          size_t size)
{
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg)
{

 return true;
}
# 1873 "../include/linux/memcontrol.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
     gfp_t gfp_mask,
     unsigned long *total_scanned)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_memcg_lock(struct folio *folio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void folio_memcg_unlock(struct folio *folio)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
{

 rcu_read_lock();
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_unlock_pages(void)
{
 rcu_read_unlock();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool task_in_memcg_oom(struct task_struct *p)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool mem_cgroup_oom_synchronize(bool wait)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_enter_user_fault(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mem_cgroup_exit_user_fault(void)
{
}
# 32 "../include/linux/bpf.h" 2
# 1 "../include/linux/cfi.h" 1
# 12 "../include/linux/cfi.h"
# 1 "./arch/hexagon/include/generated/asm/cfi.h" 1
# 1 "../include/asm-generic/cfi.h" 1
# 2 "./arch/hexagon/include/generated/asm/cfi.h" 2
# 13 "../include/linux/cfi.h" 2


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int cfi_get_offset(void)
{
 return 0;
}
# 35 "../include/linux/cfi.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_cfi_trap(unsigned long addr) { return false; }
# 33 "../include/linux/bpf.h" 2

struct bpf_verifier_env;
struct bpf_verifier_log;
struct perf_event;
struct bpf_prog;
struct bpf_prog_aux;
struct bpf_map;
struct bpf_arena;
struct sock;
struct seq_file;
struct btf;
struct btf_type;
struct exception_table_entry;
struct seq_operations;
struct bpf_iter_aux_info;
struct bpf_local_storage;
struct bpf_local_storage_map;
struct kobject;
struct mem_cgroup;
struct module;
struct bpf_func_state;
struct ftrace_ops;
struct cgroup;
struct bpf_token;
struct user_namespace;
struct super_block;
struct inode;

extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
extern struct kobject *btf_kobj;
extern struct bpf_mem_alloc bpf_global_ma, bpf_global_percpu_ma;
extern bool bpf_global_ma_set;

typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
     struct bpf_iter_aux_info *aux);
typedef void (*bpf_iter_fini_seq_priv_t)(void *private_data);
typedef unsigned int (*bpf_func_t)(const void *,
       const struct bpf_insn *);
struct bpf_iter_seq_info {
 const struct seq_operations *seq_ops;
 bpf_iter_init_seq_priv_t init_seq_private;
 bpf_iter_fini_seq_priv_t fini_seq_private;
 u32 seq_priv_size;
};


struct bpf_map_ops {

 int (*map_alloc_check)(union bpf_attr *attr);
 struct bpf_map *(*map_alloc)(union bpf_attr *attr);
 void (*map_release)(struct bpf_map *map, struct file *map_file);
 void (*map_free)(struct bpf_map *map);
 int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key);
 void (*map_release_uref)(struct bpf_map *map);
 void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key);
 int (*map_lookup_batch)(struct bpf_map *map, const union bpf_attr *attr,
    union bpf_attr *uattr);
 int (*map_lookup_and_delete_elem)(struct bpf_map *map, void *key,
       void *value, u64 flags);
 int (*map_lookup_and_delete_batch)(struct bpf_map *map,
        const union bpf_attr *attr,
        union bpf_attr *uattr);
 int (*map_update_batch)(struct bpf_map *map, struct file *map_file,
    const union bpf_attr *attr,
    union bpf_attr *uattr);
 int (*map_delete_batch)(struct bpf_map *map, const union bpf_attr *attr,
    union bpf_attr *uattr);


 void *(*map_lookup_elem)(struct bpf_map *map, void *key);
 long (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags);
 long (*map_delete_elem)(struct bpf_map *map, void *key);
 long (*map_push_elem)(struct bpf_map *map, void *value, u64 flags);
 long (*map_pop_elem)(struct bpf_map *map, void *value);
 long (*map_peek_elem)(struct bpf_map *map, void *value);
 void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu);


 void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
    int fd);




 void (*map_fd_put_ptr)(struct bpf_map *map, void *ptr, bool need_defer);
 int (*map_gen_lookup)(struct bpf_map *map, struct bpf_insn *insn_buf);
 u32 (*map_fd_sys_lookup_elem)(void *ptr);
 void (*map_seq_show_elem)(struct bpf_map *map, void *key,
      struct seq_file *m);
 int (*map_check_btf)(const struct bpf_map *map,
        const struct btf *btf,
        const struct btf_type *key_type,
        const struct btf_type *value_type);


 int (*map_poke_track)(struct bpf_map *map, struct bpf_prog_aux *aux);
 void (*map_poke_untrack)(struct bpf_map *map, struct bpf_prog_aux *aux);
 void (*map_poke_run)(struct bpf_map *map, u32 key, struct bpf_prog *old,
        struct bpf_prog *new);


 int (*map_direct_value_addr)(const struct bpf_map *map,
         u64 *imm, u32 off);
 int (*map_direct_value_meta)(const struct bpf_map *map,
         u64 imm, u32 *off);
 int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
 __poll_t (*map_poll)(struct bpf_map *map, struct file *filp,
        struct poll_table_struct *pts);
 unsigned long (*map_get_unmapped_area)(struct file *filep, unsigned long addr,
            unsigned long len, unsigned long pgoff,
            unsigned long flags);


 int (*map_local_storage_charge)(struct bpf_local_storage_map *smap,
     void *owner, u32 size);
 void (*map_local_storage_uncharge)(struct bpf_local_storage_map *smap,
        void *owner, u32 size);
 struct bpf_local_storage ** (*map_owner_storage_ptr)(void *owner);


 long (*map_redirect)(struct bpf_map *map, u64 key, u64 flags);
# 166 "../include/linux/bpf.h"
 bool (*map_meta_equal)(const struct bpf_map *meta0,
          const struct bpf_map *meta1);


 int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env,
           struct bpf_func_state *caller,
           struct bpf_func_state *callee);
 long (*map_for_each_callback)(struct bpf_map *map,
         bpf_callback_t callback_fn,
         void *callback_ctx, u64 flags);

 u64 (*map_mem_usage)(const struct bpf_map *map);


 int *map_btf_id;


 const struct bpf_iter_seq_info *iter_seq_info;
};

enum {

 BTF_FIELDS_MAX = 11,
};

enum btf_field_type {
 BPF_SPIN_LOCK = (1 << 0),
 BPF_TIMER = (1 << 1),
 BPF_KPTR_UNREF = (1 << 2),
 BPF_KPTR_REF = (1 << 3),
 BPF_KPTR_PERCPU = (1 << 4),
 BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF | BPF_KPTR_PERCPU,
 BPF_LIST_HEAD = (1 << 5),
 BPF_LIST_NODE = (1 << 6),
 BPF_RB_ROOT = (1 << 7),
 BPF_RB_NODE = (1 << 8),
 BPF_GRAPH_NODE = BPF_RB_NODE | BPF_LIST_NODE,
 BPF_GRAPH_ROOT = BPF_RB_ROOT | BPF_LIST_HEAD,
 BPF_REFCOUNT = (1 << 9),
 BPF_WORKQUEUE = (1 << 10),
};
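The enumerators above are single-bit flags, so composites like `BPF_KPTR` and `BPF_GRAPH_ROOT` are plain ORs, and membership tests (as in `btf_record_has_field()` further down, which checks `rec->field_mask & type`) are plain ANDs. A sketch using a copied subset of the values:

```c
#include <assert.h>

/* Subset of the flag values from enum btf_field_type above. */
enum btf_field_type {
	BPF_SPIN_LOCK   = (1 << 0),
	BPF_KPTR_UNREF  = (1 << 2),
	BPF_KPTR_REF    = (1 << 3),
	BPF_KPTR_PERCPU = (1 << 4),
	BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF | BPF_KPTR_PERCPU,
	BPF_LIST_HEAD   = (1 << 5),
	BPF_RB_ROOT     = (1 << 7),
	BPF_GRAPH_ROOT  = BPF_RB_ROOT | BPF_LIST_HEAD,
};

/* Membership test in the style of btf_record_has_field(): a composite
 * mask matches if any of its bits is present in field_mask. */
static int mask_has(unsigned int field_mask, enum btf_field_type type)
{
	return (field_mask & type) != 0;
}
```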

typedef void (*btf_dtor_kfunc_t)(void *);

struct btf_field_kptr {
 struct btf *btf;
 struct module *module;



 btf_dtor_kfunc_t dtor;
 u32 btf_id;
};

struct btf_field_graph_root {
 struct btf *btf;
 u32 value_btf_id;
 u32 node_offset;
 struct btf_record *value_rec;
};

struct btf_field {
 u32 offset;
 u32 size;
 enum btf_field_type type;
 union {
  struct btf_field_kptr kptr;
  struct btf_field_graph_root graph_root;
 };
};

struct btf_record {
 u32 cnt;
 u32 field_mask;
 int spin_lock_off;
 int timer_off;
 int wq_off;
 int refcount_off;
 struct btf_field fields[];
};


struct bpf_rb_node_kern {
 struct rb_node rb_node;
 void *owner;
} __attribute__((aligned(8)));


struct bpf_list_node_kern {
 struct list_head list_head;
 void *owner;
} __attribute__((aligned(8)));

struct bpf_map {
 const struct bpf_map_ops *ops;
 struct bpf_map *inner_map_meta;



 enum bpf_map_type map_type;
 u32 key_size;
 u32 value_size;
 u32 max_entries;
 u64 map_extra;
 u32 map_flags;
 u32 id;
 struct btf_record *record;
 int numa_node;
 u32 btf_key_type_id;
 u32 btf_value_type_id;
 u32 btf_vmlinux_value_type_id;
 struct btf *btf;



 char name[16U];
 struct mutex freeze_mutex;
 atomic64_t refcnt;
 atomic64_t usercnt;

 union {
  struct work_struct work;
  struct callback_head rcu;
 };
 atomic64_t writecnt;





 struct {
  spinlock_t lock;
  enum bpf_prog_type type;
  bool jited;
  bool xdp_has_frags;
 } owner;
 bool bypass_spec_v1;
 bool frozen;
 bool free_after_mult_rcu_gp;
 bool free_after_rcu_gp;
 atomic64_t sleepable_refcnt;
 s64 *elem_count;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *btf_field_type_name(enum btf_field_type type)
{
 switch (type) {
 case BPF_SPIN_LOCK:
  return "bpf_spin_lock";
 case BPF_TIMER:
  return "bpf_timer";
 case BPF_WORKQUEUE:
  return "bpf_wq";
 case BPF_KPTR_UNREF:
 case BPF_KPTR_REF:
  return "kptr";
 case BPF_KPTR_PERCPU:
  return "percpu_kptr";
 case BPF_LIST_HEAD:
  return "bpf_list_head";
 case BPF_LIST_NODE:
  return "bpf_list_node";
 case BPF_RB_ROOT:
  return "bpf_rb_root";
 case BPF_RB_NODE:
  return "bpf_rb_node";
 case BPF_REFCOUNT:
  return "bpf_refcount";
 default:
  ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bpf.h", 335, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return "unknown";
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 btf_field_type_size(enum btf_field_type type)
{
 switch (type) {
 case BPF_SPIN_LOCK:
  return sizeof(struct bpf_spin_lock);
 case BPF_TIMER:
  return sizeof(struct bpf_timer);
 case BPF_WORKQUEUE:
  return sizeof(struct bpf_wq);
 case BPF_KPTR_UNREF:
 case BPF_KPTR_REF:
 case BPF_KPTR_PERCPU:
  return sizeof(u64);
 case BPF_LIST_HEAD:
  return sizeof(struct bpf_list_head);
 case BPF_LIST_NODE:
  return sizeof(struct bpf_list_node);
 case BPF_RB_ROOT:
  return sizeof(struct bpf_rb_root);
 case BPF_RB_NODE:
  return sizeof(struct bpf_rb_node);
 case BPF_REFCOUNT:
  return sizeof(struct bpf_refcount);
 default:
  ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bpf.h", 364, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 btf_field_type_align(enum btf_field_type type)
{
 switch (type) {
 case BPF_SPIN_LOCK:
  return __alignof__(struct bpf_spin_lock);
 case BPF_TIMER:
  return __alignof__(struct bpf_timer);
 case BPF_WORKQUEUE:
  return __alignof__(struct bpf_wq);
 case BPF_KPTR_UNREF:
 case BPF_KPTR_REF:
 case BPF_KPTR_PERCPU:
  return __alignof__(u64);
 case BPF_LIST_HEAD:
  return __alignof__(struct bpf_list_head);
 case BPF_LIST_NODE:
  return __alignof__(struct bpf_list_node);
 case BPF_RB_ROOT:
  return __alignof__(struct bpf_rb_root);
 case BPF_RB_NODE:
  return __alignof__(struct bpf_rb_node);
 case BPF_REFCOUNT:
  return __alignof__(struct bpf_refcount);
 default:
  ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bpf.h", 393, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_obj_init_field(const struct btf_field *field, void *addr)
{
 memset(addr, 0, field->size);

 switch (field->type) {
 case BPF_REFCOUNT:
  refcount_set((refcount_t *)addr, 1);
  break;
 case BPF_RB_NODE:
  (((struct rb_node *)addr)->__rb_parent_color = (unsigned long)((struct rb_node *)addr));
  break;
 case BPF_LIST_HEAD:
 case BPF_LIST_NODE:
  INIT_LIST_HEAD((struct list_head *)addr);
  break;
 case BPF_RB_ROOT:

 case BPF_SPIN_LOCK:
 case BPF_TIMER:
 case BPF_WORKQUEUE:
 case BPF_KPTR_UNREF:
 case BPF_KPTR_REF:
 case BPF_KPTR_PERCPU:
  break;
 default:
  ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/bpf.h", 423, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
  return;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool btf_record_has_field(const struct btf_record *rec, enum btf_field_type type)
{
 if (IS_ERR_OR_NULL(rec))
  return false;
 return rec->field_mask & type;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_obj_init(const struct btf_record *rec, void *obj)
{
 int i;

 if (IS_ERR_OR_NULL(rec))
  return;
 for (i = 0; i < rec->cnt; i++)
  bpf_obj_init_field(&rec->fields[i], obj + rec->fields[i].offset);
}
# 452 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void check_and_init_map_value(struct bpf_map *map, void *dst)
{
 bpf_obj_init(map->record, dst);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_long_memcpy(void *dst, const void *src, u32 size)
{
 const long *lsrc = src;
 long *ldst = dst;

 size /= sizeof(long);
 while (size--)
  ({ __kcsan_disable_current(); __auto_type __v = (*ldst++ = *lsrc++); __kcsan_enable_current(); __v; });
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_obj_memcpy(struct btf_record *rec,
      void *dst, void *src, u32 size,
      bool long_memcpy)
{
 u32 curr_off = 0;
 int i;

 if (IS_ERR_OR_NULL(rec)) {
  if (long_memcpy)
   bpf_long_memcpy(dst, src, ((((size)-1) | ((__typeof__(size))((8)-1)))+1));
  else
   memcpy(dst, src, size);
  return;
 }

 for (i = 0; i < rec->cnt; i++) {
  u32 next_off = rec->fields[i].offset;
  u32 sz = next_off - curr_off;

  memcpy(dst + curr_off, src + curr_off, sz);
  curr_off += rec->fields[i].size + sz;
 }
 memcpy(dst + curr_off, src + curr_off, size - curr_off);
}
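The loop in `bpf_obj_memcpy()` above copies the value while stepping over each special field's byte range: it copies the gap before the field, then advances `curr_off` past the field itself (`curr_off += rec->fields[i].size + sz`), and finishes with the tail after the last field. A standalone sketch of just that copy pattern (the reduced `struct field` here is a stand-in for `struct btf_field`, keeping only `offset` and `size`):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct btf_field: only offset and size matter here. */
struct field { unsigned int offset, size; };

/* Copy size bytes from src to dst, skipping each field's byte range,
 * mirroring the loop in bpf_obj_memcpy(): copy the gap before the
 * field, then jump curr_off past the field. Fields must be sorted by
 * offset and non-overlapping, as btf_record guarantees. */
static void obj_memcpy(const struct field *fields, int cnt,
		       char *dst, const char *src, unsigned int size)
{
	unsigned int curr_off = 0;
	int i;

	for (i = 0; i < cnt; i++) {
		unsigned int next_off = fields[i].offset;
		unsigned int sz = next_off - curr_off;

		memcpy(dst + curr_off, src + curr_off, sz);
		curr_off += fields[i].size + sz;
	}
	memcpy(dst + curr_off, src + curr_off, size - curr_off);
}

/* One 2-byte field at offset 4 in an 8-byte value: bytes 4..5 must be
 * left untouched in dst while everything else is copied. */
static int demo(void)
{
	struct field f[] = { { 4, 2 } };
	char src[8], dst[8];

	memset(src, 'A', 8);
	memset(dst, 0, 8);
	obj_memcpy(f, 1, dst, src, 8);
	return dst[3] == 'A' && dst[4] == 0 && dst[5] == 0 && dst[6] == 'A';
}
```

`bpf_obj_memzero()` below follows the identical skip pattern with `memset` in place of `memcpy`.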

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_map_value(struct bpf_map *map, void *dst, void *src)
{
 bpf_obj_memcpy(map->record, dst, src, map->value_size, false);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void copy_map_value_long(struct bpf_map *map, void *dst, void *src)
{
 bpf_obj_memcpy(map->record, dst, src, map->value_size, true);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_obj_memzero(struct btf_record *rec, void *dst, u32 size)
{
 u32 curr_off = 0;
 int i;

 if (IS_ERR_OR_NULL(rec)) {
  memset(dst, 0, size);
  return;
 }

 for (i = 0; i < rec->cnt; i++) {
  u32 next_off = rec->fields[i].offset;
  u32 sz = next_off - curr_off;

  memset(dst + curr_off, 0, sz);
  curr_off += rec->fields[i].size + sz;
 }
 memset(dst + curr_off, 0, size - curr_off);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void zero_map_value(struct bpf_map *map, void *dst)
{
 bpf_obj_memzero(map->record, dst, map->value_size);
}

void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
      bool lock_src);
void bpf_timer_cancel_and_free(void *timer);
void bpf_wq_cancel_and_free(void *timer);
void bpf_list_head_free(const struct btf_field *field, void *list_head,
   struct bpf_spin_lock *spin_lock);
void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
        struct bpf_spin_lock *spin_lock);
u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena);
u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena);
int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size);

struct bpf_offload_dev;
struct bpf_offloaded_map;

struct bpf_map_dev_ops {
 int (*map_get_next_key)(struct bpf_offloaded_map *map,
    void *key, void *next_key);
 int (*map_lookup_elem)(struct bpf_offloaded_map *map,
          void *key, void *value);
 int (*map_update_elem)(struct bpf_offloaded_map *map,
          void *key, void *value, u64 flags);
 int (*map_delete_elem)(struct bpf_offloaded_map *map, void *key);
};

struct bpf_offloaded_map {
 struct bpf_map map;
 struct net_device *netdev;
 const struct bpf_map_dev_ops *dev_ops;
 void *dev_priv;
 struct list_head offloads;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct bpf_offloaded_map *map_to_offmap(struct bpf_map *map)
{
 return ({ void *__mptr = (void *)(map); _Static_assert(__builtin_types_compatible_p(typeof(*(map)), typeof(((struct bpf_offloaded_map *)0)->map)) || __builtin_types_compatible_p(typeof(*(map)), typeof(void)), "pointer type mismatch in container_of()"); ((struct bpf_offloaded_map *)(__mptr - __builtin_offsetof(struct bpf_offloaded_map, map))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_map_offload_neutral(const struct bpf_map *map)
{
 return map->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_map_support_seq_show(const struct bpf_map *map)
{
 return (map->btf_value_type_id || map->btf_vmlinux_value_type_id) &&
  map->ops->map_seq_show_elem;
}

int map_check_no_btf(const struct bpf_map *map,
       const struct btf *btf,
       const struct btf_type *key_type,
       const struct btf_type *value_type);

bool bpf_map_meta_equal(const struct bpf_map *meta0,
   const struct bpf_map *meta1);

extern const struct bpf_map_ops bpf_map_offload_ops;
# 603 "../include/linux/bpf.h"
enum bpf_type_flag {

 PTR_MAYBE_NULL = ((((1UL))) << (0 + 8)),




 MEM_RDONLY = ((((1UL))) << (1 + 8)),


 MEM_RINGBUF = ((((1UL))) << (2 + 8)),


 MEM_USER = ((((1UL))) << (3 + 8)),







 MEM_PERCPU = ((((1UL))) << (4 + 8)),


 OBJ_RELEASE = ((((1UL))) << (5 + 8)),







 PTR_UNTRUSTED = ((((1UL))) << (6 + 8)),

 MEM_UNINIT = ((((1UL))) << (7 + 8)),


 DYNPTR_TYPE_LOCAL = ((((1UL))) << (8 + 8)),


 DYNPTR_TYPE_RINGBUF = ((((1UL))) << (9 + 8)),


 MEM_FIXED_SIZE = ((((1UL))) << (10 + 8)),




 MEM_ALLOC = ((((1UL))) << (11 + 8)),
# 680 "../include/linux/bpf.h"
 PTR_TRUSTED = ((((1UL))) << (12 + 8)),


 MEM_RCU = ((((1UL))) << (13 + 8)),





 NON_OWN_REF = ((((1UL))) << (14 + 8)),


 DYNPTR_TYPE_SKB = ((((1UL))) << (15 + 8)),


 DYNPTR_TYPE_XDP = ((((1UL))) << (16 + 8)),

 __BPF_TYPE_FLAG_MAX,
 __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
};
# 711 "../include/linux/bpf.h"
enum bpf_arg_type {
 ARG_DONTCARE = 0,




 ARG_CONST_MAP_PTR,
 ARG_PTR_TO_MAP_KEY,
 ARG_PTR_TO_MAP_VALUE,




 ARG_PTR_TO_MEM,
 ARG_PTR_TO_ARENA,

 ARG_CONST_SIZE,
 ARG_CONST_SIZE_OR_ZERO,

 ARG_PTR_TO_CTX,
 ARG_ANYTHING,
 ARG_PTR_TO_SPIN_LOCK,
 ARG_PTR_TO_SOCK_COMMON,
 ARG_PTR_TO_INT,
 ARG_PTR_TO_LONG,
 ARG_PTR_TO_SOCKET,
 ARG_PTR_TO_BTF_ID,
 ARG_PTR_TO_RINGBUF_MEM,
 ARG_CONST_ALLOC_SIZE_OR_ZERO,
 ARG_PTR_TO_BTF_ID_SOCK_COMMON,
 ARG_PTR_TO_PERCPU_BTF_ID,
 ARG_PTR_TO_FUNC,
 ARG_PTR_TO_STACK,
 ARG_PTR_TO_CONST_STR,
 ARG_PTR_TO_TIMER,
 ARG_PTR_TO_KPTR,
 ARG_PTR_TO_DYNPTR,
 __BPF_ARG_TYPE_MAX,


 ARG_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MAP_VALUE,
 ARG_PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MEM,
 ARG_PTR_TO_CTX_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_CTX,
 ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET,
 ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
 ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID,



 ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | ARG_PTR_TO_MEM,

 ARG_PTR_TO_FIXED_SIZE_MEM = MEM_FIXED_SIZE | ARG_PTR_TO_MEM,




 __BPF_ARG_TYPE_LIMIT = (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1)),
};
_Static_assert(__BPF_ARG_TYPE_MAX <= (1UL << 8), "__BPF_ARG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT");


enum bpf_return_type {
 RET_INTEGER,
 RET_VOID,
 RET_PTR_TO_MAP_VALUE,
 RET_PTR_TO_SOCKET,
 RET_PTR_TO_TCP_SOCK,
 RET_PTR_TO_SOCK_COMMON,
 RET_PTR_TO_MEM,
 RET_PTR_TO_MEM_OR_BTF_ID,
 RET_PTR_TO_BTF_ID,
 __BPF_RET_TYPE_MAX,


 RET_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MAP_VALUE,
 RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET,
 RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK,
 RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON,
 RET_PTR_TO_RINGBUF_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_RINGBUF | RET_PTR_TO_MEM,
 RET_PTR_TO_DYNPTR_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MEM,
 RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID,
 RET_PTR_TO_BTF_ID_TRUSTED = PTR_TRUSTED | RET_PTR_TO_BTF_ID,




 __BPF_RET_TYPE_LIMIT = (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1)),
};
_Static_assert(__BPF_RET_TYPE_MAX <= (1UL << 8), "__BPF_RET_TYPE_MAX <= BPF_BASE_TYPE_LIMIT");





struct bpf_func_proto {
 u64 (*func)(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 bool gpl_only;
 bool pkt_access;
 bool might_sleep;
 enum bpf_return_type ret_type;
 union {
  struct {
   enum bpf_arg_type arg1_type;
   enum bpf_arg_type arg2_type;
   enum bpf_arg_type arg3_type;
   enum bpf_arg_type arg4_type;
   enum bpf_arg_type arg5_type;
  };
  enum bpf_arg_type arg_type[5];
 };
 union {
  struct {
   u32 *arg1_btf_id;
   u32 *arg2_btf_id;
   u32 *arg3_btf_id;
   u32 *arg4_btf_id;
   u32 *arg5_btf_id;
  };
  u32 *arg_btf_id[5];
  struct {
   size_t arg1_size;
   size_t arg2_size;
   size_t arg3_size;
   size_t arg4_size;
   size_t arg5_size;
  };
  size_t arg_size[5];
 };
 int *ret_btf_id;
 bool (*allowed)(const struct bpf_prog *prog);
};





struct bpf_context;

enum bpf_access_type {
 BPF_READ = 1,
 BPF_WRITE = 2
};
# 864 "../include/linux/bpf.h"
enum bpf_reg_type {
 NOT_INIT = 0,
 SCALAR_VALUE,
 PTR_TO_CTX,
 CONST_PTR_TO_MAP,
 PTR_TO_MAP_VALUE,
 PTR_TO_MAP_KEY,
 PTR_TO_STACK,
 PTR_TO_PACKET_META,
 PTR_TO_PACKET,
 PTR_TO_PACKET_END,
 PTR_TO_FLOW_KEYS,
 PTR_TO_SOCKET,
 PTR_TO_SOCK_COMMON,
 PTR_TO_TCP_SOCK,
 PTR_TO_TP_BUFFER,
 PTR_TO_XDP_SOCK,
# 891 "../include/linux/bpf.h"
 PTR_TO_BTF_ID,




 PTR_TO_MEM,
 PTR_TO_ARENA,
 PTR_TO_BUF,
 PTR_TO_FUNC,
 CONST_PTR_TO_DYNPTR,
 __BPF_REG_TYPE_MAX,


 PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | PTR_TO_MAP_VALUE,
 PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCKET,
 PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCK_COMMON,
 PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | PTR_TO_TCP_SOCK,
 PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | PTR_TO_BTF_ID,




 __BPF_REG_TYPE_LIMIT = (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1)),
};
_Static_assert(__BPF_REG_TYPE_MAX <= (1UL << 8), "__BPF_REG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT");




struct bpf_insn_access_aux {
 enum bpf_reg_type reg_type;
 union {
  int ctx_field_size;
  struct {
   struct btf *btf;
   u32 btf_id;
  };
 };
 struct bpf_verifier_log *log;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
bpf_ctx_record_field_size(struct bpf_insn_access_aux *aux, u32 size)
{
 aux->ctx_field_size = size;
}

static bool bpf_is_ldimm64(const struct bpf_insn *insn)
{
 return insn->code == (0x00 | 0x00 | 0x18);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_pseudo_func(const struct bpf_insn *insn)
{
 return bpf_is_ldimm64(insn) && insn->src_reg == 4;
}

struct bpf_prog_ops {
 int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr,
   union bpf_attr *uattr);
};

struct bpf_reg_state;
struct bpf_verifier_ops {

 const struct bpf_func_proto *
 (*get_func_proto)(enum bpf_func_id func_id,
     const struct bpf_prog *prog);




 bool (*is_valid_access)(int off, int size, enum bpf_access_type type,
    const struct bpf_prog *prog,
    struct bpf_insn_access_aux *info);
 int (*gen_prologue)(struct bpf_insn *insn, bool direct_write,
       const struct bpf_prog *prog);
 int (*gen_ld_abs)(const struct bpf_insn *orig,
     struct bpf_insn *insn_buf);
 u32 (*convert_ctx_access)(enum bpf_access_type type,
      const struct bpf_insn *src,
      struct bpf_insn *dst,
      struct bpf_prog *prog, u32 *target_size);
 int (*btf_struct_access)(struct bpf_verifier_log *log,
     const struct bpf_reg_state *reg,
     int off, int size);
};

struct bpf_prog_offload_ops {

 int (*insn_hook)(struct bpf_verifier_env *env,
    int insn_idx, int prev_insn_idx);
 int (*finalize)(struct bpf_verifier_env *env);

 int (*replace_insn)(struct bpf_verifier_env *env, u32 off,
       struct bpf_insn *insn);
 int (*remove_insns)(struct bpf_verifier_env *env, u32 off, u32 cnt);

 int (*prepare)(struct bpf_prog *prog);
 int (*translate)(struct bpf_prog *prog);
 void (*destroy)(struct bpf_prog *prog);
};

struct bpf_prog_offload {
 struct bpf_prog *prog;
 struct net_device *netdev;
 struct bpf_offload_dev *offdev;
 void *dev_priv;
 struct list_head offloads;
 bool dev_state;
 bool opt_failed;
 void *jited_image;
 u32 jited_len;
};

enum bpf_cgroup_storage_type {
 BPF_CGROUP_STORAGE_SHARED,
 BPF_CGROUP_STORAGE_PERCPU,
 __BPF_CGROUP_STORAGE_MAX
};
# 1030 "../include/linux/bpf.h"
struct btf_func_model {
 u8 ret_size;
 u8 ret_flags;
 u8 nr_args;
 u8 arg_size[12];
 u8 arg_flags[12];
};
# 1087 "../include/linux/bpf.h"
enum {



 BPF_MAX_TRAMP_LINKS = 38,

};

struct bpf_tramp_links {
 struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
 int nr_links;
};

struct bpf_tramp_run_ctx;
# 1122 "../include/linux/bpf.h"
struct bpf_tramp_image;
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
    const struct btf_func_model *m, u32 flags,
    struct bpf_tramp_links *tlinks,
    void *func_addr);
void *arch_alloc_bpf_trampoline(unsigned int size);
void arch_free_bpf_trampoline(void *image, unsigned int size);
int __attribute__((__warn_unused_result__)) arch_protect_bpf_trampoline(void *image, unsigned int size);
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
        struct bpf_tramp_links *tlinks, void *func_addr);

u64 __attribute__((__no_instrument_function__)) __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
          struct bpf_tramp_run_ctx *run_ctx);
void __attribute__((__no_instrument_function__)) __bpf_prog_exit_sleepable_recur(struct bpf_prog *prog, u64 start,
          struct bpf_tramp_run_ctx *run_ctx);
void __attribute__((__no_instrument_function__)) __bpf_tramp_enter(struct bpf_tramp_image *tr);
void __attribute__((__no_instrument_function__)) __bpf_tramp_exit(struct bpf_tramp_image *tr);
typedef u64 (*bpf_trampoline_enter_t)(struct bpf_prog *prog,
          struct bpf_tramp_run_ctx *run_ctx);
typedef void (*bpf_trampoline_exit_t)(struct bpf_prog *prog, u64 start,
          struct bpf_tramp_run_ctx *run_ctx);
bpf_trampoline_enter_t bpf_trampoline_enter(const struct bpf_prog *prog);
bpf_trampoline_exit_t bpf_trampoline_exit(const struct bpf_prog *prog);

struct bpf_ksym {
 unsigned long start;
 unsigned long end;
 char name[512];
 struct list_head lnode;
 struct latch_tree_node tnode;
 bool prog;
};

enum bpf_tramp_prog_type {
 BPF_TRAMP_FENTRY,
 BPF_TRAMP_FEXIT,
 BPF_TRAMP_MODIFY_RETURN,
 BPF_TRAMP_MAX,
 BPF_TRAMP_REPLACE,
};

struct bpf_tramp_image {
 void *image;
 int size;
 struct bpf_ksym ksym;
 struct percpu_ref pcref;
 void *ip_after_call;
 void *ip_epilogue;
 union {
  struct callback_head rcu;
  struct work_struct work;
 };
};

struct bpf_trampoline {

 struct hlist_node hlist;
 struct ftrace_ops *fops;

 struct mutex mutex;
 refcount_t refcnt;
 u32 flags;
 u64 key;
 struct {
  struct btf_func_model model;
  void *addr;
  bool ftrace_managed;
 } func;




 struct bpf_prog *extension_prog;

 struct hlist_head progs_hlist[BPF_TRAMP_MAX];

 int progs_cnt[BPF_TRAMP_MAX];

 struct bpf_tramp_image *cur_image;
};

struct bpf_attach_target_info {
 struct btf_func_model fmodel;
 long tgt_addr;
 struct module *tgt_mod;
 const char *tgt_name;
 const struct btf_type *tgt_type;
};



struct bpf_dispatcher_prog {
 struct bpf_prog *prog;
 refcount_t users;
};

struct bpf_dispatcher {

 struct mutex mutex;
 void *func;
 struct bpf_dispatcher_prog progs[48];
 int num_progs;
 void *image;
 void *rw_image;
 u32 image_off;
 struct bpf_ksym ksym;




};





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) unsigned int bpf_dispatcher_nop_func(
 const void *ctx,
 const struct bpf_insn *insnsi,
 bpf_func_t bpf_func)
{
 return bpf_func(ctx, insnsi);
}


struct bpf_dynptr_kern {
 void *data;
# 1258 "../include/linux/bpf.h"
 u32 size;
 u32 offset;
} __attribute__((__aligned__(8)));

enum bpf_dynptr_type {
 BPF_DYNPTR_TYPE_INVALID,

 BPF_DYNPTR_TYPE_LOCAL,

 BPF_DYNPTR_TYPE_RINGBUF,

 BPF_DYNPTR_TYPE_SKB,

 BPF_DYNPTR_TYPE_XDP,
};

int bpf_dynptr_check_size(u32 size);
u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
# 1362 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
        struct bpf_trampoline *tr)
{
 return -524;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
          struct bpf_trampoline *tr)
{
 return -524;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct bpf_trampoline *bpf_trampoline_get(u64 key,
       struct bpf_attach_target_info *tgt_info)
{
 return ((void *)0);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_trampoline_put(struct bpf_trampoline *tr) {}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
           struct bpf_prog *from,
           struct bpf_prog *to) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool is_bpf_image_address(unsigned long address)
{
 return false;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
{
 return false;
}


struct bpf_func_info_aux {
 u16 linkage;
 bool unreliable;
 bool called : 1;
 bool verified : 1;
};

enum bpf_jit_poke_reason {
 BPF_POKE_REASON_TAIL_CALL,
};


struct bpf_jit_poke_descriptor {
 void *tailcall_target;
 void *tailcall_bypass;
 void *bypass_addr;
 void *aux;
 union {
  struct {
   struct bpf_map *map;
   u32 key;
  } tail_call;
 };
 bool tailcall_target_stable;
 u8 adj_off;
 u16 reason;
 u32 insn_idx;
};


struct bpf_ctx_arg_aux {
 u32 offset;
 enum bpf_reg_type reg_type;
 struct btf *btf;
 u32 btf_id;
};

struct btf_mod_pair {
 struct btf *btf;
 struct module *module;
};

struct bpf_kfunc_desc_tab;

struct bpf_prog_aux {
 atomic64_t refcnt;
 u32 used_map_cnt;
 u32 used_btf_cnt;
 u32 max_ctx_offset;
 u32 max_pkt_offset;
 u32 max_tp_access;
 u32 stack_depth;
 u32 id;
 u32 func_cnt;
 u32 real_func_cnt;
 u32 func_idx;
 u32 attach_btf_id;
 u32 ctx_arg_info_size;
 u32 max_rdonly_access;
 u32 max_rdwr_access;
 struct btf *attach_btf;
 const struct bpf_ctx_arg_aux *ctx_arg_info;
 struct mutex dst_mutex;
 struct bpf_prog *dst_prog;
 struct bpf_trampoline *dst_trampoline;
 enum bpf_prog_type saved_dst_prog_type;
 enum bpf_attach_type saved_dst_attach_type;
 bool verifier_zext;
 bool dev_bound;
 bool offload_requested;
 bool attach_btf_trace;
 bool attach_tracing_prog;
 bool func_proto_unreliable;
 bool tail_call_reachable;
 bool xdp_has_frags;
 bool exception_cb;
 bool exception_boundary;
 struct bpf_arena *arena;

 const struct btf_type *attach_func_proto;

 const char *attach_func_name;
 struct bpf_prog **func;
 void *jit_data;
 struct bpf_jit_poke_descriptor *poke_tab;
 struct bpf_kfunc_desc_tab *kfunc_tab;
 struct bpf_kfunc_btf_tab *kfunc_btf_tab;
 u32 size_poke_tab;



 struct bpf_ksym ksym;
 const struct bpf_prog_ops *ops;
 struct bpf_map **used_maps;
 struct mutex used_maps_mutex;
 struct btf_mod_pair *used_btfs;
 struct bpf_prog *prog;
 struct user_struct *user;
 u64 load_time;
 u32 verified_insns;
 int cgroup_atype;
 struct bpf_map *cgroup_storage[__BPF_CGROUP_STORAGE_MAX];
 char name[16U];
 u64 (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp, u64, u64);



 struct bpf_token *token;
 struct bpf_prog_offload *offload;
 struct btf *btf;
 struct bpf_func_info *func_info;
 struct bpf_func_info_aux *func_info_aux;






 struct bpf_line_info *linfo;







 void **jited_linfo;
 u32 func_info_cnt;
 u32 nr_linfo;




 u32 linfo_idx;
 struct module *mod;
 u32 num_exentries;
 struct exception_table_entry *extable;
 union {
  struct work_struct work;
  struct callback_head rcu;
 };
};

struct bpf_prog {
 u16 pages;
 u16 jited:1,
    jit_requested:1,
    gpl_compatible:1,
    cb_access:1,
    dst_needed:1,
    blinding_requested:1,
    blinded:1,
    is_func:1,
    kprobe_override:1,
    has_callchain_buf:1,
    enforce_expected_attach_type:1,
    call_get_stack:1,
    call_get_func_ip:1,
    tstamp_type_access:1,
    sleepable:1;
 enum bpf_prog_type type;
 enum bpf_attach_type expected_attach_type;
 u32 len;
 u32 jited_len;
 u8 tag[8];
 struct bpf_prog_stats *stats;
 int *active;
 unsigned int (*bpf_func)(const void *ctx,
         const struct bpf_insn *insn);
 struct bpf_prog_aux *aux;
 struct sock_fprog_kern *orig_prog;

 union {
  struct { struct { } __empty_insns; struct sock_filter insns[]; };
  struct { struct { } __empty_insnsi; struct bpf_insn insnsi[]; };
 };
};

struct bpf_array_aux {

 struct list_head poke_progs;
 struct bpf_map *map;
 struct mutex poke_mutex;
 struct work_struct work;
};

struct bpf_link {
 atomic64_t refcnt;
 u32 id;
 enum bpf_link_type type;
 const struct bpf_link_ops *ops;
 struct bpf_prog *prog;



 union {
  struct callback_head rcu;
  struct work_struct work;
 };
};

struct bpf_link_ops {
 void (*release)(struct bpf_link *link);



 void (*dealloc)(struct bpf_link *link);




 void (*dealloc_deferred)(struct bpf_link *link);
 int (*detach)(struct bpf_link *link);
 int (*update_prog)(struct bpf_link *link, struct bpf_prog *new_prog,
      struct bpf_prog *old_prog);
 void (*show_fdinfo)(const struct bpf_link *link, struct seq_file *seq);
 int (*fill_link_info)(const struct bpf_link *link,
         struct bpf_link_info *info);
 int (*update_map)(struct bpf_link *link, struct bpf_map *new_map,
     struct bpf_map *old_map);
 __poll_t (*poll)(struct file *file, struct poll_table_struct *pts);
};

struct bpf_tramp_link {
 struct bpf_link link;
 struct hlist_node tramp_hlist;
 u64 cookie;
};

struct bpf_shim_tramp_link {
 struct bpf_tramp_link link;
 struct bpf_trampoline *trampoline;
};

struct bpf_tracing_link {
 struct bpf_tramp_link link;
 enum bpf_attach_type attach_type;
 struct bpf_trampoline *trampoline;
 struct bpf_prog *tgt_prog;
};

struct bpf_raw_tp_link {
 struct bpf_link link;
 struct bpf_raw_event_map *btp;
 u64 cookie;
};

struct bpf_link_primer {
 struct bpf_link *link;
 struct file *file;
 int fd;
 u32 id;
};

struct bpf_mount_opts {
 kuid_t uid;
 kgid_t gid;
 umode_t mode;


 u64 delegate_cmds;
 u64 delegate_maps;
 u64 delegate_progs;
 u64 delegate_attachs;
};

struct bpf_token {
 struct work_struct work;
 atomic64_t refcnt;
 struct user_namespace *userns;
 u64 allowed_cmds;
 u64 allowed_maps;
 u64 allowed_progs;
 u64 allowed_attachs;



};

struct bpf_struct_ops_value;
struct btf_member;
# 1725 "../include/linux/bpf.h"
struct bpf_struct_ops {
 const struct bpf_verifier_ops *verifier_ops;
 int (*init)(struct btf *btf);
 int (*check_member)(const struct btf_type *t,
       const struct btf_member *member,
       const struct bpf_prog *prog);
 int (*init_member)(const struct btf_type *t,
      const struct btf_member *member,
      void *kdata, const void *udata);
 int (*reg)(void *kdata, struct bpf_link *link);
 void (*unreg)(void *kdata, struct bpf_link *link);
 int (*update)(void *kdata, void *old_kdata, struct bpf_link *link);
 int (*validate)(void *kdata);
 void *cfi_stubs;
 struct module *owner;
 const char *name;
 struct btf_func_model func_models[64];
};
# 1752 "../include/linux/bpf.h"
struct bpf_struct_ops_arg_info {
 struct bpf_ctx_arg_aux *info;
 u32 cnt;
};

struct bpf_struct_ops_desc {
 struct bpf_struct_ops *st_ops;

 const struct btf_type *type;
 const struct btf_type *value_type;
 u32 type_id;
 u32 value_id;


 struct bpf_struct_ops_arg_info *arg_info;
};

enum bpf_struct_ops_state {
 BPF_STRUCT_OPS_STATE_INIT,
 BPF_STRUCT_OPS_STATE_INUSE,
 BPF_STRUCT_OPS_STATE_TOBEFREE,
 BPF_STRUCT_OPS_STATE_READY,
};

struct bpf_struct_ops_common_value {
 refcount_t refcnt;
 enum bpf_struct_ops_state state;
};
# 1846 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_try_module_get(const void *data, struct module *owner)
{
 return try_module_get(owner);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_module_put(const void *data, struct module *owner)
{
 module_put(owner);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map,
           void *key,
           void *value)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_struct_ops_link_create(union bpf_attr *attr)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_ops_desc)
{
}
# 1879 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
        int cgroup_atype)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
{
}


struct bpf_array {
 struct bpf_map map;
 u32 elem_size;
 u32 index_mask;
 struct bpf_array_aux *aux;
 union {
  struct { struct { } __empty_value; char value[]; } __attribute__((__aligned__(8)));
  struct { struct { } __empty_ptrs; void * ptrs[]; } __attribute__((__aligned__(8)));
  struct { struct { } __empty_pptrs; void * pptrs[]; } __attribute__((__aligned__(8)));
 };
};







enum {
 BPF_MAX_LOOPS = 8 * 1024 * 1024,
};
# 1924 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 bpf_map_flags_to_cap(struct bpf_map *map)
{
 u32 access_flags = map->map_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG);




 if (access_flags & BPF_F_RDONLY_PROG)
  return ((((1UL))) << (0));
 else if (access_flags & BPF_F_WRONLY_PROG)
  return ((((1UL))) << (1));
 else
  return ((((1UL))) << (0)) | ((((1UL))) << (1));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_map_flags_access_ok(u32 access_flags)
{
 return (access_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG)) !=
        (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG);
}

struct bpf_event_entry {
 struct perf_event *event;
 struct file *perf_file;
 struct file *map_file;
 struct callback_head rcu;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool map_type_contains_progs(struct bpf_map *map)
{
 return map->map_type == BPF_MAP_TYPE_PROG_ARRAY ||
        map->map_type == BPF_MAP_TYPE_DEVMAP ||
        map->map_type == BPF_MAP_TYPE_CPUMAP;
}

bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp);
int bpf_prog_calc_tag(struct bpf_prog *fp);

const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void);

typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
     unsigned long off, unsigned long len);
typedef u32 (*bpf_convert_ctx_access_t)(enum bpf_access_type type,
     const struct bpf_insn *src,
     struct bpf_insn *dst,
     struct bpf_prog *prog,
     u32 *target_size);

u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
       void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy);
# 1988 "../include/linux/bpf.h"
struct bpf_prog_array_item {
 struct bpf_prog *prog;
 union {
  struct bpf_cgroup_storage *cgroup_storage[__BPF_CGROUP_STORAGE_MAX];
  u64 bpf_cookie;
 };
};

struct bpf_prog_array {
 union { struct { struct callback_head rcu; } ; struct bpf_prog_array_hdr { struct callback_head rcu; } hdr; } ;


 struct bpf_prog_array_item items[];
};

struct bpf_empty_prog_array {
 struct bpf_prog_array_hdr hdr;
 struct bpf_prog *null_prog;
};







extern struct bpf_empty_prog_array bpf_empty_prog_array;

struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags);
void bpf_prog_array_free(struct bpf_prog_array *progs);

void bpf_prog_array_free_sleepable(struct bpf_prog_array *progs);
int bpf_prog_array_length(struct bpf_prog_array *progs);
bool bpf_prog_array_is_empty(struct bpf_prog_array *array);
int bpf_prog_array_copy_to_user(struct bpf_prog_array *progs,
    __u32 *prog_ids, u32 cnt);

void bpf_prog_array_delete_safe(struct bpf_prog_array *progs,
    struct bpf_prog *old_prog);
int bpf_prog_array_delete_safe_at(struct bpf_prog_array *array, int index);
int bpf_prog_array_update_at(struct bpf_prog_array *array, int index,
        struct bpf_prog *prog);
int bpf_prog_array_copy_info(struct bpf_prog_array *array,
        u32 *prog_ids, u32 request_cnt,
        u32 *prog_cnt);
int bpf_prog_array_copy(struct bpf_prog_array *old_array,
   struct bpf_prog *exclude_prog,
   struct bpf_prog *include_prog,
   u64 bpf_cookie,
   struct bpf_prog_array **new_array);

struct bpf_run_ctx {};

struct bpf_cg_run_ctx {
 struct bpf_run_ctx run_ctx;
 const struct bpf_prog_array_item *prog_item;
 int retval;
};

struct bpf_trace_run_ctx {
 struct bpf_run_ctx run_ctx;
 u64 bpf_cookie;
 bool is_uprobe;
};

struct bpf_tramp_run_ctx {
 struct bpf_run_ctx run_ctx;
 u64 bpf_cookie;
 struct bpf_run_ctx *saved_run_ctx;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct bpf_run_ctx *bpf_set_run_ctx(struct bpf_run_ctx *new_ctx)
{
 struct bpf_run_ctx *old_ctx = ((void *)0);


 old_ctx = (__current_thread_info->task)->bpf_ctx;
 (__current_thread_info->task)->bpf_ctx = new_ctx;

 return old_ctx;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
{

 (__current_thread_info->task)->bpf_ctx = old_ctx;

}






typedef u32 (*bpf_prog_run_fn)(const struct bpf_prog *prog, const void *ctx);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32
bpf_prog_run_array(const struct bpf_prog_array *array,
     const void *ctx, bpf_prog_run_fn run_prog)
{
 const struct bpf_prog_array_item *item;
 const struct bpf_prog *prog;
 struct bpf_run_ctx *old_run_ctx;
 struct bpf_trace_run_ctx run_ctx;
 u32 ret = 1;

 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_read_lock_held()) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/bpf.h", 2094, "no rcu lock held"); } } while (0);

 if (__builtin_expect(!!(!array), 0))
  return ret;

 run_ctx.is_uprobe = false;

 migrate_disable();
 old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 item = &array->items[0];
 while ((prog = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_320(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(item->prog) == sizeof(char) || sizeof(item->prog) == sizeof(short) || sizeof(item->prog) == sizeof(int) || sizeof(item->prog) == sizeof(long)) || sizeof(item->prog) == sizeof(long long))) __compiletime_assert_320(); } while (0); (*(const volatile typeof( _Generic((item->prog), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (item->prog))) *)&(item->prog)); }))) {
  run_ctx.bpf_cookie = item->bpf_cookie;
  ret &= run_prog(prog, ctx);
  item++;
 }
 bpf_reset_run_ctx(old_run_ctx);
 migrate_enable();
 return ret;
}
# 2124 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) u32
bpf_prog_run_array_uprobe(const struct bpf_prog_array *array_rcu,
     const void *ctx, bpf_prog_run_fn run_prog)
{
 const struct bpf_prog_array_item *item;
 const struct bpf_prog *prog;
 const struct bpf_prog_array *array;
 struct bpf_run_ctx *old_run_ctx;
 struct bpf_trace_run_ctx run_ctx;
 u32 ret = 1;

 __might_fault("include/linux/bpf.h", 2135);

 rcu_read_lock_trace();
 migrate_disable();

 run_ctx.is_uprobe = true;

 array = ({ typeof(*(array_rcu)) *__UNIQUE_ID_rcu321 = (typeof(*(array_rcu)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_322(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((array_rcu)) == sizeof(char) || sizeof((array_rcu)) == sizeof(short) || sizeof((array_rcu)) == sizeof(int) || sizeof((array_rcu)) == sizeof(long)) || sizeof((array_rcu)) == sizeof(long long))) __compiletime_assert_322(); } while (0); (*(const volatile typeof( _Generic(((array_rcu)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((array_rcu)))) *)&((array_rcu))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((rcu_read_lock_trace_held()) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/bpf.h", 2142, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(array_rcu)) *)(__UNIQUE_ID_rcu321)); });
 if (__builtin_expect(!!(!array), 0))
  goto out;
 old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 item = &array->items[0];
 while ((prog = READ_ONCE(item->prog))) {
  if (!prog->sleepable)
   rcu_read_lock();

  run_ctx.bpf_cookie = item->bpf_cookie;
  ret &= run_prog(prog, ctx);
  item++;

  if (!prog->sleepable)
   rcu_read_unlock();
 }
 bpf_reset_run_ctx(old_run_ctx);
out:
 migrate_enable();
 rcu_read_unlock_trace();
 return ret;
}


extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_bpf_prog_active; extern __attribute__((section(".data" ""))) __typeof__(int) bpf_prog_active;
extern struct mutex bpf_stats_enabled_mutex;







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_disable_instrumentation(void)
{
 migrate_disable();
 this_cpu_inc(bpf_prog_active);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_enable_instrumentation(void)
{
 this_cpu_dec(bpf_prog_active);
 migrate_enable();
}

extern const struct super_operations bpf_super_ops;
extern const struct file_operations bpf_map_fops;
extern const struct file_operations bpf_prog_fops;
extern const struct file_operations bpf_iter_fops;







# 1 "../include/linux/bpf_types.h" 1




extern const struct bpf_prog_ops sk_filter_prog_ops; extern const struct bpf_verifier_ops sk_filter_verifier_ops;

extern const struct bpf_prog_ops tc_cls_act_prog_ops; extern const struct bpf_verifier_ops tc_cls_act_verifier_ops;

extern const struct bpf_prog_ops tc_cls_act_prog_ops; extern const struct bpf_verifier_ops tc_cls_act_verifier_ops;

extern const struct bpf_prog_ops xdp_prog_ops; extern const struct bpf_verifier_ops xdp_verifier_ops;
# 21 "../include/linux/bpf_types.h"
extern const struct bpf_prog_ops lwt_in_prog_ops; extern const struct bpf_verifier_ops lwt_in_verifier_ops;

extern const struct bpf_prog_ops lwt_out_prog_ops; extern const struct bpf_verifier_ops lwt_out_verifier_ops;

extern const struct bpf_prog_ops lwt_xmit_prog_ops; extern const struct bpf_verifier_ops lwt_xmit_verifier_ops;

extern const struct bpf_prog_ops lwt_seg6local_prog_ops; extern const struct bpf_verifier_ops lwt_seg6local_verifier_ops;

extern const struct bpf_prog_ops sock_ops_prog_ops; extern const struct bpf_verifier_ops sock_ops_verifier_ops;

extern const struct bpf_prog_ops sk_skb_prog_ops; extern const struct bpf_verifier_ops sk_skb_verifier_ops;

extern const struct bpf_prog_ops sk_msg_prog_ops; extern const struct bpf_verifier_ops sk_msg_verifier_ops;

extern const struct bpf_prog_ops flow_dissector_prog_ops; extern const struct bpf_verifier_ops flow_dissector_verifier_ops;
# 65 "../include/linux/bpf_types.h"
extern const struct bpf_prog_ops sk_reuseport_prog_ops; extern const struct bpf_verifier_ops sk_reuseport_verifier_ops;

extern const struct bpf_prog_ops sk_lookup_prog_ops; extern const struct bpf_verifier_ops sk_lookup_verifier_ops;
# 80 "../include/linux/bpf_types.h"
extern const struct bpf_prog_ops bpf_syscall_prog_ops; extern const struct bpf_verifier_ops bpf_syscall_verifier_ops;






extern const struct bpf_map_ops array_map_ops;
extern const struct bpf_map_ops percpu_array_map_ops;
extern const struct bpf_map_ops prog_array_map_ops;
extern const struct bpf_map_ops perf_event_array_map_ops;
# 99 "../include/linux/bpf_types.h"
extern const struct bpf_map_ops htab_map_ops;
extern const struct bpf_map_ops htab_percpu_map_ops;
extern const struct bpf_map_ops htab_lru_map_ops;
extern const struct bpf_map_ops htab_lru_percpu_map_ops;
extern const struct bpf_map_ops trie_map_ops;



extern const struct bpf_map_ops array_of_maps_map_ops;
extern const struct bpf_map_ops htab_of_maps_map_ops;



extern const struct bpf_map_ops task_storage_map_ops;

extern const struct bpf_map_ops dev_map_ops;
extern const struct bpf_map_ops dev_map_hash_ops;
extern const struct bpf_map_ops sk_storage_map_ops;
extern const struct bpf_map_ops cpu_map_ops;

extern const struct bpf_map_ops xsk_map_ops;


extern const struct bpf_map_ops sock_map_ops;
extern const struct bpf_map_ops sock_hash_ops;
extern const struct bpf_map_ops reuseport_array_ops;


extern const struct bpf_map_ops queue_map_ops;
extern const struct bpf_map_ops stack_map_ops;



extern const struct bpf_map_ops ringbuf_map_ops;
extern const struct bpf_map_ops bloom_filter_map_ops;
extern const struct bpf_map_ops user_ringbuf_map_ops;
extern const struct bpf_map_ops arena_map_ops;
# 2199 "../include/linux/bpf.h" 2




extern const struct bpf_prog_ops bpf_offload_prog_ops;
extern const struct bpf_verifier_ops tc_cls_act_analyzer_ops;
extern const struct bpf_verifier_ops xdp_analyzer_ops;

struct bpf_prog *bpf_prog_get(u32 ufd);
struct bpf_prog *bpf_prog_get_type_dev(u32 ufd, enum bpf_prog_type type,
           bool attach_drv);
void bpf_prog_add(struct bpf_prog *prog, int i);
void bpf_prog_sub(struct bpf_prog *prog, int i);
void bpf_prog_inc(struct bpf_prog *prog);
struct bpf_prog * __attribute__((__warn_unused_result__)) bpf_prog_inc_not_zero(struct bpf_prog *prog);
void bpf_prog_put(struct bpf_prog *prog);

void bpf_prog_free_id(struct bpf_prog *prog);
void bpf_map_free_id(struct bpf_map *map);

struct btf_field *btf_record_find(const struct btf_record *rec,
      u32 offset, u32 field_mask);
void btf_record_free(struct btf_record *rec);
void bpf_map_free_record(struct bpf_map *map);
struct btf_record *btf_record_dup(const struct btf_record *rec);
bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *rec_b);
void bpf_obj_free_timer(const struct btf_record *rec, void *obj);
void bpf_obj_free_workqueue(const struct btf_record *rec, void *obj);
void bpf_obj_free_fields(const struct btf_record *rec, void *obj);
void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu);

struct bpf_map *bpf_map_get(u32 ufd);
struct bpf_map *bpf_map_get_with_uref(u32 ufd);
struct bpf_map *__bpf_map_get(struct fd f);
void bpf_map_inc(struct bpf_map *map);
void bpf_map_inc_with_uref(struct bpf_map *map);
struct bpf_map *__bpf_map_inc_not_zero(struct bpf_map *map, bool uref);
struct bpf_map * __attribute__((__warn_unused_result__)) bpf_map_inc_not_zero(struct bpf_map *map);
void bpf_map_put_with_uref(struct bpf_map *map);
void bpf_map_put(struct bpf_map *map);
void *bpf_map_area_alloc(u64 size, int numa_node);
void *bpf_map_area_mmapable_alloc(u64 size, int numa_node);
void bpf_map_area_free(void *base);
bool bpf_map_write_active(const struct bpf_map *map);
void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
int generic_map_lookup_batch(struct bpf_map *map,
         const union bpf_attr *attr,
         union bpf_attr *uattr);
int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
         const union bpf_attr *attr,
         union bpf_attr *uattr);
int generic_map_delete_batch(struct bpf_map *map,
         const union bpf_attr *attr,
         union bpf_attr *uattr);
struct bpf_map *bpf_map_get_curr_or_next(u32 *id);
struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id);

int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
   unsigned long nr_pages, struct page **page_array);
# 2281 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
bpf_map_init_elem_count(struct bpf_map *map)
{
 size_t size = sizeof(*map->elem_count), align = size;
 gfp_t flags = ((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))) | (( gfp_t)((((1UL))) << (___GFP_HARDWALL_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_NOWARN_BIT)));

 map->elem_count = ({ ; ({ struct alloc_tag * __attribute__((__unused__)) _old = ((void *)0); typeof(pcpu_alloc_noprof(size, align, false, flags)) _res = pcpu_alloc_noprof(size, align, false, flags); do {} while (0); _res; }); });
 if (!map->elem_count)
  return -12;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
bpf_map_free_elem_count(struct bpf_map *map)
{
 free_percpu(map->elem_count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_map_inc_elem_count(struct bpf_map *map)
{
 this_cpu_inc(*map->elem_count);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_map_dec_elem_count(struct bpf_map *map)
{
 this_cpu_dec(*map->elem_count);
}

extern int sysctl_unprivileged_bpf_disabled;

bool bpf_token_capable(const struct bpf_token *token, int cap);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_allow_ptr_leaks(const struct bpf_token *token)
{
 return bpf_token_capable(token, 38);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_allow_uninit_stack(const struct bpf_token *token)
{
 return bpf_token_capable(token, 38);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_bypass_spec_v1(const struct bpf_token *token)
{
 return cpu_mitigations_off() || bpf_token_capable(token, 38);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_bypass_spec_v4(const struct bpf_token *token)
{
 return cpu_mitigations_off() || bpf_token_capable(token, 38);
}

int bpf_map_new_fd(struct bpf_map *map, int flags);
int bpf_prog_new_fd(struct bpf_prog *prog);

void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
     const struct bpf_link_ops *ops, struct bpf_prog *prog);
int bpf_link_prime(struct bpf_link *link, struct bpf_link_primer *primer);
int bpf_link_settle(struct bpf_link_primer *primer);
void bpf_link_cleanup(struct bpf_link_primer *primer);
void bpf_link_inc(struct bpf_link *link);
struct bpf_link *bpf_link_inc_not_zero(struct bpf_link *link);
void bpf_link_put(struct bpf_link *link);
int bpf_link_new_fd(struct bpf_link *link);
struct bpf_link *bpf_link_get_from_fd(u32 ufd);
struct bpf_link *bpf_link_get_curr_or_next(u32 *id);

void bpf_token_inc(struct bpf_token *token);
void bpf_token_put(struct bpf_token *token);
int bpf_token_create(union bpf_attr *attr);
struct bpf_token *bpf_token_get_from_fd(u32 ufd);

bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd);
bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type);
bool bpf_token_allow_prog_type(const struct bpf_token *token,
          enum bpf_prog_type prog_type,
          enum bpf_attach_type attach_type);

int bpf_obj_pin_user(u32 ufd, int path_fd, const char *pathname);
int bpf_obj_get_user(int path_fd, const char *pathname, int flags);
struct inode *bpf_get_inode(struct super_block *sb, const struct inode *dir,
       umode_t mode);
# 2385 "../include/linux/bpf.h"
enum bpf_iter_task_type {
 BPF_TASK_ITER_ALL = 0,
 BPF_TASK_ITER_TID,
 BPF_TASK_ITER_TGID,
};

struct bpf_iter_aux_info {

 struct bpf_map *map;


 struct {
  struct cgroup *start;
  enum bpf_cgroup_iter_order order;
 } cgroup;
 struct {
  enum bpf_iter_task_type type;
  u32 pid;
 } task;
};

typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
     union bpf_iter_link_info *linfo,
     struct bpf_iter_aux_info *aux);
typedef void (*bpf_iter_detach_target_t)(struct bpf_iter_aux_info *aux);
typedef void (*bpf_iter_show_fdinfo_t) (const struct bpf_iter_aux_info *aux,
     struct seq_file *seq);
typedef int (*bpf_iter_fill_link_info_t)(const struct bpf_iter_aux_info *aux,
      struct bpf_link_info *info);
typedef const struct bpf_func_proto *
(*bpf_iter_get_func_proto_t)(enum bpf_func_id func_id,
        const struct bpf_prog *prog);

enum bpf_iter_feature {
 BPF_ITER_RESCHED = ((((1UL))) << (0)),
};


struct bpf_iter_reg {
 const char *target;
 bpf_iter_attach_target_t attach_target;
 bpf_iter_detach_target_t detach_target;
 bpf_iter_show_fdinfo_t show_fdinfo;
 bpf_iter_fill_link_info_t fill_link_info;
 bpf_iter_get_func_proto_t get_func_proto;
 u32 ctx_arg_info_size;
 u32 feature;
 struct bpf_ctx_arg_aux ctx_arg_info[2];
 const struct bpf_iter_seq_info *seq_info;
};

struct bpf_iter_meta {
 union { struct seq_file * seq; __u64 :64; } __attribute__((aligned(8)));
 u64 session_id;
 u64 seq_num;
};

struct bpf_iter__bpf_map_elem {
 union { struct bpf_iter_meta * meta; __u64 :64; } __attribute__((aligned(8)));
 union { struct bpf_map * map; __u64 :64; } __attribute__((aligned(8)));
 union { void * key; __u64 :64; } __attribute__((aligned(8)));
 union { void * value; __u64 :64; } __attribute__((aligned(8)));
};

int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info);
void bpf_iter_unreg_target(const struct bpf_iter_reg *reg_info);
bool bpf_iter_prog_supported(struct bpf_prog *prog);
const struct bpf_func_proto *
bpf_iter_get_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr, struct bpf_prog *prog);
int bpf_iter_new_fd(struct bpf_link *link);
bool bpf_link_is_iter(struct bpf_link *link);
struct bpf_prog *bpf_iter_get_info(struct bpf_iter_meta *meta, bool in_stop);
int bpf_iter_run_prog(struct bpf_prog *prog, void *ctx);
void bpf_iter_map_show_fdinfo(const struct bpf_iter_aux_info *aux,
         struct seq_file *seq);
int bpf_iter_map_fill_link_info(const struct bpf_iter_aux_info *aux,
    struct bpf_link_info *info);

int map_set_for_each_callback_args(struct bpf_verifier_env *env,
       struct bpf_func_state *caller,
       struct bpf_func_state *callee);

int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value);
int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value);
int bpf_percpu_hash_update(struct bpf_map *map, void *key, void *value,
      u64 flags);
int bpf_percpu_array_update(struct bpf_map *map, void *key, void *value,
       u64 flags);

int bpf_stackmap_copy(struct bpf_map *map, void *key, void *value);

int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
     void *key, void *value, u64 map_flags);
int bpf_fd_array_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
    void *key, void *value, u64 map_flags);
int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);

int bpf_get_file_flag(int flags);
int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size,
        size_t actual_size);


int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size);


void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);


struct btf *bpf_get_btf_vmlinux(void);


struct xdp_frame;
struct sk_buff;
struct bpf_dtab_netdev;
struct bpf_cpu_map_entry;

void __dev_flush(struct list_head *flush_list);
int dev_xdp_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
      struct net_device *dev_rx);
int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf,
      struct net_device *dev_rx);
int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
     struct bpf_map *map, bool exclude_ingress);
int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
        struct bpf_prog *xdp_prog);
int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
      struct bpf_prog *xdp_prog, struct bpf_map *map,
      bool exclude_ingress);

void __cpu_map_flush(struct list_head *flush_list);
int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf,
      struct net_device *dev_rx);
int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
        struct sk_buff *skb);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int bpf_map_attr_numa_node(const union bpf_attr *attr)
{
 return (attr->map_flags & BPF_F_NUMA_NODE) ?
  attr->numa_node : (-1);
}

struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type type);
int array_map_alloc_check(union bpf_attr *attr);

int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
     union bpf_attr *uattr);
int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
     union bpf_attr *uattr);
int bpf_prog_test_run_tracing(struct bpf_prog *prog,
         const union bpf_attr *kattr,
         union bpf_attr *uattr);
int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
         const union bpf_attr *kattr,
         union bpf_attr *uattr);
int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
        const union bpf_attr *kattr,
        union bpf_attr *uattr);
int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
    const union bpf_attr *kattr,
    union bpf_attr *uattr);
int bpf_prog_test_run_nf(struct bpf_prog *prog,
    const union bpf_attr *kattr,
    union bpf_attr *uattr);
bool btf_ctx_access(int off, int size, enum bpf_access_type type,
      const struct bpf_prog *prog,
      struct bpf_insn_access_aux *info);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_tracing_ctx_access(int off, int size,
       enum bpf_access_type type)
{
 if (off < 0 || off >= sizeof(__u64) * 12)
  return false;
 if (type != BPF_READ)
  return false;
 if (off % size != 0)
  return false;
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_tracing_btf_ctx_access(int off, int size,
           enum bpf_access_type type,
           const struct bpf_prog *prog,
           struct bpf_insn_access_aux *info)
{
 if (!bpf_tracing_ctx_access(off, size, type))
  return false;
 return btf_ctx_access(off, size, type, prog, info);
}

int btf_struct_access(struct bpf_verifier_log *log,
        const struct bpf_reg_state *reg,
        int off, int size, enum bpf_access_type atype,
        u32 *next_btf_id, enum bpf_type_flag *flag, const char **field_name);
bool btf_struct_ids_match(struct bpf_verifier_log *log,
     const struct btf *btf, u32 id, int off,
     const struct btf *need_btf, u32 need_type_id,
     bool strict);

int btf_distill_func_proto(struct bpf_verifier_log *log,
      struct btf *btf,
      const struct btf_type *func_proto,
      const char *func_name,
      struct btf_func_model *m);

struct bpf_reg_state;
int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog);
int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog,
    struct btf *btf, const struct btf_type *t);
const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt,
        int comp_idx, const char *tag_key);
int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt,
      int comp_idx, const char *tag_key, int last_id);

struct bpf_prog *bpf_prog_by_id(u32 id);
struct bpf_link *bpf_link_by_id(u32 id);

const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id,
       const struct bpf_prog *prog);
void bpf_task_storage_free(struct task_struct *task);
void bpf_cgrp_storage_free(struct cgroup *cgroup);
bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
const struct btf_func_model *
bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
    const struct bpf_insn *insn);
int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
         u16 btf_fd_idx, u8 **func_addr);

struct bpf_core_ctx {
 struct bpf_verifier_log *log;
 const struct btf *btf;
};

bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
    const struct bpf_reg_state *reg,
    const char *field_name, u32 btf_id, const char *suffix);

bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
          const struct btf *reg_btf, u32 reg_id,
          const struct btf *arg_btf, u32 arg_id);

int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
     int relo_idx, void *insn);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool unprivileged_ebpf_enabled(void)
{
 return !sysctl_unprivileged_bpf_disabled;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool has_current_bpf_ctx(void)
{
 return !!(__current_thread_info->task)->bpf_ctx;
}

void __attribute__((__no_instrument_function__)) bpf_prog_inc_misses_counter(struct bpf_prog *prog);

void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
       enum bpf_dynptr_type type, u32 offset, u32 size);
void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr);
# 2928 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) int
bpf_probe_read_kernel_common(void *dst, u32 size, const void *unsafe_ptr)
{
 int ret = -14;

 if (0)
  ret = copy_from_kernel_nofault(dst, unsafe_ptr, size);
 if (__builtin_expect(!!(ret < 0), 0))
  memset(dst, 0, size);
 return ret;
}

void __bpf_free_used_btfs(struct btf_mod_pair *used_btfs, u32 len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct bpf_prog *bpf_prog_get_type(u32 ufd,
       enum bpf_prog_type type)
{
 return bpf_prog_get_type_dev(ufd, type, false);
}

void __bpf_free_used_maps(struct bpf_prog_aux *aux,
     struct bpf_map **used_maps, u32 len);

bool bpf_prog_get_ok(struct bpf_prog *, enum bpf_prog_type *, bool);

int bpf_prog_offload_compile(struct bpf_prog *prog);
void bpf_prog_dev_bound_destroy(struct bpf_prog *prog);
int bpf_prog_offload_info_fill(struct bpf_prog_info *info,
          struct bpf_prog *prog);

int bpf_map_offload_info_fill(struct bpf_map_info *info, struct bpf_map *map);

int bpf_map_offload_lookup_elem(struct bpf_map *map, void *key, void *value);
int bpf_map_offload_update_elem(struct bpf_map *map,
    void *key, void *value, u64 flags);
int bpf_map_offload_delete_elem(struct bpf_map *map, void *key);
int bpf_map_offload_get_next_key(struct bpf_map *map,
     void *key, void *next_key);

bool bpf_offload_prog_map_match(struct bpf_prog *prog, struct bpf_map *map);

struct bpf_offload_dev *
bpf_offload_dev_create(const struct bpf_prog_offload_ops *ops, void *priv);
void bpf_offload_dev_destroy(struct bpf_offload_dev *offdev);
void *bpf_offload_dev_priv(struct bpf_offload_dev *offdev);
int bpf_offload_dev_netdev_register(struct bpf_offload_dev *offdev,
        struct net_device *netdev);
void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev,
           struct net_device *netdev);
bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev);

void unpriv_ebpf_notify(int new_state);


int bpf_dev_bound_kfunc_check(struct bpf_verifier_log *log,
         struct bpf_prog_aux *prog_aux);
void *bpf_dev_bound_resolve_kfunc(struct bpf_prog *prog, u32 func_id);
int bpf_prog_dev_bound_init(struct bpf_prog *prog, union bpf_attr *attr);
int bpf_prog_dev_bound_inherit(struct bpf_prog *new_prog, struct bpf_prog *old_prog);
void bpf_dev_bound_netdev_unregister(struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_prog_is_dev_bound(const struct bpf_prog_aux *aux)
{
 return aux->dev_bound;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_prog_is_offloaded(const struct bpf_prog_aux *aux)
{
 return aux->offload_requested;
}

bool bpf_prog_dev_bound_match(const struct bpf_prog *lhs, const struct bpf_prog *rhs);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_map_is_offloaded(struct bpf_map *map)
{
 return __builtin_expect(!!(map->ops == &bpf_map_offload_ops), 0);
}

struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr);
void bpf_map_offload_map_free(struct bpf_map *map);
u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map);
int bpf_prog_test_run_syscall(struct bpf_prog *prog,
         const union bpf_attr *kattr,
         union bpf_attr *uattr);

int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog);
int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
int sock_map_update_elem_sys(struct bpf_map *map, void *key, void *value, u64 flags);
int sock_map_bpf_prog_query(const union bpf_attr *attr,
       union bpf_attr *uattr);
int sock_map_link_create(const union bpf_attr *attr, struct bpf_prog *prog);

void sock_map_unhash(struct sock *sk);
void sock_map_destroy(struct sock *sk);
void sock_map_close(struct sock *sk, long timeout);
# 3125 "../include/linux/bpf.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void
bpf_prog_inc_misses_counters(const struct bpf_prog_array *array)
{
 const struct bpf_prog_array_item *item;
 struct bpf_prog *prog;

 if (__builtin_expect(!!(!array), 0))
  return;

 item = &array->items[0];
 while ((prog = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_324(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(item->prog) == sizeof(char) || sizeof(item->prog) == sizeof(short) || sizeof(item->prog) == sizeof(int) || sizeof(item->prog) == sizeof(long)) || sizeof(item->prog) == sizeof(long long))) __compiletime_assert_324(); } while (0); (*(const volatile typeof( _Generic((item->prog), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (item->prog))) *)&(item->prog)); }))) {
  bpf_prog_inc_misses_counter(prog);
  item++;
 }
}


void bpf_sk_reuseport_detach(struct sock *sk);
int bpf_fd_reuseport_array_lookup_elem(struct bpf_map *map, void *key,
           void *value);
int bpf_fd_reuseport_array_update_elem(struct bpf_map *map, void *key,
           void *value, u64 map_flags);
# 3169 "../include/linux/bpf.h"
extern const struct bpf_func_proto bpf_map_lookup_elem_proto;
extern const struct bpf_func_proto bpf_map_update_elem_proto;
extern const struct bpf_func_proto bpf_map_delete_elem_proto;
extern const struct bpf_func_proto bpf_map_push_elem_proto;
extern const struct bpf_func_proto bpf_map_pop_elem_proto;
extern const struct bpf_func_proto bpf_map_peek_elem_proto;
extern const struct bpf_func_proto bpf_map_lookup_percpu_elem_proto;

extern const struct bpf_func_proto bpf_get_prandom_u32_proto;
extern const struct bpf_func_proto bpf_get_smp_processor_id_proto;
extern const struct bpf_func_proto bpf_get_numa_node_id_proto;
extern const struct bpf_func_proto bpf_tail_call_proto;
extern const struct bpf_func_proto bpf_ktime_get_ns_proto;
extern const struct bpf_func_proto bpf_ktime_get_boot_ns_proto;
extern const struct bpf_func_proto bpf_ktime_get_tai_ns_proto;
extern const struct bpf_func_proto bpf_get_current_pid_tgid_proto;
extern const struct bpf_func_proto bpf_get_current_uid_gid_proto;
extern const struct bpf_func_proto bpf_get_current_comm_proto;
extern const struct bpf_func_proto bpf_get_stackid_proto;
extern const struct bpf_func_proto bpf_get_stack_proto;
extern const struct bpf_func_proto bpf_get_task_stack_proto;
extern const struct bpf_func_proto bpf_get_stackid_proto_pe;
extern const struct bpf_func_proto bpf_get_stack_proto_pe;
extern const struct bpf_func_proto bpf_sock_map_update_proto;
extern const struct bpf_func_proto bpf_sock_hash_update_proto;
extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_cgroup_classid_curr_proto;
extern const struct bpf_func_proto bpf_msg_redirect_hash_proto;
extern const struct bpf_func_proto bpf_msg_redirect_map_proto;
extern const struct bpf_func_proto bpf_sk_redirect_hash_proto;
extern const struct bpf_func_proto bpf_sk_redirect_map_proto;
extern const struct bpf_func_proto bpf_spin_lock_proto;
extern const struct bpf_func_proto bpf_spin_unlock_proto;
extern const struct bpf_func_proto bpf_get_local_storage_proto;
extern const struct bpf_func_proto bpf_strtol_proto;
extern const struct bpf_func_proto bpf_strtoul_proto;
extern const struct bpf_func_proto bpf_tcp_sock_proto;
extern const struct bpf_func_proto bpf_jiffies64_proto;
extern const struct bpf_func_proto bpf_get_ns_current_pid_tgid_proto;
extern const struct bpf_func_proto bpf_event_output_data_proto;
extern const struct bpf_func_proto bpf_ringbuf_output_proto;
extern const struct bpf_func_proto bpf_ringbuf_reserve_proto;
extern const struct bpf_func_proto bpf_ringbuf_submit_proto;
extern const struct bpf_func_proto bpf_ringbuf_discard_proto;
extern const struct bpf_func_proto bpf_ringbuf_query_proto;
extern const struct bpf_func_proto bpf_ringbuf_reserve_dynptr_proto;
extern const struct bpf_func_proto bpf_ringbuf_submit_dynptr_proto;
extern const struct bpf_func_proto bpf_ringbuf_discard_dynptr_proto;
extern const struct bpf_func_proto bpf_skc_to_tcp6_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_tcp_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_tcp_timewait_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_tcp_request_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_udp6_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_unix_sock_proto;
extern const struct bpf_func_proto bpf_skc_to_mptcp_sock_proto;
extern const struct bpf_func_proto bpf_copy_from_user_proto;
extern const struct bpf_func_proto bpf_snprintf_btf_proto;
extern const struct bpf_func_proto bpf_snprintf_proto;
extern const struct bpf_func_proto bpf_per_cpu_ptr_proto;
extern const struct bpf_func_proto bpf_this_cpu_ptr_proto;
extern const struct bpf_func_proto bpf_ktime_get_coarse_ns_proto;
extern const struct bpf_func_proto bpf_sock_from_file_proto;
extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto;
extern const struct bpf_func_proto bpf_task_storage_get_recur_proto;
extern const struct bpf_func_proto bpf_task_storage_get_proto;
extern const struct bpf_func_proto bpf_task_storage_delete_recur_proto;
extern const struct bpf_func_proto bpf_task_storage_delete_proto;
extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
extern const struct bpf_func_proto bpf_sk_setsockopt_proto;
extern const struct bpf_func_proto bpf_sk_getsockopt_proto;
extern const struct bpf_func_proto bpf_unlocked_sk_setsockopt_proto;
extern const struct bpf_func_proto bpf_unlocked_sk_getsockopt_proto;
extern const struct bpf_func_proto bpf_find_vma_proto;
extern const struct bpf_func_proto bpf_loop_proto;
extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
extern const struct bpf_func_proto bpf_set_retval_proto;
extern const struct bpf_func_proto bpf_get_retval_proto;
extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;

const struct bpf_func_proto *tracing_prog_func_proto(
  enum bpf_func_id func_id, const struct bpf_prog *prog);


void bpf_user_rnd_init_once(void);
u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);


bool bpf_sock_common_is_valid_access(int off, int size,
         enum bpf_access_type type,
         struct bpf_insn_access_aux *info);
bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
         struct bpf_insn_access_aux *info);
u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
    const struct bpf_insn *si,
    struct bpf_insn *insn_buf,
    struct bpf_prog *prog,
    u32 *target_size);
int bpf_dynptr_from_skb_rdonly(struct __sk_buff *skb, u64 flags,
          struct bpf_dynptr *ptr);
# 3302 "../include/linux/bpf.h"
struct sk_reuseport_kern {
 struct sk_buff *skb;
 struct sock *sk;
 struct sock *selected_sk;
 struct sock *migrating_sk;
 void *data_end;
 u32 hash;
 u32 reuseport_id;
 bool bind_inany;
};
bool bpf_tcp_sock_is_valid_access(int off, int size, enum bpf_access_type type,
      struct bpf_insn_access_aux *info);

u32 bpf_tcp_sock_convert_ctx_access(enum bpf_access_type type,
        const struct bpf_insn *si,
        struct bpf_insn *insn_buf,
        struct bpf_prog *prog,
        u32 *target_size);

bool bpf_xdp_sock_is_valid_access(int off, int size, enum bpf_access_type type,
      struct bpf_insn_access_aux *info);

u32 bpf_xdp_sock_convert_ctx_access(enum bpf_access_type type,
        const struct bpf_insn *si,
        struct bpf_insn *insn_buf,
        struct bpf_prog *prog,
        u32 *target_size);
# 3362 "../include/linux/bpf.h"
enum bpf_text_poke_type {
 BPF_MOD_CALL,
 BPF_MOD_JUMP,
};

int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
         void *addr1, void *addr2);

void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
          struct bpf_prog *new, struct bpf_prog *old);

void *bpf_arch_text_copy(void *dst, void *src, size_t len);
int bpf_arch_text_invalidate(void *dst, size_t len);

struct btf_id_set;
bool btf_id_set_contains(const struct btf_id_set *set, u32 id);




struct bpf_bprintf_data {
 u32 *bin_args;
 char *buf;
 bool get_bin_args;
 bool get_buf;
};

int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
   u32 num_args, struct bpf_bprintf_data *data);
void bpf_bprintf_cleanup(struct bpf_bprintf_data *data);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void bpf_cgroup_atype_put(int cgroup_atype) {}


struct key;


struct bpf_key {
 struct key *key;
 bool has_ref;
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool type_is_alloc(u32 type)
{
 return type & MEM_ALLOC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t bpf_memcg_flags(gfp_t flags)
{
 if (memcg_bpf_enabled())
  return flags | (( gfp_t)((((1UL))) << (___GFP_ACCOUNT_BIT)));
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool bpf_is_subprog(const struct bpf_prog *prog)
{
 return prog->aux->func_idx != 0;
}
# 36 "../include/linux/security.h" 2
# 1 "../include/uapi/linux/lsm.h" 1
# 35 "../include/uapi/linux/lsm.h"
struct lsm_ctx {
 __u64 id;
 __u64 flags;
 __u64 len;
 __u64 ctx_len;
 __u8 ctx[] __attribute__((__counted_by__(ctx_len)));
};
# 37 "../include/linux/security.h" 2

struct linux_binprm;
struct cred;
struct rlimit;
struct kernel_siginfo;
struct sembuf;
struct kern_ipc_perm;
struct audit_context;
struct super_block;
struct inode;
struct dentry;
struct file;
struct vfsmount;
struct path;
struct qstr;
struct iattr;
struct fown_struct;
struct file_operations;
struct msg_msg;
struct xattr;
struct kernfs_node;
struct xfrm_sec_ctx;
struct mm_struct;
struct fs_context;
struct fs_parameter;
enum fs_value_type;
struct watch;
struct watch_notification;
struct lsm_ctx;
# 77 "../include/linux/security.h"
struct ctl_table;
struct audit_krule;
struct user_namespace;
struct timezone;

enum lsm_event {
 LSM_POLICY_CHANGE,
};
# 110 "../include/linux/security.h"
enum lockdown_reason {
 LOCKDOWN_NONE,
 LOCKDOWN_MODULE_SIGNATURE,
 LOCKDOWN_DEV_MEM,
 LOCKDOWN_EFI_TEST,
 LOCKDOWN_KEXEC,
 LOCKDOWN_HIBERNATION,
 LOCKDOWN_PCI_ACCESS,
 LOCKDOWN_IOPORT,
 LOCKDOWN_MSR,
 LOCKDOWN_ACPI_TABLES,
 LOCKDOWN_DEVICE_TREE,
 LOCKDOWN_PCMCIA_CIS,
 LOCKDOWN_TIOCSSERIAL,
 LOCKDOWN_MODULE_PARAMETERS,
 LOCKDOWN_MMIOTRACE,
 LOCKDOWN_DEBUGFS,
 LOCKDOWN_XMON_WR,
 LOCKDOWN_BPF_WRITE_USER,
 LOCKDOWN_DBG_WRITE_KERNEL,
 LOCKDOWN_RTAS_ERROR_INJECTION,
 LOCKDOWN_INTEGRITY_MAX,
 LOCKDOWN_KCORE,
 LOCKDOWN_KPROBES,
 LOCKDOWN_BPF_READ_KERNEL,
 LOCKDOWN_DBG_READ_KERNEL,
 LOCKDOWN_PERF,
 LOCKDOWN_TRACEFS,
 LOCKDOWN_XMON_RW,
 LOCKDOWN_XFRM_SECRET,
 LOCKDOWN_CONFIDENTIALITY_MAX,
};

extern const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1];
extern u32 lsm_active_cnt;
extern const struct lsm_id *lsm_idlist[];


extern int cap_capable(const struct cred *cred, struct user_namespace *ns,
         int cap, unsigned int opts);
extern int cap_settime(const struct timespec64 *ts, const struct timezone *tz);
extern int cap_ptrace_access_check(struct task_struct *child, unsigned int mode);
extern int cap_ptrace_traceme(struct task_struct *parent);
extern int cap_capget(const struct task_struct *target, kernel_cap_t *effective,
        kernel_cap_t *inheritable, kernel_cap_t *permitted);
extern int cap_capset(struct cred *new, const struct cred *old,
        const kernel_cap_t *effective,
        const kernel_cap_t *inheritable,
        const kernel_cap_t *permitted);
extern int cap_bprm_creds_from_file(struct linux_binprm *bprm, const struct file *file);
int cap_inode_setxattr(struct dentry *dentry, const char *name,
         const void *value, size_t size, int flags);
int cap_inode_removexattr(struct mnt_idmap *idmap,
     struct dentry *dentry, const char *name);
int cap_inode_need_killpriv(struct dentry *dentry);
int cap_inode_killpriv(struct mnt_idmap *idmap, struct dentry *dentry);
int cap_inode_getsecurity(struct mnt_idmap *idmap,
     struct inode *inode, const char *name, void **buffer,
     bool alloc);
extern int cap_mmap_addr(unsigned long addr);
extern int cap_mmap_file(struct file *file, unsigned long reqprot,
    unsigned long prot, unsigned long flags);
extern int cap_task_fix_setuid(struct cred *new, const struct cred *old, int flags);
extern int cap_task_prctl(int option, unsigned long arg2, unsigned long arg3,
     unsigned long arg4, unsigned long arg5);
extern int cap_task_setscheduler(struct task_struct *p);
extern int cap_task_setioprio(struct task_struct *p, int ioprio);
extern int cap_task_setnice(struct task_struct *p, int nice);
extern int cap_vm_enough_memory(struct mm_struct *mm, long pages);

struct msghdr;
struct sk_buff;
struct sock;
struct sockaddr;
struct socket;
struct flowi_common;
struct dst_entry;
struct xfrm_selector;
struct xfrm_policy;
struct xfrm_state;
struct xfrm_user_sec_ctx;
struct seq_file;
struct sctp_association;


extern unsigned long mmap_min_addr;
extern unsigned long dac_mmap_min_addr;
# 222 "../include/linux/security.h"
struct sched_param;
struct request_sock;







extern int mmap_min_addr_handler(const struct ctl_table *table, int write,
     void *buffer, size_t *lenp, loff_t *ppos);



typedef int (*initxattrs) (struct inode *inode,
      const struct xattr *xattr_array, void *fs_data);






enum kernel_load_data_id {
 LOADING_UNKNOWN, LOADING_FIRMWARE, LOADING_MODULE, LOADING_KEXEC_IMAGE, LOADING_KEXEC_INITRAMFS, LOADING_POLICY, LOADING_X509_CERTIFICATE, LOADING_MAX_ID,
};

static const char * const kernel_load_data_str[] = {
 "unknown", "firmware", "kernel-module", "kexec-image", "kexec-initramfs", "security-policy", "x509-certificate", "",
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *kernel_load_data_id_str(enum kernel_load_data_id id)
{
 if ((unsigned)id >= LOADING_MAX_ID)
  return kernel_load_data_str[LOADING_UNKNOWN];

 return kernel_load_data_str[id];
}
# 514 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int call_blocking_lsm_notifier(enum lsm_event event, void *data)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int register_blocking_lsm_notifier(struct notifier_block *nb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int unregister_blocking_lsm_notifier(struct notifier_block *nb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 lsm_name_to_attr(const char *name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_free_mnt_opts(void **mnt_opts)
{
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_init(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int early_security_init(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_binder_set_context_mgr(const struct cred *mgr)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_binder_transaction(const struct cred *from,
           const struct cred *to)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_binder_transfer_binder(const struct cred *from,
        const struct cred *to)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_binder_transfer_file(const struct cred *from,
      const struct cred *to,
      const struct file *file)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ptrace_access_check(struct task_struct *child,
          unsigned int mode)
{
 return cap_ptrace_access_check(child, mode);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ptrace_traceme(struct task_struct *parent)
{
 return cap_ptrace_traceme(parent);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_capget(const struct task_struct *target,
       kernel_cap_t *effective,
       kernel_cap_t *inheritable,
       kernel_cap_t *permitted)
{
 return cap_capget(target, effective, inheritable, permitted);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_capset(struct cred *new,
       const struct cred *old,
       const kernel_cap_t *effective,
       const kernel_cap_t *inheritable,
       const kernel_cap_t *permitted)
{
 return cap_capset(new, old, effective, inheritable, permitted);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_capable(const struct cred *cred,
       struct user_namespace *ns,
       int cap,
       unsigned int opts)
{
 return cap_capable(cred, ns, cap, opts);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_quotactl(int cmds, int type, int id,
         const struct super_block *sb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_quota_on(struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_syslog(int type)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_settime64(const struct timespec64 *ts,
         const struct timezone *tz)
{
 return cap_settime(ts, tz);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
{
 return __vm_enough_memory(mm, pages, cap_vm_enough_memory(mm, pages));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bprm_creds_for_exec(struct linux_binprm *bprm)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bprm_creds_from_file(struct linux_binprm *bprm,
      const struct file *file)
{
 return cap_bprm_creds_from_file(bprm, file);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bprm_check(struct linux_binprm *bprm)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_bprm_committing_creds(const struct linux_binprm *bprm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_bprm_committed_creds(const struct linux_binprm *bprm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_fs_context_submount(struct fs_context *fc,
        struct super_block *reference)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_fs_context_dup(struct fs_context *fc,
       struct fs_context *src_fc)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_fs_context_parse_param(struct fs_context *fc,
        struct fs_parameter *param)
{
 return -519;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_alloc(struct super_block *sb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sb_delete(struct super_block *sb)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sb_free(struct super_block *sb)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_eat_lsm_opts(char *options,
        void **mnt_opts)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_remount(struct super_block *sb,
          void *mnt_opts)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_mnt_opts_compat(struct super_block *sb,
           void *mnt_opts)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_kern_mount(struct super_block *sb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_show_options(struct seq_file *m,
        struct super_block *sb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_statfs(struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_mount(const char *dev_name, const struct path *path,
        const char *type, unsigned long flags,
        void *data)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_umount(struct vfsmount *mnt, int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_pivotroot(const struct path *old_path,
     const struct path *new_path)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_set_mnt_opts(struct super_block *sb,
        void *mnt_opts,
        unsigned long kern_flags,
        unsigned long *set_kern_flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sb_clone_mnt_opts(const struct super_block *oldsb,
           struct super_block *newsb,
           unsigned long kern_flags,
           unsigned long *set_kern_flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_move_mount(const struct path *from_path,
          const struct path *to_path)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_notify(const struct path *path, u64 mask,
    unsigned int obj_type)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_alloc(struct inode *inode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_free(struct inode *inode)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_dentry_init_security(struct dentry *dentry,
       int mode,
       const struct qstr *name,
       const char **xattr_name,
       void **ctx,
       u32 *ctxlen)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_dentry_create_files_as(struct dentry *dentry,
        int mode, struct qstr *name,
        const struct cred *old,
        struct cred *new)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_init_security(struct inode *inode,
      struct inode *dir,
      const struct qstr *qstr,
      const initxattrs xattrs,
      void *fs_data)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_init_security_anon(struct inode *inode,
          const struct qstr *name,
          const struct inode *context_inode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_create(struct inode *dir,
      struct dentry *dentry,
      umode_t mode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
security_inode_post_create_tmpfile(struct mnt_idmap *idmap, struct inode *inode)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_link(struct dentry *old_dentry,
           struct inode *dir,
           struct dentry *new_dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_unlink(struct inode *dir,
      struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_symlink(struct inode *dir,
       struct dentry *dentry,
       const char *old_name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_mkdir(struct inode *dir,
     struct dentry *dentry,
     int mode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_rmdir(struct inode *dir,
     struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_mknod(struct inode *dir,
     struct dentry *dentry,
     int mode, dev_t dev)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_rename(struct inode *old_dir,
      struct dentry *old_dentry,
      struct inode *new_dir,
      struct dentry *new_dentry,
      unsigned int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_readlink(struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_follow_link(struct dentry *dentry,
          struct inode *inode,
          bool rcu)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_permission(struct inode *inode, int mask)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_setattr(struct mnt_idmap *idmap,
      struct dentry *dentry,
      struct iattr *attr)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
security_inode_post_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
       int ia_valid)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_getattr(const struct path *path)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_setxattr(struct mnt_idmap *idmap,
  struct dentry *dentry, const char *name, const void *value,
  size_t size, int flags)
{
 return cap_inode_setxattr(dentry, name, value, size, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_set_acl(struct mnt_idmap *idmap,
      struct dentry *dentry,
      const char *acl_name,
      struct posix_acl *kacl)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_post_set_acl(struct dentry *dentry,
            const char *acl_name,
            struct posix_acl *kacl)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_get_acl(struct mnt_idmap *idmap,
      struct dentry *dentry,
      const char *acl_name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_remove_acl(struct mnt_idmap *idmap,
         struct dentry *dentry,
         const char *acl_name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_post_remove_acl(struct mnt_idmap *idmap,
        struct dentry *dentry,
        const char *acl_name)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_post_setxattr(struct dentry *dentry,
  const char *name, const void *value, size_t size, int flags)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_getxattr(struct dentry *dentry,
   const char *name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_listxattr(struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_removexattr(struct mnt_idmap *idmap,
          struct dentry *dentry,
          const char *name)
{
 return cap_inode_removexattr(idmap, dentry, name);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_post_removexattr(struct dentry *dentry,
         const char *name)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_need_killpriv(struct dentry *dentry)
{
 return cap_inode_need_killpriv(dentry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_killpriv(struct mnt_idmap *idmap,
       struct dentry *dentry)
{
 return cap_inode_killpriv(idmap, dentry);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_getsecurity(struct mnt_idmap *idmap,
          struct inode *inode,
          const char *name, void **buffer,
          bool alloc)
{
 return cap_inode_getsecurity(idmap, inode, name, buffer, alloc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_setsecurity(struct inode *inode, const char *name, const void *value, size_t size, int flags)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_listsecurity(struct inode *inode, char *buffer, size_t buffer_size)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_getsecid(struct inode *inode, u32 *secid)
{
 *secid = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_copy_up(struct dentry *src, struct cred **new)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernfs_init_security(struct kernfs_node *kn_dir,
      struct kernfs_node *kn)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_copy_up_xattr(struct dentry *src, const char *name)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_permission(struct file *file, int mask)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_alloc(struct file *file)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_file_release(struct file *file)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_file_free(struct file *file)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_ioctl(struct file *file, unsigned int cmd,
          unsigned long arg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_ioctl_compat(struct file *file,
          unsigned int cmd,
          unsigned long arg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_mmap_file(struct file *file, unsigned long prot,
         unsigned long flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_mmap_addr(unsigned long addr)
{
 return cap_mmap_addr(addr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_mprotect(struct vm_area_struct *vma,
      unsigned long reqprot,
      unsigned long prot)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_lock(struct file *file, unsigned int cmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_fcntl(struct file *file, unsigned int cmd,
          unsigned long arg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_file_set_fowner(struct file *file)
{
 return;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_send_sigiotask(struct task_struct *tsk,
            struct fown_struct *fown,
            int sig)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_receive(struct file *file)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_open(struct file *file)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_post_open(struct file *file, int mask)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_file_truncate(struct file *file)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_alloc(struct task_struct *task,
          unsigned long clone_flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_task_free(struct task_struct *task)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_cred_alloc_blank(struct cred *cred, gfp_t gfp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_cred_free(struct cred *cred)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_prepare_creds(struct cred *new,
      const struct cred *old,
      gfp_t gfp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_transfer_creds(struct cred *new,
        const struct cred *old)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_cred_getsecid(const struct cred *c, u32 *secid)
{
 *secid = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_act_as(struct cred *cred, u32 secid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_create_files_as(struct cred *cred,
        struct inode *inode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_module_request(char *kmod_name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_load_data(enum kernel_load_data_id id, bool contents)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_post_load_data(char *buf, loff_t size,
       enum kernel_load_data_id id,
       char *description)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_read_file(struct file *file,
         enum kernel_read_file_id id,
         bool contents)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_kernel_post_read_file(struct file *file,
       char *buf, loff_t size,
       enum kernel_read_file_id id)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_fix_setuid(struct cred *new,
        const struct cred *old,
        int flags)
{
 return cap_task_fix_setuid(new, old, flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_fix_setgid(struct cred *new,
        const struct cred *old,
        int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_fix_setgroups(struct cred *new,
        const struct cred *old)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_setpgid(struct task_struct *p, pid_t pgid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_getpgid(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_getsid(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_current_getsecid_subj(u32 *secid)
{
 *secid = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_task_getsecid_obj(struct task_struct *p, u32 *secid)
{
 *secid = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_setnice(struct task_struct *p, int nice)
{
 return cap_task_setnice(p, nice);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_setioprio(struct task_struct *p, int ioprio)
{
 return cap_task_setioprio(p, ioprio);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_getioprio(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_prlimit(const struct cred *cred,
     const struct cred *tcred,
     unsigned int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_setrlimit(struct task_struct *p,
       unsigned int resource,
       struct rlimit *new_rlim)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_setscheduler(struct task_struct *p)
{
 return cap_task_setscheduler(p);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_getscheduler(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_movememory(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_kill(struct task_struct *p,
         struct kernel_siginfo *info, int sig,
         const struct cred *cred)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_task_prctl(int option, unsigned long arg2,
          unsigned long arg3,
          unsigned long arg4,
          unsigned long arg5)
{
 return cap_task_prctl(option, arg2, arg3, arg4, arg5);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_task_to_inode(struct task_struct *p, struct inode *inode)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_create_user_ns(const struct cred *cred)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ipc_permission(struct kern_ipc_perm *ipcp,
       short flag)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_ipc_getsecid(struct kern_ipc_perm *ipcp, u32 *secid)
{
 *secid = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_msg_alloc(struct msg_msg *msg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_msg_msg_free(struct msg_msg *msg)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_queue_alloc(struct kern_ipc_perm *msq)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_msg_queue_free(struct kern_ipc_perm *msq)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_queue_associate(struct kern_ipc_perm *msq,
            int msqflg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_queue_msgctl(struct kern_ipc_perm *msq, int cmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_queue_msgsnd(struct kern_ipc_perm *msq,
         struct msg_msg *msg, int msqflg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_msg_queue_msgrcv(struct kern_ipc_perm *msq,
         struct msg_msg *msg,
         struct task_struct *target,
         long type, int mode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_shm_alloc(struct kern_ipc_perm *shp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_shm_free(struct kern_ipc_perm *shp)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_shm_associate(struct kern_ipc_perm *shp,
      int shmflg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_shm_shmctl(struct kern_ipc_perm *shp, int cmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_shm_shmat(struct kern_ipc_perm *shp,
         char *shmaddr, int shmflg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sem_alloc(struct kern_ipc_perm *sma)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sem_free(struct kern_ipc_perm *sma)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sem_associate(struct kern_ipc_perm *sma, int semflg)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sem_semctl(struct kern_ipc_perm *sma, int cmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sem_semop(struct kern_ipc_perm *sma,
         struct sembuf *sops, unsigned nsops,
         int alter)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_d_instantiate(struct dentry *dentry,
       struct inode *inode)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_getselfattr(unsigned int attr,
           struct lsm_ctx *ctx,
           size_t *size, u32 flags)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_setselfattr(unsigned int attr,
           struct lsm_ctx *ctx,
           size_t size, u32 flags)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_getprocattr(struct task_struct *p, int lsmid,
           const char *name, char **value)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_setprocattr(int lsmid, char *name, void *value,
           size_t size)
{
 return -22;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_netlink_send(struct sock *sk, struct sk_buff *skb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ismaclabel(const char *name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_secctx_to_secid(const char *secdata,
        u32 seclen,
        u32 *secid)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_release_secctx(char *secdata, u32 seclen)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inode_invalidate_secctx(struct inode *inode)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_locked_down(enum lockdown_reason what)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int lsm_fill_user_ctx(struct lsm_ctx *uctx,
        u32 *uctx_len, void *val, size_t val_len,
        u64 id, u64 flags)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_post_notification(const struct cred *w_cred,
          const struct cred *cred,
          struct watch_notification *n)
{
 return 0;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_watch_key(struct key *key)
{
 return 0;
}
# 1567 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_unix_stream_connect(struct sock *sock,
            struct sock *other,
            struct sock *newsk)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_unix_may_send(struct socket *sock,
      struct socket *other)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_create(int family, int type,
      int protocol, int kern)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_post_create(struct socket *sock,
           int family,
           int type,
           int protocol, int kern)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_socketpair(struct socket *socka,
          struct socket *sockb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_bind(struct socket *sock,
           struct sockaddr *address,
           int addrlen)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_connect(struct socket *sock,
       struct sockaddr *address,
       int addrlen)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_listen(struct socket *sock, int backlog)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_accept(struct socket *sock,
      struct socket *newsock)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_sendmsg(struct socket *sock,
       struct msghdr *msg, int size)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_recvmsg(struct socket *sock,
       struct msghdr *msg, int size,
       int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_getsockname(struct socket *sock)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_getpeername(struct socket *sock)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_getsockopt(struct socket *sock,
          int level, int optname)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_setsockopt(struct socket *sock,
          int level, int optname)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_shutdown(struct socket *sock, int how)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sock_rcv_skb(struct sock *sk,
     struct sk_buff *skb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_getpeersec_stream(struct socket *sock,
          sockptr_t optval,
          sockptr_t optlen,
          unsigned int len)
{
 return -92;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_socket_getpeersec_dgram(struct socket *sock, struct sk_buff *skb, u32 *secid)
{
 return -92;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sk_alloc(struct sock *sk, int family, gfp_t priority)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sk_free(struct sock *sk)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sk_clone(const struct sock *sk, struct sock *newsk)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sk_classify_flow(const struct sock *sk,
          struct flowi_common *flic)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_req_classify_flow(const struct request_sock *req,
           struct flowi_common *flic)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sock_graft(struct sock *sk, struct socket *parent)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_inet_conn_request(const struct sock *sk,
   struct sk_buff *skb, struct request_sock *req)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inet_csk_clone(struct sock *newsk,
   const struct request_sock *req)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_inet_conn_established(struct sock *sk,
   struct sk_buff *skb)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_secmark_relabel_packet(u32 secid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_secmark_refcount_inc(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_secmark_refcount_dec(void)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_tun_dev_alloc_security(void **security)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_tun_dev_free_security(void *security)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_tun_dev_create(void)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_tun_dev_attach_queue(void *security)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_tun_dev_attach(struct sock *sk, void *security)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_tun_dev_open(void *security)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sctp_assoc_request(struct sctp_association *asoc,
           struct sk_buff *skb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sctp_bind_connect(struct sock *sk, int optname,
          struct sockaddr *address,
          int addrlen)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_sctp_sk_clone(struct sctp_association *asoc,
       struct sock *sk,
       struct sock *newsk)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_sctp_assoc_established(struct sctp_association *asoc,
        struct sk_buff *skb)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
{
 return 0;
}
# 1805 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ib_pkey_access(void *sec, u64 subnet_prefix, u16 pkey)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ib_endport_manage_subnet(void *sec, const char *dev_name, u8 port_num)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_ib_alloc_security(void **sec)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_ib_free_security(void *sec)
{
}
# 1846 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp,
          struct xfrm_user_sec_ctx *sec_ctx,
          gfp_t gfp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_policy_clone(struct xfrm_sec_ctx *old, struct xfrm_sec_ctx **new_ctxp)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_xfrm_policy_free(struct xfrm_sec_ctx *ctx)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_policy_delete(struct xfrm_sec_ctx *ctx)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_state_alloc(struct xfrm_state *x,
     struct xfrm_user_sec_ctx *sec_ctx)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_state_alloc_acquire(struct xfrm_state *x,
     struct xfrm_sec_ctx *polsec, u32 secid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_xfrm_state_free(struct xfrm_state *x)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_state_delete(struct xfrm_state *x)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
           struct xfrm_policy *xp,
           const struct flowi_common *flic)
{
 return 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_skb_classify_flow(struct sk_buff *skb,
           struct flowi_common *flic)
{
}
# 1931 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_unlink(const struct path *dir, struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_mkdir(const struct path *dir, struct dentry *dentry,
          umode_t mode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_rmdir(const struct path *dir, struct dentry *dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_mknod(const struct path *dir, struct dentry *dentry,
          umode_t mode, unsigned int dev)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_path_post_mknod(struct mnt_idmap *idmap,
         struct dentry *dentry)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_truncate(const struct path *path)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_symlink(const struct path *dir, struct dentry *dentry,
     const char *old_name)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_link(struct dentry *old_dentry,
         const struct path *new_dir,
         struct dentry *new_dentry)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_rename(const struct path *old_dir,
           struct dentry *old_dentry,
           const struct path *new_dir,
           struct dentry *new_dentry,
           unsigned int flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_chmod(const struct path *path, umode_t mode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_chown(const struct path *path, kuid_t uid, kgid_t gid)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_path_chroot(const struct path *path)
{
 return 0;
}
# 2014 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_key_alloc(struct key *key,
         const struct cred *cred,
         unsigned long flags)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_key_free(struct key *key)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_key_permission(key_ref_t key_ref,
       const struct cred *cred,
       enum key_need_perm need_perm)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_key_getsecurity(struct key *key, char **_buffer)
{
 *_buffer = ((void *)0);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_key_post_create_or_update(struct key *keyring,
            struct key *key,
            const void *payload,
            size_t payload_len,
            unsigned long flags,
            bool create)
{ }
# 2084 "../include/linux/security.h"
extern struct dentry *securityfs_create_file(const char *name, umode_t mode,
          struct dentry *parent, void *data,
          const struct file_operations *fops);
extern struct dentry *securityfs_create_dir(const char *name, struct dentry *parent);
struct dentry *securityfs_create_symlink(const char *name,
      struct dentry *parent,
      const char *target,
      const struct inode_operations *iops);
extern void securityfs_remove(struct dentry *dentry);
# 2125 "../include/linux/security.h"
union bpf_attr;
struct bpf_map;
struct bpf_prog;
struct bpf_token;
# 2145 "../include/linux/security.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf(int cmd, union bpf_attr *attr,
          unsigned int size)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_map(struct bpf_map *map, fmode_t fmode)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_prog(struct bpf_prog *prog)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_map_create(struct bpf_map *map, union bpf_attr *attr,
       struct bpf_token *token)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_bpf_map_free(struct bpf_map *map)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_prog_load(struct bpf_prog *prog, union bpf_attr *attr,
      struct bpf_token *token)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_bpf_prog_free(struct bpf_prog *prog)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_token_create(struct bpf_token *token, union bpf_attr *attr,
         struct path *path)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void security_bpf_token_free(struct bpf_token *token)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_token_cmd(const struct bpf_token *token, enum bpf_cmd cmd)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int security_bpf_token_capable(const struct bpf_token *token, int cap)
{
 return 0;
}
# 10 "../include/net/scm.h" 2



# 1 "../include/net/compat.h" 1





struct sock;



struct compat_msghdr {
 compat_uptr_t msg_name;
 compat_int_t msg_namelen;
 compat_uptr_t msg_iov;
 compat_size_t msg_iovlen;
 compat_uptr_t msg_control;
 compat_size_t msg_controllen;
 compat_uint_t msg_flags;
};

struct compat_mmsghdr {
 struct compat_msghdr msg_hdr;
 compat_uint_t msg_len;
};

struct compat_cmsghdr {
 compat_size_t cmsg_len;
 compat_int_t cmsg_level;
 compat_int_t cmsg_type;
};

struct compat_rtentry {
 u32 rt_pad1;
 struct sockaddr rt_dst;
 struct sockaddr rt_gateway;
 struct sockaddr rt_genmask;
 unsigned short rt_flags;
 short rt_pad2;
 u32 rt_pad3;
 unsigned char rt_tos;
 unsigned char rt_class;
 short rt_pad4;
 short rt_metric;
 compat_uptr_t rt_dev;
 u32 rt_mtu;
 u32 rt_window;
 unsigned short rt_irtt;
};

int __get_compat_msghdr(struct msghdr *kmsg, struct compat_msghdr *msg,
   struct sockaddr **save_addr);
int get_compat_msghdr(struct msghdr *, struct compat_msghdr *,
        struct sockaddr **, struct iovec **);
int put_cmsg_compat(struct msghdr*, int, int, int, void *);

int cmsghdr_from_user_compat_to_kern(struct msghdr *, struct sock *,
         unsigned char *, int);

struct compat_group_req {
 __u32 gr_interface;
 struct __kernel_sockaddr_storage gr_group
  __attribute__((__aligned__(4)));
} __attribute__((__packed__));

struct compat_group_source_req {
 __u32 gsr_interface;
 struct __kernel_sockaddr_storage gsr_group
  __attribute__((__aligned__(4)));
 struct __kernel_sockaddr_storage gsr_source
  __attribute__((__aligned__(4)));
} __attribute__((__packed__));

struct compat_group_filter {
 union {
  struct {
   __u32 gf_interface_aux;
   struct __kernel_sockaddr_storage gf_group_aux
    __attribute__((__aligned__(4)));
   __u32 gf_fmode_aux;
   __u32 gf_numsrc_aux;
   struct __kernel_sockaddr_storage gf_slist[1]
    __attribute__((__aligned__(4)));
  } __attribute__((__packed__));
  struct {
   __u32 gf_interface;
   struct __kernel_sockaddr_storage gf_group
    __attribute__((__aligned__(4)));
   __u32 gf_fmode;
   __u32 gf_numsrc;
   struct __kernel_sockaddr_storage gf_slist_flex[]
    __attribute__((__aligned__(4)));
  } __attribute__((__packed__));
 };
} __attribute__((__packed__));
# 14 "../include/net/scm.h" 2






struct scm_creds {
 u32 pid;
 kuid_t uid;
 kgid_t gid;
};


struct unix_edge;


struct scm_fp_list {
 short count;
 short count_unix;
 short max;

 bool inflight;
 bool dead;
 struct list_head vertices;
 struct unix_edge *edges;

 struct user_struct *user;
 struct file *fp[253];
};

struct scm_cookie {
 struct pid *pid;
 struct scm_fp_list *fp;
 struct scm_creds creds;



};

void scm_detach_fds(struct msghdr *msg, struct scm_cookie *scm);
void scm_detach_fds_compat(struct msghdr *msg, struct scm_cookie *scm);
int __scm_send(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm);
void __scm_destroy(struct scm_cookie *scm);
struct scm_fp_list *scm_fp_dup(struct scm_fp_list *fpl);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unix_get_peersec_dgram(struct socket *sock, struct scm_cookie *scm)
{ }


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_set_cred(struct scm_cookie *scm,
        struct pid *pid, kuid_t uid, kgid_t gid)
{
 scm->pid = get_pid(pid);
 scm->creds.pid = pid_vnr(pid);
 scm->creds.uid = uid;
 scm->creds.gid = gid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_destroy_cred(struct scm_cookie *scm)
{
 put_pid(scm->pid);
 scm->pid = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_destroy(struct scm_cookie *scm)
{
 scm_destroy_cred(scm);
 if (scm->fp)
  __scm_destroy(scm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int scm_send(struct socket *sock, struct msghdr *msg,
          struct scm_cookie *scm, bool forcecreds)
{
 memset(scm, 0, sizeof(*scm));
 scm->creds.uid = (kuid_t){ -1 };
 scm->creds.gid = (kgid_t){ -1 };
 if (forcecreds)
  scm_set_cred(scm, task_tgid((__current_thread_info->task)), (({ ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((1))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/scm.h", 98, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*((__current_thread_info->task)->cred)) *)(((__current_thread_info->task)->cred))); })->uid; })), (({ ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((1))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/scm.h", 98, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*((__current_thread_info->task)->cred)) *)(((__current_thread_info->task)->cred))); })->gid; })));
 unix_get_peersec_dgram(sock, scm);
 if (msg->msg_controllen <= 0)
  return 0;
 return __scm_send(sock, msg, scm);
}
# 127 "../include/net/scm.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_passec(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm)
{ }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool scm_has_secdata(struct socket *sock)
{
 return false;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_pidfd_recv(struct msghdr *msg, struct scm_cookie *scm)
{
 struct file *pidfd_file = ((void *)0);
 int len, pidfd;




 if (msg->msg_flags & 0)
  len = sizeof(struct compat_cmsghdr) + sizeof(int);
 else
  len = sizeof(struct cmsghdr) + sizeof(int);

 if (msg->msg_controllen < len) {
  msg->msg_flags |= 8;
  return;
 }

 if (!scm->pid)
  return;

 pidfd = pidfd_prepare(scm->pid, 0, &pidfd_file);

 if (put_cmsg(msg, 1, 0x04, sizeof(int), &pidfd)) {
  if (pidfd_file) {
   put_unused_fd(pidfd);
   fput(pidfd_file);
  }

  return;
 }

 if (pidfd_file)
  fd_install(pidfd, pidfd_file);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __scm_recv_common(struct socket *sock, struct msghdr *msg,
         struct scm_cookie *scm, int flags)
{
 if (!msg->msg_control) {
  if (((__builtin_constant_p(3) && __builtin_constant_p((uintptr_t)(&sock->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sock->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sock->flags))) ? const_test_bit(3, &sock->flags) : arch_test_bit(3, &sock->flags)) ||
      ((__builtin_constant_p(7) && __builtin_constant_p((uintptr_t)(&sock->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sock->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sock->flags))) ? const_test_bit(7, &sock->flags) : arch_test_bit(7, &sock->flags)) ||
      scm->fp || scm_has_secdata(sock))
   msg->msg_flags |= 8;
  scm_destroy(scm);
  return false;
 }

 if (((__builtin_constant_p(3) && __builtin_constant_p((uintptr_t)(&sock->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sock->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sock->flags))) ? const_test_bit(3, &sock->flags) : arch_test_bit(3, &sock->flags))) {
  struct user_namespace *current_ns = current_user_ns();
  struct ucred ucreds = {
   .pid = scm->creds.pid,
   .uid = from_kuid_munged(current_ns, scm->creds.uid),
   .gid = from_kgid_munged(current_ns, scm->creds.gid),
  };
  put_cmsg(msg, 1, 0x02, sizeof(ucreds), &ucreds);
 }

 scm_passec(sock, msg, scm);

 if (scm->fp)
  scm_detach_fds(msg, scm);

 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_recv(struct socket *sock, struct msghdr *msg,
       struct scm_cookie *scm, int flags)
{
 if (!__scm_recv_common(sock, msg, scm, flags))
  return;

 scm_destroy_cred(scm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void scm_recv_unix(struct socket *sock, struct msghdr *msg,
     struct scm_cookie *scm, int flags)
{
 if (!__scm_recv_common(sock, msg, scm, flags))
  return;

 if (((__builtin_constant_p(7) && __builtin_constant_p((uintptr_t)(&sock->flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sock->flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sock->flags))) ? const_test_bit(7, &sock->flags) : arch_test_bit(7, &sock->flags)))
  scm_pidfd_recv(msg, scm);

 scm_destroy_cred(scm);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int scm_recv_one_fd(struct file *f, int *ufd,
      unsigned int flags)
{
 if (!ufd)
  return -14;
 return receive_fd(f, ufd, flags);
}
# 10 "../include/linux/netlink.h" 2
# 1 "../include/uapi/linux/netlink.h" 1
# 37 "../include/uapi/linux/netlink.h"
struct sockaddr_nl {
 __kernel_sa_family_t nl_family;
 unsigned short nl_pad;
 __u32 nl_pid;
        __u32 nl_groups;
};
# 52 "../include/uapi/linux/netlink.h"
struct nlmsghdr {
 __u32 nlmsg_len;
 __u16 nlmsg_type;
 __u16 nlmsg_flags;
 __u32 nlmsg_seq;
 __u32 nlmsg_pid;
};
# 119 "../include/uapi/linux/netlink.h"
struct nlmsgerr {
 int error;
 struct nlmsghdr msg;
# 131 "../include/uapi/linux/netlink.h"
};
# 150 "../include/uapi/linux/netlink.h"
enum nlmsgerr_attrs {
 NLMSGERR_ATTR_UNUSED,
 NLMSGERR_ATTR_MSG,
 NLMSGERR_ATTR_OFFS,
 NLMSGERR_ATTR_COOKIE,
 NLMSGERR_ATTR_POLICY,
 NLMSGERR_ATTR_MISS_TYPE,
 NLMSGERR_ATTR_MISS_NEST,

 __NLMSGERR_ATTR_MAX,
 NLMSGERR_ATTR_MAX = __NLMSGERR_ATTR_MAX - 1
};
# 178 "../include/uapi/linux/netlink.h"
struct nl_pktinfo {
 __u32 group;
};

struct nl_mmap_req {
 unsigned int nm_block_size;
 unsigned int nm_block_nr;
 unsigned int nm_frame_size;
 unsigned int nm_frame_nr;
};

struct nl_mmap_hdr {
 unsigned int nm_status;
 unsigned int nm_len;
 __u32 nm_group;

 __u32 nm_pid;
 __u32 nm_uid;
 __u32 nm_gid;
};
# 215 "../include/uapi/linux/netlink.h"
enum {
 NETLINK_UNCONNECTED = 0,
 NETLINK_CONNECTED,
};
# 229 "../include/uapi/linux/netlink.h"
struct nlattr {
 __u16 nla_len;
 __u16 nla_type;
};
# 265 "../include/uapi/linux/netlink.h"
struct nla_bitfield32 {
 __u32 value;
 __u32 selector;
};
# 304 "../include/uapi/linux/netlink.h"
enum netlink_attribute_type {
 NL_ATTR_TYPE_INVALID,

 NL_ATTR_TYPE_FLAG,

 NL_ATTR_TYPE_U8,
 NL_ATTR_TYPE_U16,
 NL_ATTR_TYPE_U32,
 NL_ATTR_TYPE_U64,

 NL_ATTR_TYPE_S8,
 NL_ATTR_TYPE_S16,
 NL_ATTR_TYPE_S32,
 NL_ATTR_TYPE_S64,

 NL_ATTR_TYPE_BINARY,
 NL_ATTR_TYPE_STRING,
 NL_ATTR_TYPE_NUL_STRING,

 NL_ATTR_TYPE_NESTED,
 NL_ATTR_TYPE_NESTED_ARRAY,

 NL_ATTR_TYPE_BITFIELD32,

 NL_ATTR_TYPE_SINT,
 NL_ATTR_TYPE_UINT,
};
# 363 "../include/uapi/linux/netlink.h"
enum netlink_policy_type_attr {
 NL_POLICY_TYPE_ATTR_UNSPEC,
 NL_POLICY_TYPE_ATTR_TYPE,
 NL_POLICY_TYPE_ATTR_MIN_VALUE_S,
 NL_POLICY_TYPE_ATTR_MAX_VALUE_S,
 NL_POLICY_TYPE_ATTR_MIN_VALUE_U,
 NL_POLICY_TYPE_ATTR_MAX_VALUE_U,
 NL_POLICY_TYPE_ATTR_MIN_LENGTH,
 NL_POLICY_TYPE_ATTR_MAX_LENGTH,
 NL_POLICY_TYPE_ATTR_POLICY_IDX,
 NL_POLICY_TYPE_ATTR_POLICY_MAXTYPE,
 NL_POLICY_TYPE_ATTR_BITFIELD32_MASK,
 NL_POLICY_TYPE_ATTR_PAD,
 NL_POLICY_TYPE_ATTR_MASK,


 __NL_POLICY_TYPE_ATTR_MAX,
 NL_POLICY_TYPE_ATTR_MAX = __NL_POLICY_TYPE_ATTR_MAX - 1
};
# 11 "../include/linux/netlink.h" 2

struct net;

void do_trace_netlink_extack(const char *msg);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlmsghdr *nlmsg_hdr(const struct sk_buff *skb)
{
 return (struct nlmsghdr *)skb->data;
}

enum netlink_skb_flags {
 NETLINK_SKB_DST = 0x8,
};

struct netlink_skb_parms {
 struct scm_creds creds;
 __u32 portid;
 __u32 dst_group;
 __u32 flags;
 struct sock *sk;
 bool nsid_is_set;
 int nsid;
};





void netlink_table_grab(void);
void netlink_table_ungrab(void);





struct netlink_kernel_cfg {
 unsigned int groups;
 unsigned int flags;
 void (*input)(struct sk_buff *skb);
 int (*bind)(struct net *net, int group);
 void (*unbind)(struct net *net, int group);
 void (*release) (struct sock *sk, unsigned long *groups);
};

struct sock *__netlink_kernel_create(struct net *net, int unit,
         struct module *module,
         struct netlink_kernel_cfg *cfg);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *
netlink_kernel_create(struct net *net, int unit, struct netlink_kernel_cfg *cfg)
{
 return __netlink_kernel_create(net, unit, ((struct module *)0), cfg);
}
# 81 "../include/linux/netlink.h"
struct netlink_ext_ack {
 const char *_msg;
 const struct nlattr *bad_attr;
 const struct nla_policy *policy;
 const struct nlattr *miss_nest;
 u16 miss_type;
 u8 cookie[20];
 u8 cookie_len;
 char _msg_buf[80];
};
# 209 "../include/linux/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nl_set_extack_cookie_u64(struct netlink_ext_ack *extack,
         u64 cookie)
{
 if (!extack)
  return;
 memcpy(extack->cookie, &cookie, sizeof(cookie));
 extack->cookie_len = sizeof(cookie);
}

void netlink_kernel_release(struct sock *sk);
int __netlink_change_ngroups(struct sock *sk, unsigned int groups);
int netlink_change_ngroups(struct sock *sk, unsigned int groups);
void __netlink_clear_multicast_users(struct sock *sk, unsigned int group);
void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
   const struct netlink_ext_ack *extack);
int netlink_has_listeners(struct sock *sk, unsigned int group);
bool netlink_strict_get_check(struct sk_buff *skb);

int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock);
int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid,
        __u32 group, gfp_t allocation);

typedef int (*netlink_filter_fn)(struct sock *dsk, struct sk_buff *skb, void *data);

int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
          __u32 portid, __u32 group, gfp_t allocation,
          netlink_filter_fn filter,
          void *filter_data);
int netlink_set_err(struct sock *ssk, __u32 portid, __u32 group, int code);
int netlink_register_notifier(struct notifier_block *nb);
int netlink_unregister_notifier(struct notifier_block *nb);


struct sock *netlink_getsockbyfilp(struct file *filp);
int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
        long *timeo, struct sock *ssk);
void netlink_detachskb(struct sock *sk, struct sk_buff *skb);
int netlink_sendskb(struct sock *sk, struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *
netlink_skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
{
 struct sk_buff *nskb;

 nskb = skb_clone(skb, gfp_mask);
 if (!nskb)
  return ((void *)0);


 if (is_vmalloc_addr(skb->head))
  nskb->destructor = skb->destructor;

 return nskb;
}
# 279 "../include/linux/netlink.h"
struct netlink_callback {
 struct sk_buff *skb;
 const struct nlmsghdr *nlh;
 int (*dump)(struct sk_buff * skb,
     struct netlink_callback *cb);
 int (*done)(struct netlink_callback *cb);
 void *data;

 struct module *module;
 struct netlink_ext_ack *extack;
 u16 family;
 u16 answer_flags;
 u32 min_dump_alloc;
 unsigned int prev_seq, seq;
 int flags;
 bool strict_check;
 union {
  u8 ctx[48];




  long args[6];
 };
};





struct netlink_notify {
 struct net *net;
 u32 portid;
 int protocol;
};

struct nlmsghdr *
__nlmsg_put(struct sk_buff *skb, u32 portid, u32 seq, int type, int len, int flags);

struct netlink_dump_control {
 int (*start)(struct netlink_callback *);
 int (*dump)(struct sk_buff *skb, struct netlink_callback *);
 int (*done)(struct netlink_callback *);
 struct netlink_ext_ack *extack;
 void *data;
 struct module *module;
 u32 min_dump_alloc;
 int flags;
};

int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
    const struct nlmsghdr *nlh,
    struct netlink_dump_control *control);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
         const struct nlmsghdr *nlh,
         struct netlink_dump_control *control)
{
 if (!control->module)
  control->module = ((struct module *)0);

 return __netlink_dump_start(ssk, skb, nlh, control);
}

struct netlink_tap {
 struct net_device *dev;
 struct module *module;
 struct list_head list;
};

int netlink_add_tap(struct netlink_tap *nt);
int netlink_remove_tap(struct netlink_tap *nt);

bool __netlink_ns_capable(const struct netlink_skb_parms *nsp,
     struct user_namespace *ns, int cap);
bool netlink_ns_capable(const struct sk_buff *skb,
   struct user_namespace *ns, int cap);
bool netlink_capable(const struct sk_buff *skb, int cap);
bool netlink_net_capable(const struct sk_buff *skb, int cap);
struct sk_buff *netlink_alloc_large_skb(unsigned int size, int broadcast);
# 20 "../include/linux/ethtool.h" 2
# 1 "../include/uapi/linux/ethtool.h" 1
# 105 "../include/uapi/linux/ethtool.h"
struct ethtool_cmd {
 __u32 cmd;
 __u32 supported;
 __u32 advertising;
 __u16 speed;
 __u8 duplex;
 __u8 port;
 __u8 phy_address;
 __u8 transceiver;
 __u8 autoneg;
 __u8 mdio_support;
 __u32 maxtxpkt;
 __u32 maxrxpkt;
 __u16 speed_hi;
 __u8 eth_tp_mdix;
 __u8 eth_tp_mdix_ctrl;
 __u32 lp_advertising;
 __u32 reserved[2];
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ethtool_cmd_speed_set(struct ethtool_cmd *ep,
      __u32 speed)
{
 ep->speed = (__u16)(speed & 0xFFFF);
 ep->speed_hi = (__u16)(speed >> 16);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 ethtool_cmd_speed(const struct ethtool_cmd *ep)
{
 return (ep->speed_hi << 16) | ep->speed;
}
# 185 "../include/uapi/linux/ethtool.h"
struct ethtool_drvinfo {
 __u32 cmd;
 char driver[32];
 char version[32];
 char fw_version[32];
 char bus_info[32];
 char erom_version[32];
 char reserved2[12];
 __u32 n_priv_flags;
 __u32 n_stats;
 __u32 testinfo_len;
 __u32 eedump_len;
 __u32 regdump_len;
};
# 211 "../include/uapi/linux/ethtool.h"
struct ethtool_wolinfo {
 __u32 cmd;
 __u32 supported;
 __u32 wolopts;
 __u8 sopass[6];
};


struct ethtool_value {
 __u32 cmd;
 __u32 data;
};




enum tunable_id {
 ETHTOOL_ID_UNSPEC,
 ETHTOOL_RX_COPYBREAK,
 ETHTOOL_TX_COPYBREAK,
 ETHTOOL_PFC_PREVENTION_TOUT,
 ETHTOOL_TX_COPYBREAK_BUF_SIZE,




 __ETHTOOL_TUNABLE_COUNT,
};

enum tunable_type_id {
 ETHTOOL_TUNABLE_UNSPEC,
 ETHTOOL_TUNABLE_U8,
 ETHTOOL_TUNABLE_U16,
 ETHTOOL_TUNABLE_U32,
 ETHTOOL_TUNABLE_U64,
 ETHTOOL_TUNABLE_STRING,
 ETHTOOL_TUNABLE_S8,
 ETHTOOL_TUNABLE_S16,
 ETHTOOL_TUNABLE_S32,
 ETHTOOL_TUNABLE_S64,
};

struct ethtool_tunable {
 __u32 cmd;
 __u32 id;
 __u32 type_id;
 __u32 len;
 void *data[];
};
# 292 "../include/uapi/linux/ethtool.h"
enum phy_tunable_id {
 ETHTOOL_PHY_ID_UNSPEC,
 ETHTOOL_PHY_DOWNSHIFT,
 ETHTOOL_PHY_FAST_LINK_DOWN,
 ETHTOOL_PHY_EDPD,




 __ETHTOOL_PHY_TUNABLE_COUNT,
};
# 319 "../include/uapi/linux/ethtool.h"
struct ethtool_regs {
 __u32 cmd;
 __u32 version;
 __u32 len;
 __u8 data[];
};
# 344 "../include/uapi/linux/ethtool.h"
struct ethtool_eeprom {
 __u32 cmd;
 __u32 magic;
 __u32 offset;
 __u32 len;
 __u8 data[];
};
# 370 "../include/uapi/linux/ethtool.h"
struct ethtool_eee {
 __u32 cmd;
 __u32 supported;
 __u32 advertised;
 __u32 lp_advertised;
 __u32 eee_active;
 __u32 eee_enabled;
 __u32 tx_lpi_enabled;
 __u32 tx_lpi_timer;
 __u32 reserved[2];
};
# 393 "../include/uapi/linux/ethtool.h"
struct ethtool_modinfo {
 __u32 cmd;
 __u32 type;
 __u32 eeprom_len;
 __u32 reserved[8];
};
# 473 "../include/uapi/linux/ethtool.h"
struct ethtool_coalesce {
 __u32 cmd;
 __u32 rx_coalesce_usecs;
 __u32 rx_max_coalesced_frames;
 __u32 rx_coalesce_usecs_irq;
 __u32 rx_max_coalesced_frames_irq;
 __u32 tx_coalesce_usecs;
 __u32 tx_max_coalesced_frames;
 __u32 tx_coalesce_usecs_irq;
 __u32 tx_max_coalesced_frames_irq;
 __u32 stats_block_coalesce_usecs;
 __u32 use_adaptive_rx_coalesce;
 __u32 use_adaptive_tx_coalesce;
 __u32 pkt_rate_low;
 __u32 rx_coalesce_usecs_low;
 __u32 rx_max_coalesced_frames_low;
 __u32 tx_coalesce_usecs_low;
 __u32 tx_max_coalesced_frames_low;
 __u32 pkt_rate_high;
 __u32 rx_coalesce_usecs_high;
 __u32 rx_max_coalesced_frames_high;
 __u32 tx_coalesce_usecs_high;
 __u32 tx_max_coalesced_frames_high;
 __u32 rate_sample_interval;
};
# 524 "../include/uapi/linux/ethtool.h"
struct ethtool_ringparam {
 __u32 cmd;
 __u32 rx_max_pending;
 __u32 rx_mini_max_pending;
 __u32 rx_jumbo_max_pending;
 __u32 tx_max_pending;
 __u32 rx_pending;
 __u32 rx_mini_pending;
 __u32 rx_jumbo_pending;
 __u32 tx_pending;
};
# 552 "../include/uapi/linux/ethtool.h"
struct ethtool_channels {
 __u32 cmd;
 __u32 max_rx;
 __u32 max_tx;
 __u32 max_other;
 __u32 max_combined;
 __u32 rx_count;
 __u32 tx_count;
 __u32 other_count;
 __u32 combined_count;
};
# 586 "../include/uapi/linux/ethtool.h"
struct ethtool_pauseparam {
 __u32 cmd;
 __u32 autoneg;
 __u32 rx_pause;
 __u32 tx_pause;
};


enum ethtool_link_ext_state {
 ETHTOOL_LINK_EXT_STATE_AUTONEG,
 ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE,
 ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH,
 ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY,
 ETHTOOL_LINK_EXT_STATE_NO_CABLE,
 ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE,
 ETHTOOL_LINK_EXT_STATE_EEPROM_ISSUE,
 ETHTOOL_LINK_EXT_STATE_CALIBRATION_FAILURE,
 ETHTOOL_LINK_EXT_STATE_POWER_BUDGET_EXCEEDED,
 ETHTOOL_LINK_EXT_STATE_OVERHEAT,
 ETHTOOL_LINK_EXT_STATE_MODULE,
};


enum ethtool_link_ext_substate_autoneg {
 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_PARTNER_DETECTED = 1,
 ETHTOOL_LINK_EXT_SUBSTATE_AN_ACK_NOT_RECEIVED,
 ETHTOOL_LINK_EXT_SUBSTATE_AN_NEXT_PAGE_EXCHANGE_FAILED,
 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_PARTNER_DETECTED_FORCE_MODE,
 ETHTOOL_LINK_EXT_SUBSTATE_AN_FEC_MISMATCH_DURING_OVERRIDE,
 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_HCD,
};



enum ethtool_link_ext_substate_link_training {
 ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_FRAME_LOCK_NOT_ACQUIRED = 1,
 ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_LINK_INHIBIT_TIMEOUT,
 ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_LINK_PARTNER_DID_NOT_SET_RECEIVER_READY,
 ETHTOOL_LINK_EXT_SUBSTATE_LT_REMOTE_FAULT,
};



enum ethtool_link_ext_substate_link_logical_mismatch {
 ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_ACQUIRE_BLOCK_LOCK = 1,
 ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_ACQUIRE_AM_LOCK,
 ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_GET_ALIGN_STATUS,
 ETHTOOL_LINK_EXT_SUBSTATE_LLM_FC_FEC_IS_NOT_LOCKED,
 ETHTOOL_LINK_EXT_SUBSTATE_LLM_RS_FEC_IS_NOT_LOCKED,
};



enum ethtool_link_ext_substate_bad_signal_integrity {
 ETHTOOL_LINK_EXT_SUBSTATE_BSI_LARGE_NUMBER_OF_PHYSICAL_ERRORS = 1,
 ETHTOOL_LINK_EXT_SUBSTATE_BSI_UNSUPPORTED_RATE,
 ETHTOOL_LINK_EXT_SUBSTATE_BSI_SERDES_REFERENCE_CLOCK_LOST,
 ETHTOOL_LINK_EXT_SUBSTATE_BSI_SERDES_ALOS,
};


enum ethtool_link_ext_substate_cable_issue {
 ETHTOOL_LINK_EXT_SUBSTATE_CI_UNSUPPORTED_CABLE = 1,
 ETHTOOL_LINK_EXT_SUBSTATE_CI_CABLE_TEST_FAILURE,
};


enum ethtool_link_ext_substate_module {
 ETHTOOL_LINK_EXT_SUBSTATE_MODULE_CMIS_NOT_READY = 1,
};
# 687 "../include/uapi/linux/ethtool.h"
enum ethtool_stringset {
 ETH_SS_TEST = 0,
 ETH_SS_STATS,
 ETH_SS_PRIV_FLAGS,
 ETH_SS_NTUPLE_FILTERS,
 ETH_SS_FEATURES,
 ETH_SS_RSS_HASH_FUNCS,
 ETH_SS_TUNABLES,
 ETH_SS_PHY_STATS,
 ETH_SS_PHY_TUNABLES,
 ETH_SS_LINK_MODES,
 ETH_SS_MSG_CLASSES,
 ETH_SS_WOL_MODES,
 ETH_SS_SOF_TIMESTAMPING,
 ETH_SS_TS_TX_TYPES,
 ETH_SS_TS_RX_FILTERS,
 ETH_SS_UDP_TUNNEL_TYPES,
 ETH_SS_STATS_STD,
 ETH_SS_STATS_ETH_PHY,
 ETH_SS_STATS_ETH_MAC,
 ETH_SS_STATS_ETH_CTRL,
 ETH_SS_STATS_RMON,


 ETH_SS_COUNT
};
# 726 "../include/uapi/linux/ethtool.h"
enum ethtool_mac_stats_src {
 ETHTOOL_MAC_STATS_SRC_AGGREGATE,
 ETHTOOL_MAC_STATS_SRC_EMAC,
 ETHTOOL_MAC_STATS_SRC_PMAC,
};
# 740 "../include/uapi/linux/ethtool.h"
enum ethtool_module_power_mode_policy {
 ETHTOOL_MODULE_POWER_MODE_POLICY_HIGH = 1,
 ETHTOOL_MODULE_POWER_MODE_POLICY_AUTO,
};






enum ethtool_module_power_mode {
 ETHTOOL_MODULE_POWER_MODE_LOW = 1,
 ETHTOOL_MODULE_POWER_MODE_HIGH,
};
# 772 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_state {
 ETHTOOL_C33_PSE_EXT_STATE_ERROR_CONDITION = 1,
 ETHTOOL_C33_PSE_EXT_STATE_MR_MPS_VALID,
 ETHTOOL_C33_PSE_EXT_STATE_MR_PSE_ENABLE,
 ETHTOOL_C33_PSE_EXT_STATE_OPTION_DETECT_TED,
 ETHTOOL_C33_PSE_EXT_STATE_OPTION_VPORT_LIM,
 ETHTOOL_C33_PSE_EXT_STATE_OVLD_DETECTED,
 ETHTOOL_C33_PSE_EXT_STATE_PD_DLL_POWER_TYPE,
 ETHTOOL_C33_PSE_EXT_STATE_POWER_NOT_AVAILABLE,
 ETHTOOL_C33_PSE_EXT_STATE_SHORT_DETECTED,
};
# 797 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_mr_mps_valid {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_MR_MPS_VALID_DETECTED_UNDERLOAD = 1,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_MR_MPS_VALID_CONNECTION_OPEN,
};
# 830 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_error_condition {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_NON_EXISTING_PORT = 1,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_UNDEFINED_PORT,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_INTERNAL_HW_FAULT,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_COMM_ERROR_AFTER_FORCE_ON,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_UNKNOWN_PORT_STATUS,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_HOST_CRASH_TURN_OFF,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_HOST_CRASH_FORCE_SHUTDOWN,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_CONFIG_CHANGE,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_ERROR_CONDITION_DETECTED_OVER_TEMP,
};
# 852 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_mr_pse_enable {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_MR_PSE_ENABLE_DISABLE_PIN_ACTIVE = 1,
};
# 868 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_option_detect_ted {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OPTION_DETECT_TED_DET_IN_PROCESS = 1,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OPTION_DETECT_TED_CONNECTION_CHECK_ERROR,
};
# 887 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_option_vport_lim {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OPTION_VPORT_LIM_HIGH_VOLTAGE = 1,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OPTION_VPORT_LIM_LOW_VOLTAGE,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OPTION_VPORT_LIM_VOLTAGE_INJECTION,
};
# 903 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_ovld_detected {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_OVLD_DETECTED_OVERLOAD = 1,
};
# 925 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_power_not_available {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_POWER_NOT_AVAILABLE_BUDGET_EXCEEDED = 1,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_POWER_NOT_AVAILABLE_PORT_PW_LIMIT_EXCEEDS_CONTROLLER_BUDGET,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_POWER_NOT_AVAILABLE_PD_REQUEST_EXCEEDS_PORT_LIMIT,
 ETHTOOL_C33_PSE_EXT_SUBSTATE_POWER_NOT_AVAILABLE_HW_PW_LIMIT,
};
# 942 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_ext_substate_short_detected {
 ETHTOOL_C33_PSE_EXT_SUBSTATE_SHORT_DETECTED_SHORT_CONDITION = 1,
};







enum ethtool_pse_types {
 ETHTOOL_PSE_UNKNOWN = 1 << 0,
 ETHTOOL_PSE_PODL = 1 << 1,
 ETHTOOL_PSE_C33 = 1 << 2,
};
# 965 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_admin_state {
 ETHTOOL_C33_PSE_ADMIN_STATE_UNKNOWN = 1,
 ETHTOOL_C33_PSE_ADMIN_STATE_DISABLED,
 ETHTOOL_C33_PSE_ADMIN_STATE_ENABLED,
};
# 991 "../include/uapi/linux/ethtool.h"
enum ethtool_c33_pse_pw_d_status {
 ETHTOOL_C33_PSE_PW_D_STATUS_UNKNOWN = 1,
 ETHTOOL_C33_PSE_PW_D_STATUS_DISABLED,
 ETHTOOL_C33_PSE_PW_D_STATUS_SEARCHING,
 ETHTOOL_C33_PSE_PW_D_STATUS_DELIVERING,
 ETHTOOL_C33_PSE_PW_D_STATUS_TEST,
 ETHTOOL_C33_PSE_PW_D_STATUS_FAULT,
 ETHTOOL_C33_PSE_PW_D_STATUS_OTHERFAULT,
};
# 1009 "../include/uapi/linux/ethtool.h"
enum ethtool_podl_pse_admin_state {
 ETHTOOL_PODL_PSE_ADMIN_STATE_UNKNOWN = 1,
 ETHTOOL_PODL_PSE_ADMIN_STATE_DISABLED,
 ETHTOOL_PODL_PSE_ADMIN_STATE_ENABLED,
};
# 1036 "../include/uapi/linux/ethtool.h"
enum ethtool_podl_pse_pw_d_status {
 ETHTOOL_PODL_PSE_PW_D_STATUS_UNKNOWN = 1,
 ETHTOOL_PODL_PSE_PW_D_STATUS_DISABLED,
 ETHTOOL_PODL_PSE_PW_D_STATUS_SEARCHING,
 ETHTOOL_PODL_PSE_PW_D_STATUS_DELIVERING,
 ETHTOOL_PODL_PSE_PW_D_STATUS_SLEEP,
 ETHTOOL_PODL_PSE_PW_D_STATUS_IDLE,
 ETHTOOL_PODL_PSE_PW_D_STATUS_ERROR,
};
# 1062 "../include/uapi/linux/ethtool.h"
enum ethtool_mm_verify_status {
 ETHTOOL_MM_VERIFY_STATUS_UNKNOWN,
 ETHTOOL_MM_VERIFY_STATUS_INITIAL,
 ETHTOOL_MM_VERIFY_STATUS_VERIFYING,
 ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED,
 ETHTOOL_MM_VERIFY_STATUS_FAILED,
 ETHTOOL_MM_VERIFY_STATUS_DISABLED,
};
# 1082 "../include/uapi/linux/ethtool.h"
enum ethtool_module_fw_flash_status {
 ETHTOOL_MODULE_FW_FLASH_STATUS_STARTED = 1,
 ETHTOOL_MODULE_FW_FLASH_STATUS_IN_PROGRESS,
 ETHTOOL_MODULE_FW_FLASH_STATUS_COMPLETED,
 ETHTOOL_MODULE_FW_FLASH_STATUS_ERROR,
};
# 1101 "../include/uapi/linux/ethtool.h"
struct ethtool_gstrings {
 __u32 cmd;
 __u32 string_set;
 __u32 len;
 __u8 data[];
};
# 1126 "../include/uapi/linux/ethtool.h"
struct ethtool_sset_info {
 __u32 cmd;
 __u32 reserved;
 __u64 sset_mask;
 __u32 data[];
};
# 1143 "../include/uapi/linux/ethtool.h"
enum ethtool_test_flags {
 ETH_TEST_FL_OFFLINE = (1 << 0),
 ETH_TEST_FL_FAILED = (1 << 1),
 ETH_TEST_FL_EXTERNAL_LB = (1 << 2),
 ETH_TEST_FL_EXTERNAL_LB_DONE = (1 << 3),
};
# 1165 "../include/uapi/linux/ethtool.h"
struct ethtool_test {
 __u32 cmd;
 __u32 flags;
 __u32 reserved;
 __u32 len;
 __u64 data[];
};
# 1184 "../include/uapi/linux/ethtool.h"
struct ethtool_stats {
 __u32 cmd;
 __u32 n_stats;
 __u64 data[];
};
# 1201 "../include/uapi/linux/ethtool.h"
struct ethtool_perm_addr {
 __u32 cmd;
 __u32 size;
 __u8 data[];
};
# 1216 "../include/uapi/linux/ethtool.h"
enum ethtool_flags {
 ETH_FLAG_TXVLAN = (1 << 7),
 ETH_FLAG_RXVLAN = (1 << 8),
 ETH_FLAG_LRO = (1 << 15),
 ETH_FLAG_NTUPLE = (1 << 27),
 ETH_FLAG_RXHASH = (1 << 28),
};
# 1240 "../include/uapi/linux/ethtool.h"
struct ethtool_tcpip4_spec {
 __be32 ip4src;
 __be32 ip4dst;
 __be16 psrc;
 __be16 pdst;
 __u8 tos;
};
# 1257 "../include/uapi/linux/ethtool.h"
struct ethtool_ah_espip4_spec {
 __be32 ip4src;
 __be32 ip4dst;
 __be32 spi;
 __u8 tos;
};
# 1275 "../include/uapi/linux/ethtool.h"
struct ethtool_usrip4_spec {
 __be32 ip4src;
 __be32 ip4dst;
 __be32 l4_4_bytes;
 __u8 tos;
 __u8 ip_ver;
 __u8 proto;
};
# 1294 "../include/uapi/linux/ethtool.h"
struct ethtool_tcpip6_spec {
 __be32 ip6src[4];
 __be32 ip6dst[4];
 __be16 psrc;
 __be16 pdst;
 __u8 tclass;
};
# 1311 "../include/uapi/linux/ethtool.h"
struct ethtool_ah_espip6_spec {
 __be32 ip6src[4];
 __be32 ip6dst[4];
 __be32 spi;
 __u8 tclass;
};
# 1326 "../include/uapi/linux/ethtool.h"
struct ethtool_usrip6_spec {
 __be32 ip6src[4];
 __be32 ip6dst[4];
 __be32 l4_4_bytes;
 __u8 tclass;
 __u8 l4_proto;
};

union ethtool_flow_union {
 struct ethtool_tcpip4_spec tcp_ip4_spec;
 struct ethtool_tcpip4_spec udp_ip4_spec;
 struct ethtool_tcpip4_spec sctp_ip4_spec;
 struct ethtool_ah_espip4_spec ah_ip4_spec;
 struct ethtool_ah_espip4_spec esp_ip4_spec;
 struct ethtool_usrip4_spec usr_ip4_spec;
 struct ethtool_tcpip6_spec tcp_ip6_spec;
 struct ethtool_tcpip6_spec udp_ip6_spec;
 struct ethtool_tcpip6_spec sctp_ip6_spec;
 struct ethtool_ah_espip6_spec ah_ip6_spec;
 struct ethtool_ah_espip6_spec esp_ip6_spec;
 struct ethtool_usrip6_spec usr_ip6_spec;
 struct ethhdr ether_spec;
 __u8 hdata[52];
};
# 1363 "../include/uapi/linux/ethtool.h"
struct ethtool_flow_ext {
 __u8 padding[2];
 unsigned char h_dest[6];
 __be16 vlan_etype;
 __be16 vlan_tci;
 __be32 data[2];
};
# 1388 "../include/uapi/linux/ethtool.h"
struct ethtool_rx_flow_spec {
 __u32 flow_type;
 union ethtool_flow_union h_u;
 struct ethtool_flow_ext h_ext;
 union ethtool_flow_union m_u;
 struct ethtool_flow_ext m_ext;
 __u64 ring_cookie;
 __u32 location;
};
# 1412 "../include/uapi/linux/ethtool.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u64 ethtool_get_flow_spec_ring(__u64 ring_cookie)
{
 return 0x00000000FFFFFFFFLL & ring_cookie;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
{
 return (0x000000FF00000000LL & ring_cookie) >>
    32;
}
# 1484 "../include/uapi/linux/ethtool.h"
struct ethtool_rxnfc {
 __u32 cmd;
 __u32 flow_type;
 __u64 data;
 struct ethtool_rx_flow_spec fs;
 union {
  __u32 rule_cnt;
  __u32 rss_context;
 };
 __u32 rule_locs[];
};
# 1510 "../include/uapi/linux/ethtool.h"
struct ethtool_rxfh_indir {
 __u32 cmd;
 __u32 size;
 __u32 ring_index[];
};
# 1548 "../include/uapi/linux/ethtool.h"
struct ethtool_rxfh {
 __u32 cmd;
 __u32 rss_context;
 __u32 indir_size;
 __u32 key_size;
 __u8 hfunc;
 __u8 input_xfrm;
 __u8 rsvd8[2];
 __u32 rsvd32;
 __u32 rss_config[];
};
# 1578 "../include/uapi/linux/ethtool.h"
struct ethtool_rx_ntuple_flow_spec {
 __u32 flow_type;
 union {
  struct ethtool_tcpip4_spec tcp_ip4_spec;
  struct ethtool_tcpip4_spec udp_ip4_spec;
  struct ethtool_tcpip4_spec sctp_ip4_spec;
  struct ethtool_ah_espip4_spec ah_ip4_spec;
  struct ethtool_ah_espip4_spec esp_ip4_spec;
  struct ethtool_usrip4_spec usr_ip4_spec;
  struct ethhdr ether_spec;
  __u8 hdata[72];
 } h_u, m_u;

 __u16 vlan_tag;
 __u16 vlan_tag_mask;
 __u64 data;
 __u64 data_mask;

 __s32 action;


};






struct ethtool_rx_ntuple {
 __u32 cmd;
 struct ethtool_rx_ntuple_flow_spec fs;
};


enum ethtool_flash_op_type {
 ETHTOOL_FLASH_ALL_REGIONS = 0,
};


struct ethtool_flash {
 __u32 cmd;
 __u32 region;
 char data[128];
};
# 1637 "../include/uapi/linux/ethtool.h"
struct ethtool_dump {
 __u32 cmd;
 __u32 version;
 __u32 flag;
 __u32 len;
 __u8 data[];
};
# 1656 "../include/uapi/linux/ethtool.h"
struct ethtool_get_features_block {
 __u32 available;
 __u32 requested;
 __u32 active;
 __u32 never_changed;
};
# 1671 "../include/uapi/linux/ethtool.h"
struct ethtool_gfeatures {
 __u32 cmd;
 __u32 size;
 struct ethtool_get_features_block features[];
};






struct ethtool_set_features_block {
 __u32 valid;
 __u32 requested;
};







struct ethtool_sfeatures {
 __u32 cmd;
 __u32 size;
 struct ethtool_set_features_block features[];
};
# 1719 "../include/uapi/linux/ethtool.h"
struct ethtool_ts_info {
 __u32 cmd;
 __u32 so_timestamping;
 __s32 phc_index;
 __u32 tx_types;
 __u32 tx_reserved[3];
 __u32 rx_filters;
 __u32 rx_reserved[3];
};
# 1754 "../include/uapi/linux/ethtool.h"
enum ethtool_sfeatures_retval_bits {
 ETHTOOL_F_UNSUPPORTED__BIT,
 ETHTOOL_F_WISH__BIT,
 ETHTOOL_F_COMPAT__BIT,
};
# 1773 "../include/uapi/linux/ethtool.h"
struct ethtool_per_queue_op {
 __u32 cmd;
 __u32 sub_command;
 __u32 queue_mask[(((4096) + (32) - 1) / (32))];
 char data[];
};
# 1809 "../include/uapi/linux/ethtool.h"
struct ethtool_fecparam {
 __u32 cmd;

 __u32 active_fec;
 __u32 fec;
 __u32 reserved;
};
# 1830 "../include/uapi/linux/ethtool.h"
enum ethtool_fec_config_bits {
 ETHTOOL_FEC_NONE_BIT,
 ETHTOOL_FEC_AUTO_BIT,
 ETHTOOL_FEC_OFF_BIT,
 ETHTOOL_FEC_RS_BIT,
 ETHTOOL_FEC_BASER_BIT,
 ETHTOOL_FEC_LLRS_BIT,
};
# 1946 "../include/uapi/linux/ethtool.h"
enum ethtool_link_mode_bit_indices {
 ETHTOOL_LINK_MODE_10baseT_Half_BIT = 0,
 ETHTOOL_LINK_MODE_10baseT_Full_BIT = 1,
 ETHTOOL_LINK_MODE_100baseT_Half_BIT = 2,
 ETHTOOL_LINK_MODE_100baseT_Full_BIT = 3,
 ETHTOOL_LINK_MODE_1000baseT_Half_BIT = 4,
 ETHTOOL_LINK_MODE_1000baseT_Full_BIT = 5,
 ETHTOOL_LINK_MODE_Autoneg_BIT = 6,
 ETHTOOL_LINK_MODE_TP_BIT = 7,
 ETHTOOL_LINK_MODE_AUI_BIT = 8,
 ETHTOOL_LINK_MODE_MII_BIT = 9,
 ETHTOOL_LINK_MODE_FIBRE_BIT = 10,
 ETHTOOL_LINK_MODE_BNC_BIT = 11,
 ETHTOOL_LINK_MODE_10000baseT_Full_BIT = 12,
 ETHTOOL_LINK_MODE_Pause_BIT = 13,
 ETHTOOL_LINK_MODE_Asym_Pause_BIT = 14,
 ETHTOOL_LINK_MODE_2500baseX_Full_BIT = 15,
 ETHTOOL_LINK_MODE_Backplane_BIT = 16,
 ETHTOOL_LINK_MODE_1000baseKX_Full_BIT = 17,
 ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT = 18,
 ETHTOOL_LINK_MODE_10000baseKR_Full_BIT = 19,
 ETHTOOL_LINK_MODE_10000baseR_FEC_BIT = 20,
 ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT = 21,
 ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT = 22,
 ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT = 23,
 ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT = 24,
 ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT = 25,
 ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT = 26,
 ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT = 27,
 ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT = 28,
 ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT = 29,
 ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT = 30,
 ETHTOOL_LINK_MODE_25000baseCR_Full_BIT = 31,







 ETHTOOL_LINK_MODE_25000baseKR_Full_BIT = 32,
 ETHTOOL_LINK_MODE_25000baseSR_Full_BIT = 33,
 ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT = 34,
 ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT = 35,
 ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT = 36,
 ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT = 37,
 ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT = 38,
 ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT = 39,
 ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT = 40,
 ETHTOOL_LINK_MODE_1000baseX_Full_BIT = 41,
 ETHTOOL_LINK_MODE_10000baseCR_Full_BIT = 42,
 ETHTOOL_LINK_MODE_10000baseSR_Full_BIT = 43,
 ETHTOOL_LINK_MODE_10000baseLR_Full_BIT = 44,
 ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT = 45,
 ETHTOOL_LINK_MODE_10000baseER_Full_BIT = 46,
 ETHTOOL_LINK_MODE_2500baseT_Full_BIT = 47,
 ETHTOOL_LINK_MODE_5000baseT_Full_BIT = 48,

 ETHTOOL_LINK_MODE_FEC_NONE_BIT = 49,
 ETHTOOL_LINK_MODE_FEC_RS_BIT = 50,
 ETHTOOL_LINK_MODE_FEC_BASER_BIT = 51,
 ETHTOOL_LINK_MODE_50000baseKR_Full_BIT = 52,
 ETHTOOL_LINK_MODE_50000baseSR_Full_BIT = 53,
 ETHTOOL_LINK_MODE_50000baseCR_Full_BIT = 54,
 ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT = 55,
 ETHTOOL_LINK_MODE_50000baseDR_Full_BIT = 56,
 ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT = 57,
 ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT = 58,
 ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT = 59,
 ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT = 60,
 ETHTOOL_LINK_MODE_100000baseDR2_Full_BIT = 61,
 ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT = 62,
 ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT = 63,
 ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT = 64,
 ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT = 65,
 ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT = 66,
 ETHTOOL_LINK_MODE_100baseT1_Full_BIT = 67,
 ETHTOOL_LINK_MODE_1000baseT1_Full_BIT = 68,
 ETHTOOL_LINK_MODE_400000baseKR8_Full_BIT = 69,
 ETHTOOL_LINK_MODE_400000baseSR8_Full_BIT = 70,
 ETHTOOL_LINK_MODE_400000baseLR8_ER8_FR8_Full_BIT = 71,
 ETHTOOL_LINK_MODE_400000baseDR8_Full_BIT = 72,
 ETHTOOL_LINK_MODE_400000baseCR8_Full_BIT = 73,
 ETHTOOL_LINK_MODE_FEC_LLRS_BIT = 74,
 ETHTOOL_LINK_MODE_100000baseKR_Full_BIT = 75,
 ETHTOOL_LINK_MODE_100000baseSR_Full_BIT = 76,
 ETHTOOL_LINK_MODE_100000baseLR_ER_FR_Full_BIT = 77,
 ETHTOOL_LINK_MODE_100000baseCR_Full_BIT = 78,
 ETHTOOL_LINK_MODE_100000baseDR_Full_BIT = 79,
 ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT = 80,
 ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT = 81,
 ETHTOOL_LINK_MODE_200000baseLR2_ER2_FR2_Full_BIT = 82,
 ETHTOOL_LINK_MODE_200000baseDR2_Full_BIT = 83,
 ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT = 84,
 ETHTOOL_LINK_MODE_400000baseKR4_Full_BIT = 85,
 ETHTOOL_LINK_MODE_400000baseSR4_Full_BIT = 86,
 ETHTOOL_LINK_MODE_400000baseLR4_ER4_FR4_Full_BIT = 87,
 ETHTOOL_LINK_MODE_400000baseDR4_Full_BIT = 88,
 ETHTOOL_LINK_MODE_400000baseCR4_Full_BIT = 89,
 ETHTOOL_LINK_MODE_100baseFX_Half_BIT = 90,
 ETHTOOL_LINK_MODE_100baseFX_Full_BIT = 91,
 ETHTOOL_LINK_MODE_10baseT1L_Full_BIT = 92,
 ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT = 93,
 ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT = 94,
 ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT = 95,
 ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT = 96,
 ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT = 97,
 ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT = 98,
 ETHTOOL_LINK_MODE_10baseT1S_Full_BIT = 99,
 ETHTOOL_LINK_MODE_10baseT1S_Half_BIT = 100,
 ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT = 101,
 ETHTOOL_LINK_MODE_10baseT1BRR_Full_BIT = 102,


 __ETHTOOL_LINK_MODE_MASK_NBITS
};
# 2174 "../include/uapi/linux/ethtool.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ethtool_validate_speed(__u32 speed)
{
 return speed <= ((int)(~0U >> 1)) || speed == (__u32)-1;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ethtool_validate_duplex(__u8 duplex)
{
 switch (duplex) {
 case 0x00:
 case 0x01:
 case 0xff:
  return 1;
 }

 return 0;
}
# 2384 "../include/uapi/linux/ethtool.h"
enum ethtool_reset_flags {





 ETH_RESET_MGMT = 1 << 0,
 ETH_RESET_IRQ = 1 << 1,
 ETH_RESET_DMA = 1 << 2,
 ETH_RESET_FILTER = 1 << 3,
 ETH_RESET_OFFLOAD = 1 << 4,
 ETH_RESET_MAC = 1 << 5,
 ETH_RESET_PHY = 1 << 6,
 ETH_RESET_RAM = 1 << 7,

 ETH_RESET_AP = 1 << 8,

 ETH_RESET_DEDICATED = 0x0000ffff,

 ETH_RESET_ALL = 0xffffffff,

};
# 2513 "../include/uapi/linux/ethtool.h"
struct ethtool_link_settings {
 __u32 cmd;
 __u32 speed;
 __u8 duplex;
 __u8 port;
 __u8 phy_address;
 __u8 autoneg;
 __u8 mdio_support;
 __u8 eth_tp_mdix;
 __u8 eth_tp_mdix_ctrl;
 __s8 link_mode_masks_nwords;
 __u8 transceiver;
 __u8 master_slave_cfg;
 __u8 master_slave_state;
 __u8 rate_matching;
 __u32 reserved[7];
 __u32 link_mode_masks[];





};
# 21 "../include/linux/ethtool.h" 2
# 1 "../include/uapi/linux/net_tstamp.h" 1
# 17 "../include/uapi/linux/net_tstamp.h"
enum {
 SOF_TIMESTAMPING_TX_HARDWARE = (1<<0),
 SOF_TIMESTAMPING_TX_SOFTWARE = (1<<1),
 SOF_TIMESTAMPING_RX_HARDWARE = (1<<2),
 SOF_TIMESTAMPING_RX_SOFTWARE = (1<<3),
 SOF_TIMESTAMPING_SOFTWARE = (1<<4),
 SOF_TIMESTAMPING_SYS_HARDWARE = (1<<5),
 SOF_TIMESTAMPING_RAW_HARDWARE = (1<<6),
 SOF_TIMESTAMPING_OPT_ID = (1<<7),
 SOF_TIMESTAMPING_TX_SCHED = (1<<8),
 SOF_TIMESTAMPING_TX_ACK = (1<<9),
 SOF_TIMESTAMPING_OPT_CMSG = (1<<10),
 SOF_TIMESTAMPING_OPT_TSONLY = (1<<11),
 SOF_TIMESTAMPING_OPT_STATS = (1<<12),
 SOF_TIMESTAMPING_OPT_PKTINFO = (1<<13),
 SOF_TIMESTAMPING_OPT_TX_SWHW = (1<<14),
 SOF_TIMESTAMPING_BIND_PHC = (1 << 15),
 SOF_TIMESTAMPING_OPT_ID_TCP = (1 << 16),

 SOF_TIMESTAMPING_LAST = SOF_TIMESTAMPING_OPT_ID_TCP,
 SOF_TIMESTAMPING_MASK = (SOF_TIMESTAMPING_LAST - 1) |
     SOF_TIMESTAMPING_LAST
};
# 58 "../include/uapi/linux/net_tstamp.h"
struct so_timestamping {
 int flags;
 int bind_phc;
};
# 76 "../include/uapi/linux/net_tstamp.h"
struct hwtstamp_config {
 int flags;
 int tx_type;
 int rx_filter;
};


enum hwtstamp_flags {






 HWTSTAMP_FLAG_BONDED_PHC_INDEX = (1<<0),


 HWTSTAMP_FLAG_LAST = HWTSTAMP_FLAG_BONDED_PHC_INDEX,
 HWTSTAMP_FLAG_MASK = (HWTSTAMP_FLAG_LAST - 1) | HWTSTAMP_FLAG_LAST
};


enum hwtstamp_tx_types {





 HWTSTAMP_TX_OFF,







 HWTSTAMP_TX_ON,
# 121 "../include/uapi/linux/net_tstamp.h"
 HWTSTAMP_TX_ONESTEP_SYNC,







 HWTSTAMP_TX_ONESTEP_P2P,


 __HWTSTAMP_TX_CNT
};


enum hwtstamp_rx_filters {

 HWTSTAMP_FILTER_NONE,


 HWTSTAMP_FILTER_ALL,


 HWTSTAMP_FILTER_SOME,


 HWTSTAMP_FILTER_PTP_V1_L4_EVENT,

 HWTSTAMP_FILTER_PTP_V1_L4_SYNC,

 HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ,

 HWTSTAMP_FILTER_PTP_V2_L4_EVENT,

 HWTSTAMP_FILTER_PTP_V2_L4_SYNC,

 HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ,


 HWTSTAMP_FILTER_PTP_V2_L2_EVENT,

 HWTSTAMP_FILTER_PTP_V2_L2_SYNC,

 HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ,


 HWTSTAMP_FILTER_PTP_V2_EVENT,

 HWTSTAMP_FILTER_PTP_V2_SYNC,

 HWTSTAMP_FILTER_PTP_V2_DELAY_REQ,


 HWTSTAMP_FILTER_NTP_ALL,


 __HWTSTAMP_FILTER_CNT
};


struct scm_ts_pktinfo {
 __u32 if_index;
 __u32 pkt_length;
 __u32 reserved[2];
};





enum txtime_flags {
 SOF_TXTIME_DEADLINE_MODE = (1 << 0),
 SOF_TXTIME_REPORT_ERRORS = (1 << 1),

 SOF_TXTIME_FLAGS_LAST = SOF_TXTIME_REPORT_ERRORS,
 SOF_TXTIME_FLAGS_MASK = (SOF_TXTIME_FLAGS_LAST - 1) |
     SOF_TXTIME_FLAGS_LAST
};

struct sock_txtime {
 __kernel_clockid_t clockid;
 __u32 flags;
};
# 22 "../include/linux/ethtool.h" 2

struct compat_ethtool_rx_flow_spec {
 u32 flow_type;
 union ethtool_flow_union h_u;
 struct ethtool_flow_ext h_ext;
 union ethtool_flow_union m_u;
 struct ethtool_flow_ext m_ext;
 compat_u64 ring_cookie;
 u32 location;
};

struct compat_ethtool_rxnfc {
 u32 cmd;
 u32 flow_type;
 compat_u64 data;
 struct compat_ethtool_rx_flow_spec fs;
 u32 rule_cnt;
 u32 rule_locs[];
};
# 53 "../include/linux/ethtool.h"
enum ethtool_phys_id_state {
 ETHTOOL_ID_INACTIVE,
 ETHTOOL_ID_ACTIVE,
 ETHTOOL_ID_ON,
 ETHTOOL_ID_OFF
};

enum {
 ETH_RSS_HASH_TOP_BIT,
 ETH_RSS_HASH_XOR_BIT,
 ETH_RSS_HASH_CRC32_BIT,





 ETH_RSS_HASH_FUNCS_COUNT
};
# 82 "../include/linux/ethtool.h"
struct kernel_ethtool_ringparam {
 u32 rx_buf_len;
 u8 tcp_data_split;
 u8 tx_push;
 u8 rx_push;
 u32 cqe_size;
 u32 tx_push_buf_len;
 u32 tx_push_buf_max_len;
};
# 101 "../include/linux/ethtool.h"
enum ethtool_supported_ring_param {
 ETHTOOL_RING_USE_RX_BUF_LEN = ((((1UL))) << (0)),
 ETHTOOL_RING_USE_CQE_SIZE = ((((1UL))) << (1)),
 ETHTOOL_RING_USE_TX_PUSH = ((((1UL))) << (2)),
 ETHTOOL_RING_USE_RX_PUSH = ((((1UL))) << (3)),
 ETHTOOL_RING_USE_TX_PUSH_BUF_LEN = ((((1UL))) << (4)),
 ETHTOOL_RING_USE_TCP_DATA_SPLIT = ((((1UL))) << (5)),
};
# 120 "../include/linux/ethtool.h"
struct net_device;
struct netlink_ext_ack;


struct ethtool_link_ext_state_info {
 enum ethtool_link_ext_state link_ext_state;
 union {
  enum ethtool_link_ext_substate_autoneg autoneg;
  enum ethtool_link_ext_substate_link_training link_training;
  enum ethtool_link_ext_substate_link_logical_mismatch link_logical_mismatch;
  enum ethtool_link_ext_substate_bad_signal_integrity bad_signal_integrity;
  enum ethtool_link_ext_substate_cable_issue cable_issue;
  enum ethtool_link_ext_substate_module module;
  u32 __link_ext_substate;
 };
};

struct ethtool_link_ext_stats {
# 148 "../include/linux/ethtool.h"
 u64 link_down_events;
};
# 158 "../include/linux/ethtool.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ethtool_rxfh_indir_default(u32 index, u32 n_rx_rings)
{
 return index % n_rx_rings;
}
# 174 "../include/linux/ethtool.h"
struct ethtool_rxfh_context {
 u32 indir_size;
 u32 key_size;
 u16 priv_size;
 u8 hfunc;
 u8 input_xfrm;
 u8 indir_configured:1;
 u8 key_configured:1;



 u32 key_off;
 u8 data[] __attribute__((__aligned__(sizeof(void *))));
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ethtool_rxfh_context_priv(struct ethtool_rxfh_context *ctx)
{
 return ctx->data;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 *ethtool_rxfh_context_indir(struct ethtool_rxfh_context *ctx)
{
 return (u32 *)(ctx->data + ((((ctx->priv_size)) + ((__typeof__((ctx->priv_size)))((sizeof(u32))) - 1)) & ~((__typeof__((ctx->priv_size)))((sizeof(u32))) - 1)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 *ethtool_rxfh_context_key(struct ethtool_rxfh_context *ctx)
{
 return &ctx->data[ctx->key_off];
}

void ethtool_rxfh_context_lost(struct net_device *dev, u32 context_id);
# 213 "../include/linux/ethtool.h"
struct ethtool_link_ksettings {
 struct ethtool_link_settings base;
 struct {
  unsigned long supported[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
  unsigned long advertising[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
  unsigned long lp_advertising[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 } link_modes;
 u32 lanes;
};
# 265 "../include/linux/ethtool.h"
extern int
__ethtool_get_link_ksettings(struct net_device *dev,
        struct ethtool_link_ksettings *link_ksettings);

struct ethtool_keee {
 unsigned long supported[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 unsigned long advertised[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 unsigned long lp_advertised[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 u32 tx_lpi_timer;
 bool tx_lpi_enabled;
 bool eee_active;
 bool eee_enabled;
};

struct kernel_ethtool_coalesce {
 u8 use_cqe_mode_tx;
 u8 use_cqe_mode_rx;
 u32 tx_aggr_max_bytes;
 u32 tx_aggr_max_frames;
 u32 tx_aggr_time_usecs;
};
# 294 "../include/linux/ethtool.h"
void ethtool_intersect_link_masks(struct ethtool_link_ksettings *dst,
      struct ethtool_link_ksettings *src);

void ethtool_convert_legacy_u32_to_link_mode(unsigned long *dst,
          u32 legacy_u32);


bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32,
         const unsigned long *src);
# 368 "../include/linux/ethtool.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ethtool_stats_init(u64 *stats, unsigned int n)
{
 while (n--)
  stats[n] = (~0ULL);
}




struct ethtool_eth_mac_stats {
 enum ethtool_mac_stats_src src;
 union { struct { u64 FramesTransmittedOK; u64 SingleCollisionFrames; u64 MultipleCollisionFrames; u64 FramesReceivedOK; u64 FrameCheckSequenceErrors; u64 AlignmentErrors; u64 OctetsTransmittedOK; u64 FramesWithDeferredXmissions; u64 LateCollisions; u64 FramesAbortedDueToXSColls; u64 FramesLostDueToIntMACXmitError; u64 CarrierSenseErrors; u64 OctetsReceivedOK; u64 FramesLostDueToIntMACRcvError; u64 MulticastFramesXmittedOK; u64 BroadcastFramesXmittedOK; u64 FramesWithExcessiveDeferral; u64 MulticastFramesReceivedOK; u64 BroadcastFramesReceivedOK; u64 InRangeLengthErrors; u64 OutOfRangeLengthField; u64 FrameTooLongErrors; } ; struct { u64 FramesTransmittedOK; u64 SingleCollisionFrames; u64 MultipleCollisionFrames; u64 FramesReceivedOK; u64 FrameCheckSequenceErrors; u64 AlignmentErrors; u64 OctetsTransmittedOK; u64 FramesWithDeferredXmissions; u64 LateCollisions; u64 FramesAbortedDueToXSColls; u64 FramesLostDueToIntMACXmitError; u64 CarrierSenseErrors; u64 OctetsReceivedOK; u64 FramesLostDueToIntMACRcvError; u64 MulticastFramesXmittedOK; u64 BroadcastFramesXmittedOK; u64 FramesWithExcessiveDeferral; u64 MulticastFramesReceivedOK; u64 BroadcastFramesReceivedOK; u64 InRangeLengthErrors; u64 OutOfRangeLengthField; u64 FrameTooLongErrors; } stats; } ;
# 403 "../include/linux/ethtool.h"
};




struct ethtool_eth_phy_stats {
 enum ethtool_mac_stats_src src;
 union {
  struct {
   u64 SymbolErrorDuringCarrier;
  };
  struct {
   u64 SymbolErrorDuringCarrier;
  } stats;
 };


};




struct ethtool_eth_ctrl_stats {
 enum ethtool_mac_stats_src src;
 union {
  struct {
   u64 MACControlFramesTransmitted;
   u64 MACControlFramesReceived;
   u64 UnsupportedOpcodesReceived;
  };
  struct {
   u64 MACControlFramesTransmitted;
   u64 MACControlFramesReceived;
   u64 UnsupportedOpcodesReceived;
  } stats;
 };




};
# 443 "../include/linux/ethtool.h"
struct ethtool_pause_stats {
 enum ethtool_mac_stats_src src;
 union {
  struct {
   u64 tx_pause_frames;
   u64 rx_pause_frames;
  };
  struct {
   u64 tx_pause_frames;
   u64 rx_pause_frames;
  } stats;
 };



};
# 479 "../include/linux/ethtool.h"
struct ethtool_fec_stats {
 struct ethtool_fec_stat {
  u64 total;
  u64 lanes[8];
 } corrected_blocks, uncorrectable_blocks, corrected_bits;
};






struct ethtool_rmon_hist_range {
 u16 low;
 u16 high;
};
# 516 "../include/linux/ethtool.h"
struct ethtool_rmon_stats {
 enum ethtool_mac_stats_src src;
 union {
  struct {
   u64 undersize_pkts;
   u64 oversize_pkts;
   u64 fragments;
   u64 jabbers;
   u64 hist[10];
   u64 hist_tx[10];
  };
  struct {
   u64 undersize_pkts;
   u64 oversize_pkts;
   u64 fragments;
   u64 jabbers;
   u64 hist[10];
   u64 hist_tx[10];
  } stats;
 };
# 527 "../include/linux/ethtool.h"
};
# 541 "../include/linux/ethtool.h"
struct ethtool_ts_stats {
 union {
  struct {
   u64 pkts;
   u64 lost;
   u64 err;
  };
  struct {
   u64 pkts;
   u64 lost;
   u64 err;
  } tx_stats;
 };




};
# 564 "../include/linux/ethtool.h"
struct ethtool_module_eeprom {
 u32 offset;
 u32 length;
 u8 page;
 u8 bank;
 u8 i2c_address;
 u8 *data;
};







struct ethtool_module_power_mode_params {
 enum ethtool_module_power_mode_policy policy;
 enum ethtool_module_power_mode mode;
};
# 623 "../include/linux/ethtool.h"
struct ethtool_mm_state {
 u32 verify_time;
 u32 max_verify_time;
 enum ethtool_mm_verify_status verify_status;
 bool tx_enabled;
 bool tx_active;
 bool pmac_enabled;
 bool verify_enabled;
 u32 tx_min_frag_size;
 u32 rx_min_frag_size;
};
# 643 "../include/linux/ethtool.h"
struct ethtool_mm_cfg {
 u32 verify_time;
 bool verify_enabled;
 bool tx_enabled;
 bool pmac_enabled;
 u32 tx_min_frag_size;
};
# 667 "../include/linux/ethtool.h"
struct ethtool_mm_stats {
 u64 MACMergeFrameAssErrorCount;
 u64 MACMergeFrameSmdErrorCount;
 u64 MACMergeFrameAssOkCount;
 u64 MACMergeFragCountRx;
 u64 MACMergeFragCountTx;
 u64 MACMergeHoldCount;
};
# 698 "../include/linux/ethtool.h"
struct ethtool_rxfh_param {
 u8 hfunc;
 u32 indir_size;
 u32 *indir;
 u32 key_size;
 u8 *key;
 u32 rss_context;
 u8 rss_delete;
 u8 input_xfrm;
};
# 717 "../include/linux/ethtool.h"
struct kernel_ethtool_ts_info {
 u32 cmd;
 u32 so_timestamping;
 int phc_index;
 enum hwtstamp_tx_types tx_types;
 enum hwtstamp_rx_filters rx_filters;
};
# 950 "../include/linux/ethtool.h"
struct ethtool_ops {
 u32 cap_link_lanes_supported:1;
 u32 cap_rss_ctx_supported:1;
 u32 cap_rss_sym_xor_supported:1;
 u32 rxfh_indir_space;
 u16 rxfh_key_space;
 u16 rxfh_priv_size;
 u32 rxfh_max_context_id;
 u32 supported_coalesce_params;
 u32 supported_ring_params;
 void (*get_drvinfo)(struct net_device *, struct ethtool_drvinfo *);
 int (*get_regs_len)(struct net_device *);
 void (*get_regs)(struct net_device *, struct ethtool_regs *, void *);
 void (*get_wol)(struct net_device *, struct ethtool_wolinfo *);
 int (*set_wol)(struct net_device *, struct ethtool_wolinfo *);
 u32 (*get_msglevel)(struct net_device *);
 void (*set_msglevel)(struct net_device *, u32);
 int (*nway_reset)(struct net_device *);
 u32 (*get_link)(struct net_device *);
 int (*get_link_ext_state)(struct net_device *,
          struct ethtool_link_ext_state_info *);
 void (*get_link_ext_stats)(struct net_device *dev,
          struct ethtool_link_ext_stats *stats);
 int (*get_eeprom_len)(struct net_device *);
 int (*get_eeprom)(struct net_device *,
         struct ethtool_eeprom *, u8 *);
 int (*set_eeprom)(struct net_device *,
         struct ethtool_eeprom *, u8 *);
 int (*get_coalesce)(struct net_device *,
    struct ethtool_coalesce *,
    struct kernel_ethtool_coalesce *,
    struct netlink_ext_ack *);
 int (*set_coalesce)(struct net_device *,
    struct ethtool_coalesce *,
    struct kernel_ethtool_coalesce *,
    struct netlink_ext_ack *);
 void (*get_ringparam)(struct net_device *,
     struct ethtool_ringparam *,
     struct kernel_ethtool_ringparam *,
     struct netlink_ext_ack *);
 int (*set_ringparam)(struct net_device *,
     struct ethtool_ringparam *,
     struct kernel_ethtool_ringparam *,
     struct netlink_ext_ack *);
 void (*get_pause_stats)(struct net_device *dev,
       struct ethtool_pause_stats *pause_stats);
 void (*get_pauseparam)(struct net_device *,
      struct ethtool_pauseparam*);
 int (*set_pauseparam)(struct net_device *,
      struct ethtool_pauseparam*);
 void (*self_test)(struct net_device *, struct ethtool_test *, u64 *);
 void (*get_strings)(struct net_device *, u32 stringset, u8 *);
 int (*set_phys_id)(struct net_device *, enum ethtool_phys_id_state);
 void (*get_ethtool_stats)(struct net_device *,
         struct ethtool_stats *, u64 *);
 int (*begin)(struct net_device *);
 void (*complete)(struct net_device *);
 u32 (*get_priv_flags)(struct net_device *);
 int (*set_priv_flags)(struct net_device *, u32);
 int (*get_sset_count)(struct net_device *, int);
 int (*get_rxnfc)(struct net_device *,
        struct ethtool_rxnfc *, u32 *rule_locs);
 int (*set_rxnfc)(struct net_device *, struct ethtool_rxnfc *);
 int (*flash_device)(struct net_device *, struct ethtool_flash *);
 int (*reset)(struct net_device *, u32 *);
 u32 (*get_rxfh_key_size)(struct net_device *);
 u32 (*get_rxfh_indir_size)(struct net_device *);
 int (*get_rxfh)(struct net_device *, struct ethtool_rxfh_param *);
 int (*set_rxfh)(struct net_device *, struct ethtool_rxfh_param *,
       struct netlink_ext_ack *extack);
 int (*create_rxfh_context)(struct net_device *,
           struct ethtool_rxfh_context *ctx,
           const struct ethtool_rxfh_param *rxfh,
           struct netlink_ext_ack *extack);
 int (*modify_rxfh_context)(struct net_device *,
           struct ethtool_rxfh_context *ctx,
           const struct ethtool_rxfh_param *rxfh,
           struct netlink_ext_ack *extack);
 int (*remove_rxfh_context)(struct net_device *,
           struct ethtool_rxfh_context *ctx,
           u32 rss_context,
           struct netlink_ext_ack *extack);
 void (*get_channels)(struct net_device *, struct ethtool_channels *);
 int (*set_channels)(struct net_device *, struct ethtool_channels *);
 int (*get_dump_flag)(struct net_device *, struct ethtool_dump *);
 int (*get_dump_data)(struct net_device *,
     struct ethtool_dump *, void *);
 int (*set_dump)(struct net_device *, struct ethtool_dump *);
 int (*get_ts_info)(struct net_device *, struct kernel_ethtool_ts_info *);
 void (*get_ts_stats)(struct net_device *dev,
    struct ethtool_ts_stats *ts_stats);
 int (*get_module_info)(struct net_device *,
       struct ethtool_modinfo *);
 int (*get_module_eeprom)(struct net_device *,
         struct ethtool_eeprom *, u8 *);
 int (*get_eee)(struct net_device *dev, struct ethtool_keee *eee);
 int (*set_eee)(struct net_device *dev, struct ethtool_keee *eee);
 int (*get_tunable)(struct net_device *,
          const struct ethtool_tunable *, void *);
 int (*set_tunable)(struct net_device *,
          const struct ethtool_tunable *, const void *);
 int (*get_per_queue_coalesce)(struct net_device *, u32,
       struct ethtool_coalesce *);
 int (*set_per_queue_coalesce)(struct net_device *, u32,
       struct ethtool_coalesce *);
 int (*get_link_ksettings)(struct net_device *,
          struct ethtool_link_ksettings *);
 int (*set_link_ksettings)(struct net_device *,
          const struct ethtool_link_ksettings *);
 void (*get_fec_stats)(struct net_device *dev,
     struct ethtool_fec_stats *fec_stats);
 int (*get_fecparam)(struct net_device *,
          struct ethtool_fecparam *);
 int (*set_fecparam)(struct net_device *,
          struct ethtool_fecparam *);
 void (*get_ethtool_phy_stats)(struct net_device *,
      struct ethtool_stats *, u64 *);
 int (*get_phy_tunable)(struct net_device *,
       const struct ethtool_tunable *, void *);
 int (*set_phy_tunable)(struct net_device *,
       const struct ethtool_tunable *, const void *);
 int (*get_module_eeprom_by_page)(struct net_device *dev,
          const struct ethtool_module_eeprom *page,
          struct netlink_ext_ack *extack);
 int (*set_module_eeprom_by_page)(struct net_device *dev,
          const struct ethtool_module_eeprom *page,
          struct netlink_ext_ack *extack);
 void (*get_eth_phy_stats)(struct net_device *dev,
         struct ethtool_eth_phy_stats *phy_stats);
 void (*get_eth_mac_stats)(struct net_device *dev,
         struct ethtool_eth_mac_stats *mac_stats);
 void (*get_eth_ctrl_stats)(struct net_device *dev,
          struct ethtool_eth_ctrl_stats *ctrl_stats);
 void (*get_rmon_stats)(struct net_device *dev,
      struct ethtool_rmon_stats *rmon_stats,
      const struct ethtool_rmon_hist_range **ranges);
 int (*get_module_power_mode)(struct net_device *dev,
      struct ethtool_module_power_mode_params *params,
      struct netlink_ext_ack *extack);
 int (*set_module_power_mode)(struct net_device *dev,
      const struct ethtool_module_power_mode_params *params,
      struct netlink_ext_ack *extack);
 int (*get_mm)(struct net_device *dev, struct ethtool_mm_state *state);
 int (*set_mm)(struct net_device *dev, struct ethtool_mm_cfg *cfg,
     struct netlink_ext_ack *extack);
 void (*get_mm_stats)(struct net_device *dev, struct ethtool_mm_stats *stats);
};

int ethtool_check_ops(const struct ethtool_ops *ops);

struct ethtool_rx_flow_rule {
 struct flow_rule *rule;
 unsigned long priv[];
};

struct ethtool_rx_flow_spec_input {
 const struct ethtool_rx_flow_spec *fs;
 u32 rss_ctx;
};

struct ethtool_rx_flow_rule *
ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input);
void ethtool_rx_flow_rule_destroy(struct ethtool_rx_flow_rule *rule);

bool ethtool_virtdev_validate_cmd(const struct ethtool_link_ksettings *cmd);
int ethtool_virtdev_set_link_ksettings(struct net_device *dev,
           const struct ethtool_link_ksettings *cmd,
           u32 *dev_speed, u8 *dev_duplex);
# 1127 "../include/linux/ethtool.h"
struct ethtool_netdev_state {
 struct xarray rss_ctx;
 struct mutex rss_lock;
 unsigned wol_enabled:1;
 unsigned module_fw_flash_in_progress:1;
};

struct phy_device;
struct phy_tdr_config;
struct phy_plca_cfg;
struct phy_plca_status;
# 1153 "../include/linux/ethtool.h"
struct ethtool_phy_ops {
 int (*get_sset_count)(struct phy_device *dev);
 int (*get_strings)(struct phy_device *dev, u8 *data);
 int (*get_stats)(struct phy_device *dev,
    struct ethtool_stats *stats, u64 *data);
 int (*get_plca_cfg)(struct phy_device *dev,
       struct phy_plca_cfg *plca_cfg);
 int (*set_plca_cfg)(struct phy_device *dev,
       const struct phy_plca_cfg *plca_cfg,
       struct netlink_ext_ack *extack);
 int (*get_plca_status)(struct phy_device *dev,
          struct phy_plca_status *plca_st);
 int (*start_cable_test)(struct phy_device *phydev,
    struct netlink_ext_ack *extack);
 int (*start_cable_test_tdr)(struct phy_device *phydev,
        struct netlink_ext_ack *extack,
        const struct phy_tdr_config *config);
};





void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops);






void
ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings,
         enum ethtool_link_mode_bit_indices link_mode);
# 1195 "../include/linux/ethtool.h"
int ethtool_get_phc_vclocks(struct net_device *dev, int **vclock_index);


u32 ethtool_op_get_link(struct net_device *dev);
int ethtool_op_get_ts_info(struct net_device *dev,
      struct kernel_ethtool_ts_info *eti);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ethtool_mm_frag_size_add_to_min(u32 val_add)
{
 return (60 + 4) * (1 + val_add) - 4;
}
# 1226 "../include/linux/ethtool.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ethtool_mm_frag_size_min_to_add(u32 val_min, u32 *val_add,
        struct netlink_ext_ack *extack)
{
 u32 add_frag_size;

 for (add_frag_size = 0; add_frag_size < 4; add_frag_size++) {
  if (ethtool_mm_frag_size_add_to_min(add_frag_size) == val_min) {
   *val_add = add_frag_size;
   return 0;
  }
 }

 do {
  static const char __msg[] = "ib_core" ": " "minFragSize required to be one of 60, 124, 188 or 252";
  struct netlink_ext_ack *__extack = ((extack));
  do_trace_netlink_extack(__msg);
  if (__extack)
   __extack->_msg = __msg;
 } while (0);

 return -22;
}







int ethtool_get_ts_info_by_layer(struct net_device *dev,
     struct kernel_ethtool_ts_info *info);
# 1260 "../include/linux/ethtool.h"
extern __attribute__((__format__(printf, 2, 3))) void ethtool_sprintf(u8 **data, const char *fmt, ...);
# 1273 "../include/linux/ethtool.h"
extern void ethtool_puts(u8 **data, const char *str);


struct ethtool_forced_speed_map {
 u32 speed;
 unsigned long caps[(((__ETHTOOL_LINK_MODE_MASK_NBITS) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];

 const u32 *cap_arr;
 u32 arr_size;
};
# 1291 "../include/linux/ethtool.h"
void
ethtool_forced_speed_maps_init(struct ethtool_forced_speed_map *maps, u32 size);


struct ethtool_c33_pse_ext_state_info {
 enum ethtool_c33_pse_ext_state c33_pse_ext_state;
 union {
  enum ethtool_c33_pse_ext_substate_error_condition error_condition;
  enum ethtool_c33_pse_ext_substate_mr_pse_enable mr_pse_enable;
  enum ethtool_c33_pse_ext_substate_option_detect_ted option_detect_ted;
  enum ethtool_c33_pse_ext_substate_option_vport_lim option_vport_lim;
  enum ethtool_c33_pse_ext_substate_ovld_detected ovld_detected;
  enum ethtool_c33_pse_ext_substate_power_not_available power_not_available;
  enum ethtool_c33_pse_ext_substate_short_detected short_detected;
  u32 __c33_pse_ext_substate;
 };
};

struct ethtool_c33_pse_pw_limit_range {
 u32 min;
 u32 max;
};
# 16 "../include/rdma/ib_verbs.h" 2







# 1 "../include/linux/irq_poll.h" 1




struct irq_poll;
typedef int (irq_poll_fn)(struct irq_poll *, int);

struct irq_poll {
 struct list_head list;
 unsigned long state;
 int weight;
 irq_poll_fn *poll;
};

enum {
 IRQ_POLL_F_SCHED = 0,
 IRQ_POLL_F_DISABLE = 1,
};

extern void irq_poll_sched(struct irq_poll *);
extern void irq_poll_init(struct irq_poll *, int, irq_poll_fn *);
extern void irq_poll_complete(struct irq_poll *);
extern void irq_poll_enable(struct irq_poll *);
extern void irq_poll_disable(struct irq_poll *);
# 24 "../include/rdma/ib_verbs.h" 2

# 1 "../include/net/ipv6.h" 1
# 12 "../include/net/ipv6.h"
# 1 "../include/linux/ipv6.h" 1




# 1 "../include/uapi/linux/ipv6.h" 1
# 22 "../include/uapi/linux/ipv6.h"
struct in6_pktinfo {
 struct in6_addr ipi6_addr;
 int ipi6_ifindex;
};



struct ip6_mtuinfo {
 struct sockaddr_in6 ip6m_addr;
 __u32 ip6m_mtu;
};


struct in6_ifreq {
 struct in6_addr ifr6_addr;
 __u32 ifr6_prefixlen;
 int ifr6_ifindex;
};
# 50 "../include/uapi/linux/ipv6.h"
struct ipv6_rt_hdr {
 __u8 nexthdr;
 __u8 hdrlen;
 __u8 type;
 __u8 segments_left;





};


struct ipv6_opt_hdr {
 __u8 nexthdr;
 __u8 hdrlen;



} __attribute__((packed));
# 81 "../include/uapi/linux/ipv6.h"
struct rt0_hdr {
 struct ipv6_rt_hdr rt_hdr;
 __u32 reserved;
 struct in6_addr addr[];


};





struct rt2_hdr {
 struct ipv6_rt_hdr rt_hdr;
 __u32 reserved;
 struct in6_addr addr;


};





struct ipv6_destopt_hao {
 __u8 type;
 __u8 length;
 struct in6_addr addr;
} __attribute__((packed));
# 118 "../include/uapi/linux/ipv6.h"
struct ipv6hdr {

 __u8 priority:4,
    version:4;






 __u8 flow_lbl[3];

 __be16 payload_len;
 __u8 nexthdr;
 __u8 hop_limit;

 union {
  struct {
   struct in6_addr saddr;
   struct in6_addr daddr;
  };
  struct {
   struct in6_addr saddr;
   struct in6_addr daddr;
  } addrs;
 };



};



enum {
 DEVCONF_FORWARDING = 0,
 DEVCONF_HOPLIMIT,
 DEVCONF_MTU6,
 DEVCONF_ACCEPT_RA,
 DEVCONF_ACCEPT_REDIRECTS,
 DEVCONF_AUTOCONF,
 DEVCONF_DAD_TRANSMITS,
 DEVCONF_RTR_SOLICITS,
 DEVCONF_RTR_SOLICIT_INTERVAL,
 DEVCONF_RTR_SOLICIT_DELAY,
 DEVCONF_USE_TEMPADDR,
 DEVCONF_TEMP_VALID_LFT,
 DEVCONF_TEMP_PREFERED_LFT,
 DEVCONF_REGEN_MAX_RETRY,
 DEVCONF_MAX_DESYNC_FACTOR,
 DEVCONF_MAX_ADDRESSES,
 DEVCONF_FORCE_MLD_VERSION,
 DEVCONF_ACCEPT_RA_DEFRTR,
 DEVCONF_ACCEPT_RA_PINFO,
 DEVCONF_ACCEPT_RA_RTR_PREF,
 DEVCONF_RTR_PROBE_INTERVAL,
 DEVCONF_ACCEPT_RA_RT_INFO_MAX_PLEN,
 DEVCONF_PROXY_NDP,
 DEVCONF_OPTIMISTIC_DAD,
 DEVCONF_ACCEPT_SOURCE_ROUTE,
 DEVCONF_MC_FORWARDING,
 DEVCONF_DISABLE_IPV6,
 DEVCONF_ACCEPT_DAD,
 DEVCONF_FORCE_TLLAO,
 DEVCONF_NDISC_NOTIFY,
 DEVCONF_MLDV1_UNSOLICITED_REPORT_INTERVAL,
 DEVCONF_MLDV2_UNSOLICITED_REPORT_INTERVAL,
 DEVCONF_SUPPRESS_FRAG_NDISC,
 DEVCONF_ACCEPT_RA_FROM_LOCAL,
 DEVCONF_USE_OPTIMISTIC,
 DEVCONF_ACCEPT_RA_MTU,
 DEVCONF_STABLE_SECRET,
 DEVCONF_USE_OIF_ADDRS_ONLY,
 DEVCONF_ACCEPT_RA_MIN_HOP_LIMIT,
 DEVCONF_IGNORE_ROUTES_WITH_LINKDOWN,
 DEVCONF_DROP_UNICAST_IN_L2_MULTICAST,
 DEVCONF_DROP_UNSOLICITED_NA,
 DEVCONF_KEEP_ADDR_ON_DOWN,
 DEVCONF_RTR_SOLICIT_MAX_INTERVAL,
 DEVCONF_SEG6_ENABLED,
 DEVCONF_SEG6_REQUIRE_HMAC,
 DEVCONF_ENHANCED_DAD,
 DEVCONF_ADDR_GEN_MODE,
 DEVCONF_DISABLE_POLICY,
 DEVCONF_ACCEPT_RA_RT_INFO_MIN_PLEN,
 DEVCONF_NDISC_TCLASS,
 DEVCONF_RPL_SEG_ENABLED,
 DEVCONF_RA_DEFRTR_METRIC,
 DEVCONF_IOAM6_ENABLED,
 DEVCONF_IOAM6_ID,
 DEVCONF_IOAM6_ID_WIDE,
 DEVCONF_NDISC_EVICT_NOCARRIER,
 DEVCONF_ACCEPT_UNTRACKED_NA,
 DEVCONF_ACCEPT_RA_MIN_LFT,
 DEVCONF_MAX
};
# 6 "../include/linux/ipv6.h" 2







struct ipv6_devconf {

 __u8 __cacheline_group_begin__ipv6_devconf_read_txrx[0];
 __s32 disable_ipv6;
 __s32 hop_limit;
 __s32 mtu6;
 __s32 forwarding;
 __s32 disable_policy;
 __s32 proxy_ndp;
 __u8 __cacheline_group_end__ipv6_devconf_read_txrx[0];

 __s32 accept_ra;
 __s32 accept_redirects;
 __s32 autoconf;
 __s32 dad_transmits;
 __s32 rtr_solicits;
 __s32 rtr_solicit_interval;
 __s32 rtr_solicit_max_interval;
 __s32 rtr_solicit_delay;
 __s32 force_mld_version;
 __s32 mldv1_unsolicited_report_interval;
 __s32 mldv2_unsolicited_report_interval;
 __s32 use_tempaddr;
 __s32 temp_valid_lft;
 __s32 temp_prefered_lft;
 __s32 regen_min_advance;
 __s32 regen_max_retry;
 __s32 max_desync_factor;
 __s32 max_addresses;
 __s32 accept_ra_defrtr;
 __u32 ra_defrtr_metric;
 __s32 accept_ra_min_hop_limit;
 __s32 accept_ra_min_lft;
 __s32 accept_ra_pinfo;
 __s32 ignore_routes_with_linkdown;

 __s32 accept_ra_rtr_pref;
 __s32 rtr_probe_interval;





 __s32 accept_source_route;
 __s32 accept_ra_from_local;

 __s32 optimistic_dad;
 __s32 use_optimistic;




 __s32 drop_unicast_in_l2_multicast;
 __s32 accept_dad;
 __s32 force_tllao;
 __s32 ndisc_notify;
 __s32 suppress_frag_ndisc;
 __s32 accept_ra_mtu;
 __s32 drop_unsolicited_na;
 __s32 accept_untracked_na;
 struct ipv6_stable_secret {
  bool initialized;
  struct in6_addr secret;
 } stable_secret;
 __s32 use_oif_addrs_only;
 __s32 keep_addr_on_down;
 __s32 seg6_enabled;

 __s32 seg6_require_hmac;

 __u32 enhanced_dad;
 __u32 addr_gen_mode;
 __s32 ndisc_tclass;
 __s32 rpl_seg_enabled;
 __u32 ioam6_id;
 __u32 ioam6_id_wide;
 __u8 ioam6_enabled;
 __u8 ndisc_evict_nocarrier;
 __u8 ra_honor_pio_life;

 struct ctl_table_header *sysctl_header;
};

struct ipv6_params {
 __s32 disable_ipv6;
 __s32 autoconf;
};
extern struct ipv6_params ipv6_defaults;
# 1 "../include/linux/tcp.h" 1
# 18 "../include/linux/tcp.h"
# 1 "../include/linux/win_minmax.h" 1
# 12 "../include/linux/win_minmax.h"
struct minmax_sample {
 u32 t;
 u32 v;
};


struct minmax {
 struct minmax_sample s[3];
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 minmax_get(const struct minmax *m)
{
 return m->s[0].v;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 minmax_reset(struct minmax *m, u32 t, u32 meas)
{
 struct minmax_sample val = { .t = t, .v = meas };

 m->s[2] = m->s[1] = m->s[0] = val;
 return m->s[0].v;
}

u32 minmax_running_max(struct minmax *m, u32 win, u32 t, u32 meas);
u32 minmax_running_min(struct minmax *m, u32 win, u32 t, u32 meas);
# 19 "../include/linux/tcp.h" 2
# 1 "../include/net/sock.h" 1
# 46 "../include/net/sock.h"
# 1 "../include/linux/netdevice.h" 1
# 26 "../include/linux/netdevice.h"
# 1 "../include/linux/delay.h" 1
# 25 "../include/linux/delay.h"
extern unsigned long loops_per_jiffy;

# 1 "../arch/hexagon/include/asm/delay.h" 1
# 11 "../arch/hexagon/include/asm/delay.h"
extern void __delay(unsigned long cycles);
extern void __udelay(unsigned long usecs);
# 28 "../include/linux/delay.h" 2
# 50 "../include/linux/delay.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ndelay(unsigned long x)
{
 __udelay(((((x) + (1000) - 1) / (1000))));
}



extern unsigned long lpj_fine;
void calibrate_delay(void);
unsigned long calibrate_delay_is_known(void);
void __attribute__((weak)) calibration_delay_done(void);
void msleep(unsigned int msecs);
unsigned long msleep_interruptible(unsigned int msecs);
void usleep_range_state(unsigned long min, unsigned long max,
   unsigned int state);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void usleep_range(unsigned long min, unsigned long max)
{
 usleep_range_state(min, max, 0x00000002);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void usleep_idle_range(unsigned long min, unsigned long max)
{
 usleep_range_state(min, max, (0x00000002 | 0x00000400));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ssleep(unsigned int seconds)
{
 msleep(seconds * 1000);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fsleep(unsigned long usecs)
{
 if (usecs <= 10)
  __udelay((usecs));
 else if (usecs <= 20000)
  usleep_range(usecs, 2 * usecs);
 else
  msleep((((usecs) + (1000) - 1) / (1000)));
}
# 27 "../include/linux/netdevice.h" 2

# 1 "../include/linux/prefetch.h" 1
# 18 "../include/linux/prefetch.h"
struct page;
# 50 "../include/linux/prefetch.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void prefetch_range(void *addr, size_t len)
{







}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void prefetch_page_address(struct page *page)
{



}
# 29 "../include/linux/netdevice.h" 2


# 1 "./arch/hexagon/include/generated/asm/local.h" 1
# 1 "../include/asm-generic/local.h" 1






# 1 "./arch/hexagon/include/generated/uapi/asm/types.h" 1
# 8 "../include/asm-generic/local.h" 2
# 22 "../include/asm-generic/local.h"
typedef struct
{
 atomic_long_t a;
} local_t;
# 2 "./arch/hexagon/include/generated/asm/local.h" 2
# 32 "../include/linux/netdevice.h" 2




# 1 "../include/linux/dynamic_queue_limits.h" 1
# 42 "../include/linux/dynamic_queue_limits.h"
# 1 "./arch/hexagon/include/generated/asm/bug.h" 1
# 43 "../include/linux/dynamic_queue_limits.h" 2




struct dql {

 unsigned int num_queued;
 unsigned int adj_limit;
 unsigned int last_obj_cnt;


 unsigned short stall_thrs;

 unsigned long history_head;

 unsigned long history[4];



 unsigned int limit ;
 unsigned int num_completed;

 unsigned int prev_ovlimit;
 unsigned int prev_num_queued;
 unsigned int prev_last_obj_cnt;

 unsigned int lowest_slack;
 unsigned long slack_start_time;


 unsigned int max_limit;
 unsigned int min_limit;
 unsigned int slack_hold_time;


 unsigned short stall_max;
 unsigned long last_reap;
 unsigned long stall_cnt;
};






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dql_queue_stall(struct dql *dql)
{
 unsigned long map, now, now_hi, i;

 now = jiffies;
 now_hi = now / 32;





 if (__builtin_expect(!!(now_hi != dql->history_head), 0)) {

  for (i = 0; i < 4; i++) {

   if (now_hi * 32 ==
       (dql->history_head + i) * 32)
    break;
   ((dql)->history[(dql->history_head + i + 1) % 4]) = 0;
  }

  __asm__ __volatile__("": : :"memory");
  do {
   do {
    __attribute__((__noreturn__)) extern void __compiletime_assert_325(void)
     __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
    if (!((sizeof(dql->history_head) == sizeof(char) ||
           sizeof(dql->history_head) == sizeof(short) ||
           sizeof(dql->history_head) == sizeof(int) ||
           sizeof(dql->history_head) == sizeof(long)) ||
          sizeof(dql->history_head) == sizeof(long long)))
     __compiletime_assert_325();
   } while (0);
   do {
    *(volatile typeof(dql->history_head) *)&(dql->history_head) = (now_hi);
   } while (0);
  } while (0);
 }


 map = ((dql)->history[(now_hi) % 4]);


 if (!(map & ((((1UL))) << ((now) % 32))))
   do {
    do {
     __attribute__((__noreturn__)) extern void __compiletime_assert_326(void)
      __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
     if (!((sizeof(((dql)->history[(now_hi) % 4])) == sizeof(char) ||
            sizeof(((dql)->history[(now_hi) % 4])) == sizeof(short) ||
            sizeof(((dql)->history[(now_hi) % 4])) == sizeof(int) ||
            sizeof(((dql)->history[(now_hi) % 4])) == sizeof(long)) ||
           sizeof(((dql)->history[(now_hi) % 4])) == sizeof(long long)))
      __compiletime_assert_326();
    } while (0);
    do {
     *(volatile typeof(((dql)->history[(now_hi) % 4])) *)&(((dql)->history[(now_hi) % 4])) =
      (map | ((((1UL))) << ((now) % 32)));
    } while (0);
   } while (0);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dql_queued(struct dql *dql, unsigned int count)
{
 if (({ bool __ret_do_once = !!(count > ((~0U) / 16));
        if (({ static bool __attribute__((__section__(".data.once"))) __already_done;
               bool __ret_cond = !!(__ret_do_once);
               bool __ret_once = false;
               if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) {
                __already_done = true;
                __ret_once = true;
               }
               __builtin_expect(!!(__ret_once), 0); }))
         ({ int __ret_warn_on = !!(1);
            if (__builtin_expect(!!(__ret_warn_on), 0))
             do { do { } while(0);
                  warn_slowpath_fmt("include/linux/dynamic_queue_limits.h", 127, 9, ((void *)0));
                  do { } while(0); } while (0);
            __builtin_expect(!!(__ret_warn_on), 0); });
        __builtin_expect(!!(__ret_do_once), 0); }))
  return;

 dql->last_obj_cnt = count;






 __asm__ __volatile__("": : :"memory");

 dql->num_queued += count;


 if (({ do {
   __attribute__((__noreturn__)) extern void __compiletime_assert_327(void)
    __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
   if (!((sizeof(dql->stall_thrs) == sizeof(char) ||
          sizeof(dql->stall_thrs) == sizeof(short) ||
          sizeof(dql->stall_thrs) == sizeof(int) ||
          sizeof(dql->stall_thrs) == sizeof(long)) ||
         sizeof(dql->stall_thrs) == sizeof(long long)))
    __compiletime_assert_327();
  } while (0);
  (*(const volatile typeof(_Generic((dql->stall_thrs),
   char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0,
   unsigned short: (unsigned short)0, signed short: (signed short)0,
   unsigned int: (unsigned int)0, signed int: (signed int)0,
   unsigned long: (unsigned long)0, signed long: (signed long)0,
   unsigned long long: (unsigned long long)0, signed long long: (signed long long)0,
   default: (dql->stall_thrs))) *)&(dql->stall_thrs)); }))
  dql_queue_stall(dql);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dql_avail(const struct dql *dql)
{
 return ({ do {
   __attribute__((__noreturn__)) extern void __compiletime_assert_328(void)
    __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
   if (!((sizeof(dql->adj_limit) == sizeof(char) ||
          sizeof(dql->adj_limit) == sizeof(short) ||
          sizeof(dql->adj_limit) == sizeof(int) ||
          sizeof(dql->adj_limit) == sizeof(long)) ||
         sizeof(dql->adj_limit) == sizeof(long long)))
    __compiletime_assert_328();
  } while (0);
  (*(const volatile typeof(_Generic((dql->adj_limit),
   char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0,
   unsigned short: (unsigned short)0, signed short: (signed short)0,
   unsigned int: (unsigned int)0, signed int: (signed int)0,
   unsigned long: (unsigned long)0, signed long: (signed long)0,
   unsigned long long: (unsigned long long)0, signed long long: (signed long long)0,
   default: (dql->adj_limit))) *)&(dql->adj_limit)); }) -
        ({ do {
   __attribute__((__noreturn__)) extern void __compiletime_assert_329(void)
    __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE().")));
   if (!((sizeof(dql->num_queued) == sizeof(char) ||
          sizeof(dql->num_queued) == sizeof(short) ||
          sizeof(dql->num_queued) == sizeof(int) ||
          sizeof(dql->num_queued) == sizeof(long)) ||
         sizeof(dql->num_queued) == sizeof(long long)))
    __compiletime_assert_329();
  } while (0);
  (*(const volatile typeof(_Generic((dql->num_queued),
   char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0,
   unsigned short: (unsigned short)0, signed short: (signed short)0,
   unsigned int: (unsigned int)0, signed int: (signed int)0,
   unsigned long: (unsigned long)0, signed long: (signed long)0,
   unsigned long long: (unsigned long long)0, signed long long: (signed long long)0,
   default: (dql->num_queued))) *)&(dql->num_queued)); });
}


void dql_completed(struct dql *dql, unsigned int count);


void dql_reset(struct dql *dql);


void dql_init(struct dql *dql, unsigned int hold_time);
# 37 "../include/linux/netdevice.h" 2

# 1 "../include/net/net_namespace.h" 1
# 16 "../include/net/net_namespace.h"
# 1 "../include/net/netns/core.h" 1






struct ctl_table_header;
struct prot_inuse;
struct cpumask;

struct netns_core {

 struct ctl_table_header *sysctl_hdr;

 int sysctl_somaxconn;
 int sysctl_optmem_max;
 u8 sysctl_txrehash;


 struct prot_inuse *prot_inuse;





};
# 17 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/mib.h" 1




# 1 "../include/net/snmp.h" 1
# 18 "../include/net/snmp.h"
# 1 "../include/uapi/linux/snmp.h" 1
# 19 "../include/uapi/linux/snmp.h"
enum
{
 IPSTATS_MIB_NUM = 0,

 IPSTATS_MIB_INPKTS,
 IPSTATS_MIB_INOCTETS,
 IPSTATS_MIB_INDELIVERS,
 IPSTATS_MIB_OUTFORWDATAGRAMS,
 IPSTATS_MIB_OUTREQUESTS,
 IPSTATS_MIB_OUTOCTETS,

 IPSTATS_MIB_INHDRERRORS,
 IPSTATS_MIB_INTOOBIGERRORS,
 IPSTATS_MIB_INNOROUTES,
 IPSTATS_MIB_INADDRERRORS,
 IPSTATS_MIB_INUNKNOWNPROTOS,
 IPSTATS_MIB_INTRUNCATEDPKTS,
 IPSTATS_MIB_INDISCARDS,
 IPSTATS_MIB_OUTDISCARDS,
 IPSTATS_MIB_OUTNOROUTES,
 IPSTATS_MIB_REASMTIMEOUT,
 IPSTATS_MIB_REASMREQDS,
 IPSTATS_MIB_REASMOKS,
 IPSTATS_MIB_REASMFAILS,
 IPSTATS_MIB_FRAGOKS,
 IPSTATS_MIB_FRAGFAILS,
 IPSTATS_MIB_FRAGCREATES,
 IPSTATS_MIB_INMCASTPKTS,
 IPSTATS_MIB_OUTMCASTPKTS,
 IPSTATS_MIB_INBCASTPKTS,
 IPSTATS_MIB_OUTBCASTPKTS,
 IPSTATS_MIB_INMCASTOCTETS,
 IPSTATS_MIB_OUTMCASTOCTETS,
 IPSTATS_MIB_INBCASTOCTETS,
 IPSTATS_MIB_OUTBCASTOCTETS,
 IPSTATS_MIB_CSUMERRORS,
 IPSTATS_MIB_NOECTPKTS,
 IPSTATS_MIB_ECT1PKTS,
 IPSTATS_MIB_ECT0PKTS,
 IPSTATS_MIB_CEPKTS,
 IPSTATS_MIB_REASM_OVERLAPS,
 IPSTATS_MIB_OUTPKTS,
 __IPSTATS_MIB_MAX
};






enum
{
 ICMP_MIB_NUM = 0,
 ICMP_MIB_INMSGS,
 ICMP_MIB_INERRORS,
 ICMP_MIB_INDESTUNREACHS,
 ICMP_MIB_INTIMEEXCDS,
 ICMP_MIB_INPARMPROBS,
 ICMP_MIB_INSRCQUENCHS,
 ICMP_MIB_INREDIRECTS,
 ICMP_MIB_INECHOS,
 ICMP_MIB_INECHOREPS,
 ICMP_MIB_INTIMESTAMPS,
 ICMP_MIB_INTIMESTAMPREPS,
 ICMP_MIB_INADDRMASKS,
 ICMP_MIB_INADDRMASKREPS,
 ICMP_MIB_OUTMSGS,
 ICMP_MIB_OUTERRORS,
 ICMP_MIB_OUTDESTUNREACHS,
 ICMP_MIB_OUTTIMEEXCDS,
 ICMP_MIB_OUTPARMPROBS,
 ICMP_MIB_OUTSRCQUENCHS,
 ICMP_MIB_OUTREDIRECTS,
 ICMP_MIB_OUTECHOS,
 ICMP_MIB_OUTECHOREPS,
 ICMP_MIB_OUTTIMESTAMPS,
 ICMP_MIB_OUTTIMESTAMPREPS,
 ICMP_MIB_OUTADDRMASKS,
 ICMP_MIB_OUTADDRMASKREPS,
 ICMP_MIB_CSUMERRORS,
 ICMP_MIB_RATELIMITGLOBAL,
 ICMP_MIB_RATELIMITHOST,
 __ICMP_MIB_MAX
};







enum
{
 ICMP6_MIB_NUM = 0,
 ICMP6_MIB_INMSGS,
 ICMP6_MIB_INERRORS,
 ICMP6_MIB_OUTMSGS,
 ICMP6_MIB_OUTERRORS,
 ICMP6_MIB_CSUMERRORS,
 ICMP6_MIB_RATELIMITHOST,
 __ICMP6_MIB_MAX
};
# 129 "../include/uapi/linux/snmp.h"
enum
{
 TCP_MIB_NUM = 0,
 TCP_MIB_RTOALGORITHM,
 TCP_MIB_RTOMIN,
 TCP_MIB_RTOMAX,
 TCP_MIB_MAXCONN,
 TCP_MIB_ACTIVEOPENS,
 TCP_MIB_PASSIVEOPENS,
 TCP_MIB_ATTEMPTFAILS,
 TCP_MIB_ESTABRESETS,
 TCP_MIB_CURRESTAB,
 TCP_MIB_INSEGS,
 TCP_MIB_OUTSEGS,
 TCP_MIB_RETRANSSEGS,
 TCP_MIB_INERRS,
 TCP_MIB_OUTRSTS,
 TCP_MIB_CSUMERRORS,
 __TCP_MIB_MAX
};






enum
{
 UDP_MIB_NUM = 0,
 UDP_MIB_INDATAGRAMS,
 UDP_MIB_NOPORTS,
 UDP_MIB_INERRORS,
 UDP_MIB_OUTDATAGRAMS,
 UDP_MIB_RCVBUFERRORS,
 UDP_MIB_SNDBUFERRORS,
 UDP_MIB_CSUMERRORS,
 UDP_MIB_IGNOREDMULTI,
 UDP_MIB_MEMERRORS,
 __UDP_MIB_MAX
};


enum
{
 LINUX_MIB_NUM = 0,
 LINUX_MIB_SYNCOOKIESSENT,
 LINUX_MIB_SYNCOOKIESRECV,
 LINUX_MIB_SYNCOOKIESFAILED,
 LINUX_MIB_EMBRYONICRSTS,
 LINUX_MIB_PRUNECALLED,
 LINUX_MIB_RCVPRUNED,
 LINUX_MIB_OFOPRUNED,
 LINUX_MIB_OUTOFWINDOWICMPS,
 LINUX_MIB_LOCKDROPPEDICMPS,
 LINUX_MIB_ARPFILTER,
 LINUX_MIB_TIMEWAITED,
 LINUX_MIB_TIMEWAITRECYCLED,
 LINUX_MIB_TIMEWAITKILLED,
 LINUX_MIB_PAWSACTIVEREJECTED,
 LINUX_MIB_PAWSESTABREJECTED,
 LINUX_MIB_DELAYEDACKS,
 LINUX_MIB_DELAYEDACKLOCKED,
 LINUX_MIB_DELAYEDACKLOST,
 LINUX_MIB_LISTENOVERFLOWS,
 LINUX_MIB_LISTENDROPS,
 LINUX_MIB_TCPHPHITS,
 LINUX_MIB_TCPPUREACKS,
 LINUX_MIB_TCPHPACKS,
 LINUX_MIB_TCPRENORECOVERY,
 LINUX_MIB_TCPSACKRECOVERY,
 LINUX_MIB_TCPSACKRENEGING,
 LINUX_MIB_TCPSACKREORDER,
 LINUX_MIB_TCPRENOREORDER,
 LINUX_MIB_TCPTSREORDER,
 LINUX_MIB_TCPFULLUNDO,
 LINUX_MIB_TCPPARTIALUNDO,
 LINUX_MIB_TCPDSACKUNDO,
 LINUX_MIB_TCPLOSSUNDO,
 LINUX_MIB_TCPLOSTRETRANSMIT,
 LINUX_MIB_TCPRENOFAILURES,
 LINUX_MIB_TCPSACKFAILURES,
 LINUX_MIB_TCPLOSSFAILURES,
 LINUX_MIB_TCPFASTRETRANS,
 LINUX_MIB_TCPSLOWSTARTRETRANS,
 LINUX_MIB_TCPTIMEOUTS,
 LINUX_MIB_TCPLOSSPROBES,
 LINUX_MIB_TCPLOSSPROBERECOVERY,
 LINUX_MIB_TCPRENORECOVERYFAIL,
 LINUX_MIB_TCPSACKRECOVERYFAIL,
 LINUX_MIB_TCPRCVCOLLAPSED,
 LINUX_MIB_TCPDSACKOLDSENT,
 LINUX_MIB_TCPDSACKOFOSENT,
 LINUX_MIB_TCPDSACKRECV,
 LINUX_MIB_TCPDSACKOFORECV,
 LINUX_MIB_TCPABORTONDATA,
 LINUX_MIB_TCPABORTONCLOSE,
 LINUX_MIB_TCPABORTONMEMORY,
 LINUX_MIB_TCPABORTONTIMEOUT,
 LINUX_MIB_TCPABORTONLINGER,
 LINUX_MIB_TCPABORTFAILED,
 LINUX_MIB_TCPMEMORYPRESSURES,
 LINUX_MIB_TCPMEMORYPRESSURESCHRONO,
 LINUX_MIB_TCPSACKDISCARD,
 LINUX_MIB_TCPDSACKIGNOREDOLD,
 LINUX_MIB_TCPDSACKIGNOREDNOUNDO,
 LINUX_MIB_TCPSPURIOUSRTOS,
 LINUX_MIB_TCPMD5NOTFOUND,
 LINUX_MIB_TCPMD5UNEXPECTED,
 LINUX_MIB_TCPMD5FAILURE,
 LINUX_MIB_SACKSHIFTED,
 LINUX_MIB_SACKMERGED,
 LINUX_MIB_SACKSHIFTFALLBACK,
 LINUX_MIB_TCPBACKLOGDROP,
 LINUX_MIB_PFMEMALLOCDROP,
 LINUX_MIB_TCPMINTTLDROP,
 LINUX_MIB_TCPDEFERACCEPTDROP,
 LINUX_MIB_IPRPFILTER,
 LINUX_MIB_TCPTIMEWAITOVERFLOW,
 LINUX_MIB_TCPREQQFULLDOCOOKIES,
 LINUX_MIB_TCPREQQFULLDROP,
 LINUX_MIB_TCPRETRANSFAIL,
 LINUX_MIB_TCPRCVCOALESCE,
 LINUX_MIB_TCPBACKLOGCOALESCE,
 LINUX_MIB_TCPOFOQUEUE,
 LINUX_MIB_TCPOFODROP,
 LINUX_MIB_TCPOFOMERGE,
 LINUX_MIB_TCPCHALLENGEACK,
 LINUX_MIB_TCPSYNCHALLENGE,
 LINUX_MIB_TCPFASTOPENACTIVE,
 LINUX_MIB_TCPFASTOPENACTIVEFAIL,
 LINUX_MIB_TCPFASTOPENPASSIVE,
 LINUX_MIB_TCPFASTOPENPASSIVEFAIL,
 LINUX_MIB_TCPFASTOPENLISTENOVERFLOW,
 LINUX_MIB_TCPFASTOPENCOOKIEREQD,
 LINUX_MIB_TCPFASTOPENBLACKHOLE,
 LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES,
 LINUX_MIB_BUSYPOLLRXPACKETS,
 LINUX_MIB_TCPAUTOCORKING,
 LINUX_MIB_TCPFROMZEROWINDOWADV,
 LINUX_MIB_TCPTOZEROWINDOWADV,
 LINUX_MIB_TCPWANTZEROWINDOWADV,
 LINUX_MIB_TCPSYNRETRANS,
 LINUX_MIB_TCPORIGDATASENT,
 LINUX_MIB_TCPHYSTARTTRAINDETECT,
 LINUX_MIB_TCPHYSTARTTRAINCWND,
 LINUX_MIB_TCPHYSTARTDELAYDETECT,
 LINUX_MIB_TCPHYSTARTDELAYCWND,
 LINUX_MIB_TCPACKSKIPPEDSYNRECV,
 LINUX_MIB_TCPACKSKIPPEDPAWS,
 LINUX_MIB_TCPACKSKIPPEDSEQ,
 LINUX_MIB_TCPACKSKIPPEDFINWAIT2,
 LINUX_MIB_TCPACKSKIPPEDTIMEWAIT,
 LINUX_MIB_TCPACKSKIPPEDCHALLENGE,
 LINUX_MIB_TCPWINPROBE,
 LINUX_MIB_TCPKEEPALIVE,
 LINUX_MIB_TCPMTUPFAIL,
 LINUX_MIB_TCPMTUPSUCCESS,
 LINUX_MIB_TCPDELIVERED,
 LINUX_MIB_TCPDELIVEREDCE,
 LINUX_MIB_TCPACKCOMPRESSED,
 LINUX_MIB_TCPZEROWINDOWDROP,
 LINUX_MIB_TCPRCVQDROP,
 LINUX_MIB_TCPWQUEUETOOBIG,
 LINUX_MIB_TCPFASTOPENPASSIVEALTKEY,
 LINUX_MIB_TCPTIMEOUTREHASH,
 LINUX_MIB_TCPDUPLICATEDATAREHASH,
 LINUX_MIB_TCPDSACKRECVSEGS,
 LINUX_MIB_TCPDSACKIGNOREDDUBIOUS,
 LINUX_MIB_TCPMIGRATEREQSUCCESS,
 LINUX_MIB_TCPMIGRATEREQFAILURE,
 LINUX_MIB_TCPPLBREHASH,
 LINUX_MIB_TCPAOREQUIRED,
 LINUX_MIB_TCPAOBAD,
 LINUX_MIB_TCPAOKEYNOTFOUND,
 LINUX_MIB_TCPAOGOOD,
 LINUX_MIB_TCPAODROPPEDICMPS,
 __LINUX_MIB_MAX
};


enum
{
 LINUX_MIB_XFRMNUM = 0,
 LINUX_MIB_XFRMINERROR,
 LINUX_MIB_XFRMINBUFFERERROR,
 LINUX_MIB_XFRMINHDRERROR,
 LINUX_MIB_XFRMINNOSTATES,
 LINUX_MIB_XFRMINSTATEPROTOERROR,
 LINUX_MIB_XFRMINSTATEMODEERROR,
 LINUX_MIB_XFRMINSTATESEQERROR,
 LINUX_MIB_XFRMINSTATEEXPIRED,
 LINUX_MIB_XFRMINSTATEMISMATCH,
 LINUX_MIB_XFRMINSTATEINVALID,
 LINUX_MIB_XFRMINTMPLMISMATCH,
 LINUX_MIB_XFRMINNOPOLS,
 LINUX_MIB_XFRMINPOLBLOCK,
 LINUX_MIB_XFRMINPOLERROR,
 LINUX_MIB_XFRMOUTERROR,
 LINUX_MIB_XFRMOUTBUNDLEGENERROR,
 LINUX_MIB_XFRMOUTBUNDLECHECKERROR,
 LINUX_MIB_XFRMOUTNOSTATES,
 LINUX_MIB_XFRMOUTSTATEPROTOERROR,
 LINUX_MIB_XFRMOUTSTATEMODEERROR,
 LINUX_MIB_XFRMOUTSTATESEQERROR,
 LINUX_MIB_XFRMOUTSTATEEXPIRED,
 LINUX_MIB_XFRMOUTPOLBLOCK,
 LINUX_MIB_XFRMOUTPOLDEAD,
 LINUX_MIB_XFRMOUTPOLERROR,
 LINUX_MIB_XFRMFWDHDRERROR,
 LINUX_MIB_XFRMOUTSTATEINVALID,
 LINUX_MIB_XFRMACQUIREERROR,
 LINUX_MIB_XFRMOUTSTATEDIRERROR,
 LINUX_MIB_XFRMINSTATEDIRERROR,
 __LINUX_MIB_XFRMMAX
};


enum
{
 LINUX_MIB_TLSNUM = 0,
 LINUX_MIB_TLSCURRTXSW,
 LINUX_MIB_TLSCURRRXSW,
 LINUX_MIB_TLSCURRTXDEVICE,
 LINUX_MIB_TLSCURRRXDEVICE,
 LINUX_MIB_TLSTXSW,
 LINUX_MIB_TLSRXSW,
 LINUX_MIB_TLSTXDEVICE,
 LINUX_MIB_TLSRXDEVICE,
 LINUX_MIB_TLSDECRYPTERROR,
 LINUX_MIB_TLSRXDEVICERESYNC,
 LINUX_MIB_TLSDECRYPTRETRY,
 LINUX_MIB_TLSRXNOPADVIOL,
 __LINUX_MIB_TLSMAX
};
# 19 "../include/net/snmp.h" 2
# 29 "../include/net/snmp.h"
struct snmp_mib {
 const char *name;
 int entry;
};
# 51 "../include/net/snmp.h"
struct ipstats_mib {

 u64 mibs[__IPSTATS_MIB_MAX];
 struct u64_stats_sync syncp;
};



struct icmp_mib {
 unsigned long mibs[__ICMP_MIB_MAX];
};


struct icmpmsg_mib {
 atomic_long_t mibs[512];
};




struct icmpv6_mib {
 unsigned long mibs[__ICMP6_MIB_MAX];
};

struct icmpv6_mib_device {
 atomic_long_t mibs[__ICMP6_MIB_MAX];
};



struct icmpv6msg_mib {
 atomic_long_t mibs[512];
};

struct icmpv6msg_mib_device {
 atomic_long_t mibs[512];
};




struct tcp_mib {
 unsigned long mibs[__TCP_MIB_MAX];
};



struct udp_mib {
 unsigned long mibs[__UDP_MIB_MAX];
};



struct linux_mib {
 unsigned long mibs[__LINUX_MIB_MAX];
};



struct linux_xfrm_mib {
 unsigned long mibs[__LINUX_MIB_XFRMMAX];
};



struct linux_tls_mib {
 unsigned long mibs[__LINUX_MIB_TLSMAX];
};
# 6 "../include/net/netns/mib.h" 2

struct netns_mib {
 __typeof__(struct ipstats_mib) *ip_statistics;

 __typeof__(struct ipstats_mib) *ipv6_statistics;


 __typeof__(struct tcp_mib) *tcp_statistics;
 __typeof__(struct linux_mib) *net_statistics;

 __typeof__(struct udp_mib) *udp_statistics;

 __typeof__(struct udp_mib) *udp_stats_in6;






 __typeof__(struct linux_tls_mib) *tls_statistics;





 __typeof__(struct udp_mib) *udplite_statistics;

 __typeof__(struct udp_mib) *udplite_stats_in6;


 __typeof__(struct icmp_mib) *icmp_statistics;
 __typeof__(struct icmpmsg_mib) *icmpmsg_statistics;

 __typeof__(struct icmpv6_mib) *icmpv6_statistics;
 __typeof__(struct icmpv6msg_mib) *icmpv6msg_statistics;
 struct proc_dir_entry *proc_net_devsnmp6;

};
# 18 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/unix.h" 1
# 10 "../include/net/netns/unix.h"
struct unix_table {
 spinlock_t *locks;
 struct hlist_head *buckets;
};

struct ctl_table_header;
struct netns_unix {
 struct unix_table table;
 int sysctl_max_dgram_qlen;
 struct ctl_table_header *ctl;
};
# 19 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/packet.h" 1
# 11 "../include/net/netns/packet.h"
struct netns_packet {
 struct mutex sklist_lock;
 struct hlist_head sklist;
};
# 20 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/ipv4.h" 1
# 10 "../include/net/netns/ipv4.h"
# 1 "../include/net/inet_frag.h" 1
# 13 "../include/net/inet_frag.h"
struct fqdir {

 long high_thresh;
 long low_thresh;
 int timeout;
 int max_dist;
 struct inet_frags *f;
 struct net *net;
 bool dead;

 struct rhashtable rhashtable;


 atomic_long_t mem;
 struct work_struct destroy_work;
 struct llist_node free_list;
};
# 40 "../include/net/inet_frag.h"
enum {
 INET_FRAG_FIRST_IN = BIT(0),
 INET_FRAG_LAST_IN = BIT(1),
 INET_FRAG_COMPLETE = BIT(2),
 INET_FRAG_HASH_DEAD = BIT(3),
 INET_FRAG_DROP = BIT(4),
};

struct frag_v4_compare_key {
 __be32 saddr;
 __be32 daddr;
 u32 user;
 u32 vif;
 __be16 id;
 u16 protocol;
};

struct frag_v6_compare_key {
 struct in6_addr saddr;
 struct in6_addr daddr;
 u32 user;
 __be32 id;
 u32 iif;
};
# 85 "../include/net/inet_frag.h"
struct inet_frag_queue {
 struct rhash_head node;
 union {
  struct frag_v4_compare_key v4;
  struct frag_v6_compare_key v6;
 } key;
 struct timer_list timer;
 spinlock_t lock;
 refcount_t refcnt;
 struct rb_root rb_fragments;
 struct sk_buff *fragments_tail;
 struct sk_buff *last_run_head;
 ktime_t stamp;
 int len;
 int meat;
 u8 tstamp_type;
 __u8 flags;
 u16 max_size;
 struct fqdir *fqdir;
 struct callback_head rcu;
};

struct inet_frags {
 unsigned int qsize;

 void (*constructor)(struct inet_frag_queue *q,
            const void *arg);
 void (*destructor)(struct inet_frag_queue *);
 void (*frag_expire)(struct timer_list *t);
 struct kmem_cache *frags_cachep;
 const char *frags_cache_name;
 struct rhashtable_params rhash_params;
 refcount_t refcnt;
 struct completion completion;
};

int inet_frags_init(struct inet_frags *);
void inet_frags_fini(struct inet_frags *);

int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fqdir_pre_exit(struct fqdir *fqdir)
{



 WRITE_ONCE(fqdir->high_thresh, 0); /* prevent creation of new frags */




 WRITE_ONCE(fqdir->dead, true);
}
void fqdir_exit(struct fqdir *fqdir);

void inet_frag_kill(struct inet_frag_queue *q);
void inet_frag_destroy(struct inet_frag_queue *q);
struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key);


unsigned int inet_frag_rbtree_purge(struct rb_root *root,
        enum skb_drop_reason reason);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_frag_put(struct inet_frag_queue *q)
{
 if (refcount_dec_and_test(&q->refcnt))
  inet_frag_destroy(q);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long frag_mem_limit(const struct fqdir *fqdir)
{
 return atomic_long_read(&fqdir->mem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sub_frag_mem_limit(struct fqdir *fqdir, long val)
{
 atomic_long_sub(val, &fqdir->mem);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void add_frag_mem_limit(struct fqdir *fqdir, long val)
{
 atomic_long_add(val, &fqdir->mem);
}
# 180 "../include/net/inet_frag.h"
extern const u8 ip_frag_ecn_table[16];





int inet_frag_queue_insert(struct inet_frag_queue *q, struct sk_buff *skb,
      int offset, int end);
void *inet_frag_reasm_prepare(struct inet_frag_queue *q, struct sk_buff *skb,
         struct sk_buff *parent);
void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head,
       void *reasm_data, bool try_coalesce);
struct sk_buff *inet_frag_pull_head(struct inet_frag_queue *q);
# 11 "../include/net/netns/ipv4.h" 2




struct ctl_table_header;
struct ipv4_devconf;
struct fib_rules_ops;
struct hlist_head;
struct fib_table;
struct sock;
struct local_ports {
 u32 range;
 bool warned;
};

struct ping_group_range {
 seqlock_t lock;
 kgid_t range[2];
};

struct inet_hashinfo;

struct inet_timewait_death_row {
 refcount_t tw_refcount;


 struct inet_hashinfo *hashinfo;
 int sysctl_max_tw_buckets;
};

struct tcp_fastopen_context;
# 50 "../include/net/netns/ipv4.h"
struct netns_ipv4 {






 __u8 __cacheline_group_begin__netns_ipv4_read_tx[0];
 u8 sysctl_tcp_early_retrans;
 u8 sysctl_tcp_tso_win_divisor;
 u8 sysctl_tcp_tso_rtt_log;
 u8 sysctl_tcp_autocorking;
 int sysctl_tcp_min_snd_mss;
 unsigned int sysctl_tcp_notsent_lowat;
 int sysctl_tcp_limit_output_bytes;
 int sysctl_tcp_min_rtt_wlen;
 int sysctl_tcp_wmem[3];
 u8 sysctl_ip_fwd_use_pmtu;
 __u8 __cacheline_group_end__netns_ipv4_read_tx[0];


 __u8 __cacheline_group_begin__netns_ipv4_read_txrx[0];
 u8 sysctl_tcp_moderate_rcvbuf;
 __u8 __cacheline_group_end__netns_ipv4_read_txrx[0];


 __u8 __cacheline_group_begin__netns_ipv4_read_rx[0];
 u8 sysctl_ip_early_demux;
 u8 sysctl_tcp_early_demux;
 int sysctl_tcp_reordering;
 int sysctl_tcp_rmem[3];
 __u8 __cacheline_group_end__netns_ipv4_read_rx[0];

 struct inet_timewait_death_row tcp_death_row;
 struct udp_table *udp_table;


 struct ctl_table_header *forw_hdr;
 struct ctl_table_header *frags_hdr;
 struct ctl_table_header *ipv4_hdr;
 struct ctl_table_header *route_hdr;
 struct ctl_table_header *xfrm4_hdr;

 struct ipv4_devconf *devconf_all;
 struct ipv4_devconf *devconf_dflt;
 struct ip_ra_chain *ra_chain;
 struct mutex ra_mutex;







 bool fib_has_custom_local_routes;
 bool fib_offload_disabled;
 u8 sysctl_tcp_shrink_window;

 atomic_t fib_num_tclassid_users;

 struct hlist_head *fib_table_hash;
 struct sock *fibnl;

 struct sock *mc_autojoin_sk;

 struct inet_peer_base *peers;
 struct fqdir *fqdir;

 u8 sysctl_icmp_echo_ignore_all;
 u8 sysctl_icmp_echo_enable_probe;
 u8 sysctl_icmp_echo_ignore_broadcasts;
 u8 sysctl_icmp_ignore_bogus_error_responses;
 u8 sysctl_icmp_errors_use_inbound_ifaddr;
 int sysctl_icmp_ratelimit;
 int sysctl_icmp_ratemask;

 u32 ip_rt_min_pmtu;
 int ip_rt_mtu_expires;
 int ip_rt_min_advmss;

 struct local_ports ip_local_ports;

 u8 sysctl_tcp_ecn;
 u8 sysctl_tcp_ecn_fallback;

 u8 sysctl_ip_default_ttl;
 u8 sysctl_ip_no_pmtu_disc;
 u8 sysctl_ip_fwd_update_priority;
 u8 sysctl_ip_nonlocal_bind;
 u8 sysctl_ip_autobind_reuse;

 u8 sysctl_ip_dynaddr;



 u8 sysctl_udp_early_demux;

 u8 sysctl_nexthop_compat_mode;

 u8 sysctl_fwmark_reflect;
 u8 sysctl_tcp_fwmark_accept;



 u8 sysctl_tcp_mtu_probing;
 int sysctl_tcp_mtu_probe_floor;
 int sysctl_tcp_base_mss;
 int sysctl_tcp_probe_threshold;
 u32 sysctl_tcp_probe_interval;

 int sysctl_tcp_keepalive_time;
 int sysctl_tcp_keepalive_intvl;
 u8 sysctl_tcp_keepalive_probes;

 u8 sysctl_tcp_syn_retries;
 u8 sysctl_tcp_synack_retries;
 u8 sysctl_tcp_syncookies;
 u8 sysctl_tcp_migrate_req;
 u8 sysctl_tcp_comp_sack_nr;
 u8 sysctl_tcp_backlog_ack_defer;
 u8 sysctl_tcp_pingpong_thresh;

 u8 sysctl_tcp_retries1;
 u8 sysctl_tcp_retries2;
 u8 sysctl_tcp_orphan_retries;
 u8 sysctl_tcp_tw_reuse;
 int sysctl_tcp_fin_timeout;
 u8 sysctl_tcp_sack;
 u8 sysctl_tcp_window_scaling;
 u8 sysctl_tcp_timestamps;
 int sysctl_tcp_rto_min_us;
 u8 sysctl_tcp_recovery;
 u8 sysctl_tcp_thin_linear_timeouts;
 u8 sysctl_tcp_slow_start_after_idle;
 u8 sysctl_tcp_retrans_collapse;
 u8 sysctl_tcp_stdurg;
 u8 sysctl_tcp_rfc1337;
 u8 sysctl_tcp_abort_on_overflow;
 u8 sysctl_tcp_fack;
 int sysctl_tcp_max_reordering;
 int sysctl_tcp_adv_win_scale;
 u8 sysctl_tcp_dsack;
 u8 sysctl_tcp_app_win;
 u8 sysctl_tcp_frto;
 u8 sysctl_tcp_nometrics_save;
 u8 sysctl_tcp_no_ssthresh_metrics_save;
 u8 sysctl_tcp_workaround_signed_windows;
 int sysctl_tcp_challenge_ack_limit;
 u8 sysctl_tcp_min_tso_segs;
 u8 sysctl_tcp_reflect_tos;
 int sysctl_tcp_invalid_ratelimit;
 int sysctl_tcp_pacing_ss_ratio;
 int sysctl_tcp_pacing_ca_ratio;
 unsigned int sysctl_tcp_child_ehash_entries;
 unsigned long sysctl_tcp_comp_sack_delay_ns;
 unsigned long sysctl_tcp_comp_sack_slack_ns;
 int sysctl_max_syn_backlog;
 int sysctl_tcp_fastopen;
 const struct tcp_congestion_ops *tcp_congestion_control;
 struct tcp_fastopen_context *tcp_fastopen_ctx;
 unsigned int sysctl_tcp_fastopen_blackhole_timeout;
 atomic_t tfo_active_disable_times;
 unsigned long tfo_active_disable_stamp;
 u32 tcp_challenge_timestamp;
 u32 tcp_challenge_count;
 u8 sysctl_tcp_plb_enabled;
 u8 sysctl_tcp_plb_idle_rehash_rounds;
 u8 sysctl_tcp_plb_rehash_rounds;
 u8 sysctl_tcp_plb_suspend_rto_sec;
 int sysctl_tcp_plb_cong_thresh;

 int sysctl_udp_wmem_min;
 int sysctl_udp_rmem_min;

 u8 sysctl_fib_notify_on_flag_change;
 u8 sysctl_tcp_syn_linear_timeouts;





 u8 sysctl_igmp_llm_reports;
 int sysctl_igmp_max_memberships;
 int sysctl_igmp_max_msf;
 int sysctl_igmp_qrv;

 struct ping_group_range ping_group_range;

 atomic_t dev_addr_genid;

 unsigned int sysctl_udp_child_hash_entries;


 unsigned long *sysctl_local_reserved_ports;
 int sysctl_ip_prot_sock;
# 262 "../include/net/netns/ipv4.h"
 struct fib_notifier_ops *notifier_ops;
 unsigned int fib_seq;

 struct fib_notifier_ops *ipmr_notifier_ops;
 unsigned int ipmr_seq;

 atomic_t rt_genid;
 siphash_key_t ip_id_key;
};
# 21 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/ipv6.h" 1
# 10 "../include/net/netns/ipv6.h"
# 1 "../include/net/dst_ops.h" 1







struct dst_entry;
struct kmem_cachep;
struct net_device;
struct sk_buff;
struct sock;
struct net;

struct dst_ops {
 unsigned short family;
 unsigned int gc_thresh;

 void (*gc)(struct dst_ops *ops);
 struct dst_entry * (*check)(struct dst_entry *, __u32 cookie);
 unsigned int (*default_advmss)(const struct dst_entry *);
 unsigned int (*mtu)(const struct dst_entry *);
 u32 * (*cow_metrics)(struct dst_entry *, unsigned long);
 void (*destroy)(struct dst_entry *);
 void (*ifdown)(struct dst_entry *,
       struct net_device *dev);
 void (*negative_advice)(struct sock *sk, struct dst_entry *);
 void (*link_failure)(struct sk_buff *);
 void (*update_pmtu)(struct dst_entry *dst, struct sock *sk,
            struct sk_buff *skb, u32 mtu,
            bool confirm_neigh);
 void (*redirect)(struct dst_entry *dst, struct sock *sk,
         struct sk_buff *skb);
 int (*local_out)(struct net *net, struct sock *sk, struct sk_buff *skb);
 struct neighbour * (*neigh_lookup)(const struct dst_entry *dst,
      struct sk_buff *skb,
      const void *daddr);
 void (*confirm_neigh)(const struct dst_entry *dst,
       const void *daddr);

 struct kmem_cache *kmem_cachep;

 struct percpu_counter pcpuc_entries;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_entries_get_fast(struct dst_ops *dst)
{
 return percpu_counter_read_positive(&dst->pcpuc_entries);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_entries_get_slow(struct dst_ops *dst)
{
 return percpu_counter_sum_positive(&dst->pcpuc_entries);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_entries_add(struct dst_ops *dst, int val)
{
 percpu_counter_add_batch(&dst->pcpuc_entries, val,
     32);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_entries_init(struct dst_ops *dst)
{
 return percpu_counter_init(&dst->pcpuc_entries, 0, GFP_KERNEL);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_entries_destroy(struct dst_ops *dst)
{
 percpu_counter_destroy(&dst->pcpuc_entries);
}
# 11 "../include/net/netns/ipv6.h" 2
# 1 "../include/uapi/linux/icmpv6.h" 1







struct icmp6hdr {

 __u8 icmp6_type;
 __u8 icmp6_code;
 __sum16 icmp6_cksum;


 union {
  __be32 un_data32[1];
  __be16 un_data16[2];
  __u8 un_data8[4];

  struct icmpv6_echo {
   __be16 identifier;
   __be16 sequence;
  } u_echo;

                struct icmpv6_nd_advt {

                        __u32 reserved:5,
                          override:1,
                          solicited:1,
                          router:1,
     reserved2:24;
# 40 "../include/uapi/linux/icmpv6.h"
                } u_nd_advt;

                struct icmpv6_nd_ra {
   __u8 hop_limit;

   __u8 reserved:3,
     router_pref:2,
     home_agent:1,
     other:1,
     managed:1;
# 60 "../include/uapi/linux/icmpv6.h"
   __be16 rt_lifetime;
                } u_nd_ra;

 } icmp6_dataun;
# 81 "../include/uapi/linux/icmpv6.h"
};
# 162 "../include/uapi/linux/icmpv6.h"
struct icmp6_filter {
 __u32 data[8];
};
# 12 "../include/net/netns/ipv6.h" 2

struct ctl_table_header;

struct netns_sysctl_ipv6 {

 struct ctl_table_header *hdr;
 struct ctl_table_header *route_hdr;
 struct ctl_table_header *icmp_hdr;
 struct ctl_table_header *frags_hdr;
 struct ctl_table_header *xfrm6_hdr;

 int flush_delay;
 int ip6_rt_max_size;
 int ip6_rt_gc_min_interval;
 int ip6_rt_gc_timeout;
 int ip6_rt_gc_interval;
 int ip6_rt_gc_elasticity;
 int ip6_rt_mtu_expires;
 int ip6_rt_min_advmss;
 u32 multipath_hash_fields;
 u8 multipath_hash_policy;
 u8 bindv6only;
 u8 flowlabel_consistency;
 u8 auto_flowlabels;
 int icmpv6_time;
 u8 icmpv6_echo_ignore_all;
 u8 icmpv6_echo_ignore_multicast;
 u8 icmpv6_echo_ignore_anycast;
 unsigned long icmpv6_ratemask[(((255 + 1) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
 unsigned long *icmpv6_ratemask_ptr;
 u8 anycast_src_echo_reply;
 u8 ip_nonlocal_bind;
 u8 fwmark_reflect;
 u8 flowlabel_state_ranges;
 int idgen_retries;
 int idgen_delay;
 int flowlabel_reflect;
 int max_dst_opts_cnt;
 int max_hbh_opts_cnt;
 int max_dst_opts_len;
 int max_hbh_opts_len;
 int seg6_flowlabel;
 u32 ioam6_id;
 u64 ioam6_id_wide;
 u8 skip_notify_on_dev_down;
 u8 fib_notify_on_flag_change;
 u8 icmpv6_error_anycast_as_unicast;
};

struct netns_ipv6 {

 struct dst_ops ip6_dst_ops;

 struct netns_sysctl_ipv6 sysctl;
 struct ipv6_devconf *devconf_all;
 struct ipv6_devconf *devconf_dflt;
 struct inet_peer_base *peers;
 struct fqdir *fqdir;
 struct fib6_info *fib6_null_entry;
 struct rt6_info *ip6_null_entry;
 struct rt6_statistics *rt6_stats;
 struct timer_list ip6_fib_timer;
 struct hlist_head *fib_table_hash;
 struct fib6_table *fib6_main_tbl;
 struct list_head fib6_walkers;
 rwlock_t fib6_walker_lock;
 spinlock_t fib6_gc_lock;
 atomic_t ip6_rt_gc_expire;
 unsigned long ip6_rt_last_gc;
 unsigned char flowlabel_has_excl;
# 93 "../include/net/netns/ipv6.h"
 struct sock *ndisc_sk;
 struct sock *tcp_sk;
 struct sock *igmp_sk;
 struct sock *mc_autojoin_sk;

 struct hlist_head *inet6_addr_lst;
 spinlock_t addrconf_hash_lock;
 struct delayed_work addr_chk_work;
# 110 "../include/net/netns/ipv6.h"
 atomic_t dev_addr_genid;
 atomic_t fib6_sernum;
 struct seg6_pernet_data *seg6_data;
 struct fib_notifier_ops *notifier_ops;
 struct fib_notifier_ops *ip6mr_notifier_ops;
 unsigned int ipmr_seq;
 struct {
  struct hlist_head head;
  spinlock_t lock;
  u32 seq;
 } ip6addrlbl_table;
 struct ioam6_pernet_data *ioam6_data;
};
# 22 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/nexthop.h" 1
# 12 "../include/net/netns/nexthop.h"
struct netns_nexthop {
 struct rb_root rb_root;
 struct hlist_head *devhash;

 unsigned int seq;
 u32 last_id_allocated;
 struct blocking_notifier_head notifier_chain;
};
# 23 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/ieee802154_6lowpan.h" 1
# 11 "../include/net/netns/ieee802154_6lowpan.h"
struct netns_sysctl_lowpan {

 struct ctl_table_header *frags_hdr;

};

struct netns_ieee802154_lowpan {
 struct netns_sysctl_lowpan sysctl;
 struct fqdir *fqdir;
};
# 24 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/sctp.h" 1







struct sock;
struct proc_dir_entry;
struct sctp_mib;
struct ctl_table_header;

struct netns_sctp {
 __typeof__(struct sctp_mib) *sctp_statistics;


 struct proc_dir_entry *proc_net_sctp;


 struct ctl_table_header *sysctl_header;





 struct sock *ctl_sock;


 struct sock *udp4_sock;
 struct sock *udp6_sock;

 int udp_port;

 int encap_port;







 struct list_head local_addr_list;
 struct list_head addr_waitq;
 struct timer_list addr_wq_timer;
 struct list_head auto_asconf_splist;

 spinlock_t addr_wq_lock;


 spinlock_t local_addr_lock;
# 62 "../include/net/netns/sctp.h"
 unsigned int rto_initial;
 unsigned int rto_min;
 unsigned int rto_max;




 int rto_alpha;
 int rto_beta;


 int max_burst;


 int cookie_preserve_enable;


 char *sctp_hmac_alg;


 unsigned int valid_cookie_life;


 unsigned int sack_timeout;


 unsigned int hb_interval;


 unsigned int probe_interval;





 int max_retrans_association;
 int max_retrans_path;
 int max_retrans_init;




 int pf_retrans;





 int ps_retrans;






 int pf_enable;







 int pf_expose;






 int sndbuf_policy;






 int rcvbuf_policy;

 int default_auto_asconf;


 int addip_enable;
 int addip_noauth;


 int prsctp_enable;


 int reconf_enable;


 int auth_enable;


 int intl_enable;


 int ecn_enable;
# 169 "../include/net/netns/sctp.h"
 int scope_policy;




 int rwnd_upd_shift;


 unsigned long max_autoclose;




};
# 25 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/netfilter.h" 1




# 1 "../include/linux/netfilter_defs.h" 1




# 1 "../include/uapi/linux/netfilter.h" 1






# 1 "../include/linux/in.h" 1
# 19 "../include/linux/in.h"
# 1 "../include/uapi/linux/in.h" 1
# 29 "../include/uapi/linux/in.h"
enum {
  IPPROTO_IP = 0,

  IPPROTO_ICMP = 1,

  IPPROTO_IGMP = 2,

  IPPROTO_IPIP = 4,

  IPPROTO_TCP = 6,

  IPPROTO_EGP = 8,

  IPPROTO_PUP = 12,

  IPPROTO_UDP = 17,

  IPPROTO_IDP = 22,

  IPPROTO_TP = 29,

  IPPROTO_DCCP = 33,

  IPPROTO_IPV6 = 41,

  IPPROTO_RSVP = 46,

  IPPROTO_GRE = 47,

  IPPROTO_ESP = 50,

  IPPROTO_AH = 51,

  IPPROTO_MTP = 92,

  IPPROTO_BEETPH = 94,

  IPPROTO_ENCAP = 98,

  IPPROTO_PIM = 103,

  IPPROTO_COMP = 108,

  IPPROTO_L2TP = 115,

  IPPROTO_SCTP = 132,

  IPPROTO_UDPLITE = 136,

  IPPROTO_MPLS = 137,

  IPPROTO_ETHERNET = 143,

  IPPROTO_RAW = 255,

  IPPROTO_SMC = 256,

  IPPROTO_MPTCP = 262,

  IPPROTO_MAX
};




struct in_addr {
 __be32 s_addr;
};
# 180 "../include/uapi/linux/in.h"
struct ip_mreq {
 struct in_addr imr_multiaddr;
 struct in_addr imr_interface;
};

struct ip_mreqn {
 struct in_addr imr_multiaddr;
 struct in_addr imr_address;
 int imr_ifindex;
};

struct ip_mreq_source {
 __be32 imr_multiaddr;
 __be32 imr_interface;
 __be32 imr_sourceaddr;
};

struct ip_msfilter {
 __be32 imsf_multiaddr;
 __be32 imsf_interface;
 __u32 imsf_fmode;
 __u32 imsf_numsrc;
 union {
  __be32 imsf_slist[1];
  struct { struct { } __empty_imsf_slist_flex; __be32 imsf_slist_flex[]; };
 };
};





struct group_req {
 __u32 gr_interface;
 struct __kernel_sockaddr_storage gr_group;
};

struct group_source_req {
 __u32 gsr_interface;
 struct __kernel_sockaddr_storage gsr_group;
 struct __kernel_sockaddr_storage gsr_source;
};

struct group_filter {
 union {
  struct {
   __u32 gf_interface_aux;
   struct __kernel_sockaddr_storage gf_group_aux;
   __u32 gf_fmode_aux;
   __u32 gf_numsrc_aux;
   struct __kernel_sockaddr_storage gf_slist[1];
  };
  struct {
   __u32 gf_interface;
   struct __kernel_sockaddr_storage gf_group;
   __u32 gf_fmode;
   __u32 gf_numsrc;
   struct __kernel_sockaddr_storage gf_slist_flex[];
  };
 };
};







struct in_pktinfo {
 int ipi_ifindex;
 struct in_addr ipi_spec_dst;
 struct in_addr ipi_addr;
};





struct sockaddr_in {
  __kernel_sa_family_t sin_family;
  __be16 sin_port;
  struct in_addr sin_addr;


  unsigned char __pad[16 - sizeof(short int) -
   sizeof(unsigned short int) - sizeof(struct in_addr)];
};
# 20 "../include/linux/in.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int proto_ports_offset(int proto)
{
 switch (proto) {
 case IPPROTO_TCP:
 case IPPROTO_UDP:
 case IPPROTO_DCCP:
 case IPPROTO_ESP:
 case IPPROTO_SCTP:
 case IPPROTO_UDPLITE:
  return 0;
 case IPPROTO_AH:
  return 4;
 default:
   return -22; /* -EINVAL */
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_loopback(__be32 addr)
{
 return (addr & htonl(0xff000000)) == htonl(0x7f000000); /* 127.0.0.0/8 */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_multicast(__be32 addr)
{
 return (addr & htonl(0xf0000000)) == htonl(0xe0000000); /* 224.0.0.0/4 */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_local_multicast(__be32 addr)
{
 return (addr & htonl(0xffffff00)) == htonl(0xe0000000); /* 224.0.0.0/24 */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_lbcast(__be32 addr)
{

 return addr == htonl(0xffffffff); /* INADDR_BROADCAST */
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_all_snoopers(__be32 addr)
{
 return addr == (( __be32)(__u32)(__builtin_constant_p((0xe000006aU)) ? ((__u32)( (((__u32)((0xe000006aU)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xe000006aU)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xe000006aU)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xe000006aU)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xe000006aU))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_zeronet(__be32 addr)
{
 return (addr == 0);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_private_10(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xff000000)) ? ((__u32)( (((__u32)((0xff000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xff000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xff000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xff000000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xff000000))))) == (( __be32)(__u32)(__builtin_constant_p((0x0a000000)) ? ((__u32)( (((__u32)((0x0a000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0a000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0a000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0a000000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0a000000))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_private_172(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xfff00000)) ? ((__u32)( (((__u32)((0xfff00000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xfff00000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xfff00000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xfff00000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xfff00000))))) == (( __be32)(__u32)(__builtin_constant_p((0xac100000)) ? ((__u32)( (((__u32)((0xac100000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xac100000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xac100000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xac100000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xac100000))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_private_192(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xffff0000)) ? ((__u32)( (((__u32)((0xffff0000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xffff0000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xffff0000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xffff0000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xffff0000))))) == (( __be32)(__u32)(__builtin_constant_p((0xc0a80000)) ? ((__u32)( (((__u32)((0xc0a80000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xc0a80000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xc0a80000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xc0a80000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xc0a80000))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_linklocal_169(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xffff0000)) ? ((__u32)( (((__u32)((0xffff0000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xffff0000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xffff0000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xffff0000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xffff0000))))) == (( __be32)(__u32)(__builtin_constant_p((0xa9fe0000)) ? ((__u32)( (((__u32)((0xa9fe0000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xa9fe0000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xa9fe0000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xa9fe0000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xa9fe0000))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_anycast_6to4(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xffffff00)) ? ((__u32)( (((__u32)((0xffffff00)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xffffff00)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xffffff00)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xffffff00)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xffffff00))))) == (( __be32)(__u32)(__builtin_constant_p((0xc0586300)) ? ((__u32)( (((__u32)((0xc0586300)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xc0586300)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xc0586300)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xc0586300)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xc0586300))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_test_192(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xffffff00)) ? ((__u32)( (((__u32)((0xffffff00)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xffffff00)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xffffff00)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xffffff00)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xffffff00))))) == (( __be32)(__u32)(__builtin_constant_p((0xc0000200)) ? ((__u32)( (((__u32)((0xc0000200)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xc0000200)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xc0000200)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xc0000200)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xc0000200))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv4_is_test_198(__be32 addr)
{
 return (addr & (( __be32)(__u32)(__builtin_constant_p((0xfffe0000)) ? ((__u32)( (((__u32)((0xfffe0000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xfffe0000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xfffe0000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xfffe0000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xfffe0000))))) == (( __be32)(__u32)(__builtin_constant_p((0xc6120000)) ? ((__u32)( (((__u32)((0xc6120000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xc6120000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xc6120000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xc6120000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xc6120000))));
}
# 8 "../include/uapi/linux/netfilter.h" 2
# 42 "../include/uapi/linux/netfilter.h"
enum nf_inet_hooks {
 NF_INET_PRE_ROUTING,
 NF_INET_LOCAL_IN,
 NF_INET_FORWARD,
 NF_INET_LOCAL_OUT,
 NF_INET_POST_ROUTING,
 NF_INET_NUMHOOKS,
 NF_INET_INGRESS = NF_INET_NUMHOOKS,
};

enum nf_dev_hooks {
 NF_NETDEV_INGRESS,
 NF_NETDEV_EGRESS,
 NF_NETDEV_NUMHOOKS
};

enum {
 NFPROTO_UNSPEC = 0,
 NFPROTO_INET = 1,
 NFPROTO_IPV4 = 2,
 NFPROTO_ARP = 3,
 NFPROTO_NETDEV = 5,
 NFPROTO_BRIDGE = 7,
 NFPROTO_IPV6 = 10,



 NFPROTO_NUMPROTO,
};

union nf_inet_addr {
 __u32 all[4];
 __be32 ip;
 __be32 ip6[4];
 struct in_addr in;
 struct in6_addr in6;
};
# 6 "../include/linux/netfilter_defs.h" 2
# 6 "../include/net/netns/netfilter.h" 2

struct proc_dir_entry;
struct nf_logger;
struct nf_queue_handler;

struct netns_nf {

 struct proc_dir_entry *proc_netfilter;

 const struct nf_logger *nf_loggers[NFPROTO_NUMPROTO];

 struct ctl_table_header *nf_log_dir_header;

 struct ctl_table_header *nf_lwtnl_dir_header;


 struct nf_hook_entries *hooks_ipv4[NF_INET_NUMHOOKS];
 struct nf_hook_entries *hooks_ipv6[NF_INET_NUMHOOKS];
# 36 "../include/net/netns/netfilter.h"
};
# 26 "../include/net/net_namespace.h" 2






# 1 "../include/net/netns/nftables.h" 1




struct netns_nftables {
 u8 gencursor;
};
# 33 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/xfrm.h" 1








# 1 "../include/uapi/linux/xfrm.h" 1
# 16 "../include/uapi/linux/xfrm.h"
typedef union {
 __be32 a4;
 __be32 a6[4];
 struct in6_addr in6;
} xfrm_address_t;





struct xfrm_id {
 xfrm_address_t daddr;
 __be32 spi;
 __u8 proto;
};

struct xfrm_sec_ctx {
 __u8 ctx_doi;
 __u8 ctx_alg;
 __u16 ctx_len;
 __u32 ctx_sid;
 char ctx_str[] __attribute__((__counted_by__(ctx_len)));
};
# 50 "../include/uapi/linux/xfrm.h"
struct xfrm_selector {
 xfrm_address_t daddr;
 xfrm_address_t saddr;
 __be16 dport;
 __be16 dport_mask;
 __be16 sport;
 __be16 sport_mask;
 __u16 family;
 __u8 prefixlen_d;
 __u8 prefixlen_s;
 __u8 proto;
 int ifindex;
 __kernel_uid32_t user;
};



struct xfrm_lifetime_cfg {
 __u64 soft_byte_limit;
 __u64 hard_byte_limit;
 __u64 soft_packet_limit;
 __u64 hard_packet_limit;
 __u64 soft_add_expires_seconds;
 __u64 hard_add_expires_seconds;
 __u64 soft_use_expires_seconds;
 __u64 hard_use_expires_seconds;
};

struct xfrm_lifetime_cur {
 __u64 bytes;
 __u64 packets;
 __u64 add_time;
 __u64 use_time;
};

struct xfrm_replay_state {
 __u32 oseq;
 __u32 seq;
 __u32 bitmap;
};



struct xfrm_replay_state_esn {
 unsigned int bmp_len;
 __u32 oseq;
 __u32 seq;
 __u32 oseq_hi;
 __u32 seq_hi;
 __u32 replay_window;
 __u32 bmp[];
};

struct xfrm_algo {
 char alg_name[64];
 unsigned int alg_key_len;
 char alg_key[];
};

struct xfrm_algo_auth {
 char alg_name[64];
 unsigned int alg_key_len;
 unsigned int alg_trunc_len;
 char alg_key[];
};

struct xfrm_algo_aead {
 char alg_name[64];
 unsigned int alg_key_len;
 unsigned int alg_icv_len;
 char alg_key[];
};

struct xfrm_stats {
 __u32 replay_window;
 __u32 replay;
 __u32 integrity_failed;
};

enum {
 XFRM_POLICY_TYPE_MAIN = 0,
 XFRM_POLICY_TYPE_SUB = 1,
 XFRM_POLICY_TYPE_MAX = 2,
 XFRM_POLICY_TYPE_ANY = 255
};

enum {
 XFRM_POLICY_IN = 0,
 XFRM_POLICY_OUT = 1,
 XFRM_POLICY_FWD = 2,
 XFRM_POLICY_MASK = 3,
 XFRM_POLICY_MAX = 3
};

enum xfrm_sa_dir {
 XFRM_SA_DIR_IN = 1,
 XFRM_SA_DIR_OUT = 2
};

enum {
 XFRM_SHARE_ANY,
 XFRM_SHARE_SESSION,
 XFRM_SHARE_USER,
 XFRM_SHARE_UNIQUE
};
# 164 "../include/uapi/linux/xfrm.h"
enum {
 XFRM_MSG_BASE = 0x10,

 XFRM_MSG_NEWSA = 0x10,

 XFRM_MSG_DELSA,

 XFRM_MSG_GETSA,


 XFRM_MSG_NEWPOLICY,

 XFRM_MSG_DELPOLICY,

 XFRM_MSG_GETPOLICY,


 XFRM_MSG_ALLOCSPI,

 XFRM_MSG_ACQUIRE,

 XFRM_MSG_EXPIRE,


 XFRM_MSG_UPDPOLICY,

 XFRM_MSG_UPDSA,


 XFRM_MSG_POLEXPIRE,


 XFRM_MSG_FLUSHSA,

 XFRM_MSG_FLUSHPOLICY,


 XFRM_MSG_NEWAE,

 XFRM_MSG_GETAE,


 XFRM_MSG_REPORT,


 XFRM_MSG_MIGRATE,


 XFRM_MSG_NEWSADINFO,

 XFRM_MSG_GETSADINFO,


 XFRM_MSG_NEWSPDINFO,

 XFRM_MSG_GETSPDINFO,


 XFRM_MSG_MAPPING,


 XFRM_MSG_SETDEFAULT,

 XFRM_MSG_GETDEFAULT,

 __XFRM_MSG_MAX
};
# 239 "../include/uapi/linux/xfrm.h"
struct xfrm_user_sec_ctx {
 __u16 len;
 __u16 exttype;
 __u8 ctx_alg;
 __u8 ctx_doi;
 __u16 ctx_len;
};

struct xfrm_user_tmpl {
 struct xfrm_id id;
 __u16 family;
 xfrm_address_t saddr;
 __u32 reqid;
 __u8 mode;
 __u8 share;
 __u8 optional;
 __u32 aalgos;
 __u32 ealgos;
 __u32 calgos;
};

struct xfrm_encap_tmpl {
 __u16 encap_type;
 __be16 encap_sport;
 __be16 encap_dport;
 xfrm_address_t encap_oa;
};


enum xfrm_ae_ftype_t {
 XFRM_AE_UNSPEC,
 XFRM_AE_RTHR=1,
 XFRM_AE_RVAL=2,
 XFRM_AE_LVAL=4,
 XFRM_AE_ETHR=8,
 XFRM_AE_CR=16,
 XFRM_AE_CE=32,
 XFRM_AE_CU=64,
 __XFRM_AE_MAX


};

struct xfrm_userpolicy_type {
 __u8 type;
 __u16 reserved1;
 __u8 reserved2;
};


enum xfrm_attr_type_t {
 XFRMA_UNSPEC,
 XFRMA_ALG_AUTH,
 XFRMA_ALG_CRYPT,
 XFRMA_ALG_COMP,
 XFRMA_ENCAP,
 XFRMA_TMPL,
 XFRMA_SA,
 XFRMA_POLICY,
 XFRMA_SEC_CTX,
 XFRMA_LTIME_VAL,
 XFRMA_REPLAY_VAL,
 XFRMA_REPLAY_THRESH,
 XFRMA_ETIMER_THRESH,
 XFRMA_SRCADDR,
 XFRMA_COADDR,
 XFRMA_LASTUSED,
 XFRMA_POLICY_TYPE,
 XFRMA_MIGRATE,
 XFRMA_ALG_AEAD,
 XFRMA_KMADDRESS,
 XFRMA_ALG_AUTH_TRUNC,
 XFRMA_MARK,
 XFRMA_TFCPAD,
 XFRMA_REPLAY_ESN_VAL,
 XFRMA_SA_EXTRA_FLAGS,
 XFRMA_PROTO,
 XFRMA_ADDRESS_FILTER,
 XFRMA_PAD,
 XFRMA_OFFLOAD_DEV,
 XFRMA_SET_MARK,
 XFRMA_SET_MARK_MASK,
 XFRMA_IF_ID,
 XFRMA_MTIMER_THRESH,
 XFRMA_SA_DIR,
 XFRMA_NAT_KEEPALIVE_INTERVAL,
 __XFRMA_MAX



};

struct xfrm_mark {
 __u32 v;
 __u32 m;
};

enum xfrm_sadattr_type_t {
 XFRMA_SAD_UNSPEC,
 XFRMA_SAD_CNT,
 XFRMA_SAD_HINFO,
 __XFRMA_SAD_MAX


};

struct xfrmu_sadhinfo {
 __u32 sadhcnt;
 __u32 sadhmcnt;
};

enum xfrm_spdattr_type_t {
 XFRMA_SPD_UNSPEC,
 XFRMA_SPD_INFO,
 XFRMA_SPD_HINFO,
 XFRMA_SPD_IPV4_HTHRESH,
 XFRMA_SPD_IPV6_HTHRESH,
 __XFRMA_SPD_MAX


};

struct xfrmu_spdinfo {
 __u32 incnt;
 __u32 outcnt;
 __u32 fwdcnt;
 __u32 inscnt;
 __u32 outscnt;
 __u32 fwdscnt;
};

struct xfrmu_spdhinfo {
 __u32 spdhcnt;
 __u32 spdhmcnt;
};

struct xfrmu_spdhthresh {
 __u8 lbits;
 __u8 rbits;
};

struct xfrm_usersa_info {
 struct xfrm_selector sel;
 struct xfrm_id id;
 xfrm_address_t saddr;
 struct xfrm_lifetime_cfg lft;
 struct xfrm_lifetime_cur curlft;
 struct xfrm_stats stats;
 __u32 seq;
 __u32 reqid;
 __u16 family;
 __u8 mode;
 __u8 replay_window;
 __u8 flags;
# 401 "../include/uapi/linux/xfrm.h"
};




struct xfrm_usersa_id {
 xfrm_address_t daddr;
 __be32 spi;
 __u16 family;
 __u8 proto;
};

struct xfrm_aevent_id {
 struct xfrm_usersa_id sa_id;
 xfrm_address_t saddr;
 __u32 flags;
 __u32 reqid;
};

struct xfrm_userspi_info {
 struct xfrm_usersa_info info;
 __u32 min;
 __u32 max;
};

struct xfrm_userpolicy_info {
 struct xfrm_selector sel;
 struct xfrm_lifetime_cfg lft;
 struct xfrm_lifetime_cur curlft;
 __u32 priority;
 __u32 index;
 __u8 dir;
 __u8 action;


 __u8 flags;



 __u8 share;
};

struct xfrm_userpolicy_id {
 struct xfrm_selector sel;
 __u32 index;
 __u8 dir;
};

struct xfrm_user_acquire {
 struct xfrm_id id;
 xfrm_address_t saddr;
 struct xfrm_selector sel;
 struct xfrm_userpolicy_info policy;
 __u32 aalgos;
 __u32 ealgos;
 __u32 calgos;
 __u32 seq;
};

struct xfrm_user_expire {
 struct xfrm_usersa_info state;
 __u8 hard;
};

struct xfrm_user_polexpire {
 struct xfrm_userpolicy_info pol;
 __u8 hard;
};

struct xfrm_usersa_flush {
 __u8 proto;
};

struct xfrm_user_report {
 __u8 proto;
 struct xfrm_selector sel;
};



struct xfrm_user_kmaddress {
 xfrm_address_t local;
 xfrm_address_t remote;
 __u32 reserved;
 __u16 family;
};

struct xfrm_user_migrate {
 xfrm_address_t old_daddr;
 xfrm_address_t old_saddr;
 xfrm_address_t new_daddr;
 xfrm_address_t new_saddr;
 __u8 proto;
 __u8 mode;
 __u16 reserved;
 __u32 reqid;
 __u16 old_family;
 __u16 new_family;
};

struct xfrm_user_mapping {
 struct xfrm_usersa_id id;
 __u32 reqid;
 xfrm_address_t old_saddr;
 xfrm_address_t new_saddr;
 __be16 old_sport;
 __be16 new_sport;
};

struct xfrm_address_filter {
 xfrm_address_t saddr;
 xfrm_address_t daddr;
 __u16 family;
 __u8 splen;
 __u8 dplen;
};

struct xfrm_user_offload {
 int ifindex;
 __u8 flags;
};
# 537 "../include/uapi/linux/xfrm.h"
struct xfrm_userpolicy_default {



 __u8 in;
 __u8 fwd;
 __u8 out;
};
# 555 "../include/uapi/linux/xfrm.h"
enum xfrm_nlgroups {
 XFRMNLGRP_NONE,

 XFRMNLGRP_ACQUIRE,

 XFRMNLGRP_EXPIRE,

 XFRMNLGRP_SA,

 XFRMNLGRP_POLICY,

 XFRMNLGRP_AEVENTS,

 XFRMNLGRP_REPORT,

 XFRMNLGRP_MIGRATE,

 XFRMNLGRP_MAPPING,

 __XFRMNLGRP_MAX
};
# 10 "../include/net/netns/xfrm.h" 2


struct ctl_table_header;

struct xfrm_policy_hash {
 struct hlist_head *table;
 unsigned int hmask;
 u8 dbits4;
 u8 sbits4;
 u8 dbits6;
 u8 sbits6;
};

struct xfrm_policy_hthresh {
 struct work_struct work;
 seqlock_t lock;
 u8 lbits4;
 u8 rbits4;
 u8 lbits6;
 u8 rbits6;
};

struct netns_xfrm {
 struct list_head state_all;
# 42 "../include/net/netns/xfrm.h"
 struct hlist_head *state_bydst;
 struct hlist_head *state_bysrc;
 struct hlist_head *state_byspi;
 struct hlist_head *state_byseq;
 unsigned int state_hmask;
 unsigned int state_num;
 struct work_struct state_hash_work;

 struct list_head policy_all;
 struct hlist_head *policy_byidx;
 unsigned int policy_idx_hmask;
 unsigned int idx_generator;
 struct hlist_head policy_inexact[XFRM_POLICY_MAX];
 struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX];
 unsigned int policy_count[XFRM_POLICY_MAX * 2];
 struct work_struct policy_hash_work;
 struct xfrm_policy_hthresh policy_hthresh;
 struct list_head inexact_bins;


 struct sock *nlsk;
 struct sock *nlsk_stash;

 u32 sysctl_aevent_etime;
 u32 sysctl_aevent_rseqth;
 int sysctl_larval_drop;
 u32 sysctl_acq_expires;

 u8 policy_default[XFRM_POLICY_MAX];


 struct ctl_table_header *sysctl_hdr;


 struct dst_ops xfrm4_dst_ops;

 struct dst_ops xfrm6_dst_ops;

 spinlock_t xfrm_state_lock;
 seqcount_spinlock_t xfrm_state_hash_generation;
 seqcount_spinlock_t xfrm_policy_hash_generation;

 spinlock_t xfrm_policy_lock;
 struct mutex xfrm_cfg_mutex;
 struct delayed_work nat_keepalive_work;
};
# 34 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/mpls.h" 1
# 11 "../include/net/netns/mpls.h"
struct mpls_route;
struct ctl_table_header;

struct netns_mpls {
 int ip_ttl_propagate;
 int default_ttl;
 size_t platform_labels;
 struct mpls_route * *platform_label;

 struct ctl_table_header *ctl;
};
# 35 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/can.h" 1
# 12 "../include/net/netns/can.h"
struct can_dev_rcv_lists;
struct can_pkg_stats;
struct can_rcv_lists_stats;

struct netns_can {

 struct proc_dir_entry *proc_dir;
 struct proc_dir_entry *pde_stats;
 struct proc_dir_entry *pde_reset_stats;
 struct proc_dir_entry *pde_rcvlist_all;
 struct proc_dir_entry *pde_rcvlist_fil;
 struct proc_dir_entry *pde_rcvlist_inv;
 struct proc_dir_entry *pde_rcvlist_sff;
 struct proc_dir_entry *pde_rcvlist_eff;
 struct proc_dir_entry *pde_rcvlist_err;
 struct proc_dir_entry *bcmproc_dir;



 struct can_dev_rcv_lists *rx_alldev_list;
 spinlock_t rcvlists_lock;
 struct timer_list stattimer;
 struct can_pkg_stats *pkg_stats;
 struct can_rcv_lists_stats *rcv_lists_stats;


 struct hlist_head cgw_list;
};
# 36 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/xdp.h" 1







struct netns_xdp {
 struct mutex lock;
 struct hlist_head list;
};
# 37 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/smc.h" 1






struct smc_stats_rsn;
struct smc_stats;
struct netns_smc {

 struct smc_stats *smc_stats;

 struct mutex mutex_fback_rsn;
 struct smc_stats_rsn *fback_rsn;

 bool limit_smc_hs;

 struct ctl_table_header *smc_hdr;

 unsigned int sysctl_autocorking_size;
 unsigned int sysctl_smcr_buf_type;
 int sysctl_smcr_testlink_time;
 int sysctl_wmem;
 int sysctl_rmem;
 int sysctl_max_links_per_lgr;
 int sysctl_max_conns_per_lgr;
};
# 38 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/bpf.h" 1
# 11 "../include/net/netns/bpf.h"
struct bpf_prog;
struct bpf_prog_array;

enum netns_bpf_attach_type {
 NETNS_BPF_INVALID = -1,
 NETNS_BPF_FLOW_DISSECTOR = 0,
 NETNS_BPF_SK_LOOKUP,
 MAX_NETNS_BPF_ATTACH_TYPE
};

struct netns_bpf {

 struct bpf_prog_array *run_array[MAX_NETNS_BPF_ATTACH_TYPE];
 struct bpf_prog *progs[MAX_NETNS_BPF_ATTACH_TYPE];
 struct list_head links[MAX_NETNS_BPF_ATTACH_TYPE];
};
# 39 "../include/net/net_namespace.h" 2
# 1 "../include/net/netns/mctp.h" 1
# 12 "../include/net/netns/mctp.h"
struct netns_mctp {

 struct list_head routes;





 struct mutex bind_lock;
 struct hlist_head binds;




 spinlock_t keys_lock;
 struct hlist_head keys;


 unsigned int default_net;


 struct mutex neigh_lock;
 struct list_head neighbours;
};
# 40 "../include/net/net_namespace.h" 2
# 1 "../include/net/net_trackers.h" 1



# 1 "../include/linux/ref_tracker.h" 1






# 1 "../include/linux/stackdepot.h" 1
# 25 "../include/linux/stackdepot.h"
typedef u32 depot_stack_handle_t;
# 44 "../include/linux/stackdepot.h"
union handle_parts {
 depot_stack_handle_t handle;
 struct {
  u32 pool_index_plus_1 : ((sizeof(depot_stack_handle_t) * 8) - (2 + 14 - 4) - 5);
  u32 offset : (2 + 14 - 4);
  u32 extra : 5;
 };
};

struct stack_record {
 struct list_head hash_list;
 u32 hash;
 u32 size;
 union handle_parts handle;
 refcount_t count;
 union {
  unsigned long entries[64];
  struct {
# 72 "../include/linux/stackdepot.h"
   struct list_head free_list;
   unsigned long rcu_state;
  };
 };
};


typedef u32 depot_flags_t;
# 113 "../include/linux/stackdepot.h"
int stack_depot_init(void);

void __attribute__((__section__(".init.text"))) __attribute__((__cold__)) stack_depot_request_early_init(void);


int __attribute__((__section__(".init.text"))) __attribute__((__cold__)) stack_depot_early_init(void);
# 157 "../include/linux/stackdepot.h"
depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
         unsigned int nr_entries,
         gfp_t gfp_flags,
         depot_flags_t depot_flags);
# 177 "../include/linux/stackdepot.h"
depot_stack_handle_t stack_depot_save(unsigned long *entries,
          unsigned int nr_entries, gfp_t gfp_flags);
# 189 "../include/linux/stackdepot.h"
struct stack_record *__stack_depot_get_stack_record(depot_stack_handle_t handle);
# 199 "../include/linux/stackdepot.h"
unsigned int stack_depot_fetch(depot_stack_handle_t handle,
          unsigned long **entries);






void stack_depot_print(depot_stack_handle_t stack);
# 219 "../include/linux/stackdepot.h"
int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
         int spaces);
# 232 "../include/linux/stackdepot.h"
void stack_depot_put(depot_stack_handle_t handle);
# 245 "../include/linux/stackdepot.h"
depot_stack_handle_t __attribute__((__warn_unused_result__)) stack_depot_set_extra_bits(
   depot_stack_handle_t handle, unsigned int extra_bits);
# 255 "../include/linux/stackdepot.h"
unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
# 8 "../include/linux/ref_tracker.h" 2

struct ref_tracker;

struct ref_tracker_dir {

 spinlock_t lock;
 unsigned int quarantine_avail;
 refcount_t untracked;
 refcount_t no_tracker;
 bool dead;
 struct list_head list;
 struct list_head quarantine;
 char name[32];

};



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ref_tracker_dir_init(struct ref_tracker_dir *dir,
     unsigned int quarantine_count,
     const char *name)
{
 INIT_LIST_HEAD(&dir->list);
 INIT_LIST_HEAD(&dir->quarantine);
 do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&dir->lock), "&dir->lock", &__key, LD_WAIT_CONFIG); } while (0);
 dir->quarantine_avail = quarantine_count;
 dir->dead = false;
 refcount_set(&dir->untracked, 1);
 refcount_set(&dir->no_tracker, 1);
 sized_strscpy(dir->name, name, sizeof(dir->name));
 stack_depot_init();
}

void ref_tracker_dir_exit(struct ref_tracker_dir *dir);

void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
      unsigned int display_limit);

void ref_tracker_dir_print(struct ref_tracker_dir *dir,
      unsigned int display_limit);

int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size);

int ref_tracker_alloc(struct ref_tracker_dir *dir,
        struct ref_tracker **trackerp, gfp_t gfp);

int ref_tracker_free(struct ref_tracker_dir *dir,
       struct ref_tracker **trackerp);
# 5 "../include/net/net_trackers.h" 2


typedef struct ref_tracker *netdevice_tracker;





typedef struct ref_tracker *netns_tracker;
# 41 "../include/net/net_namespace.h" 2






struct user_namespace;
struct proc_dir_entry;
struct net_device;
struct sock;
struct ctl_table_header;
struct net_generic;
struct uevent_sock;
struct netns_ipvs;
struct bpf_prog;





struct net {



 refcount_t passive;


 spinlock_t rules_mod_lock;

 unsigned int dev_base_seq;
 u32 ifindex;

 spinlock_t nsid_lock;
 atomic_t fnhe_genid;

 struct list_head list;
 struct list_head exit_list;





 struct llist_node cleanup_list;


 struct key_tag *key_domain;

 struct user_namespace *user_ns;
 struct ucounts *ucounts;
 struct idr netns_ids;

 struct ns_common ns;
 struct ref_tracker_dir refcnt_tracker;
 struct ref_tracker_dir notrefcnt_tracker;


 struct list_head dev_base_head;
 struct proc_dir_entry *proc_net;
 struct proc_dir_entry *proc_net_stat;


 struct ctl_table_set sysctls;


 struct sock *rtnl;
 struct sock *genl_sock;

 struct uevent_sock *uevent_sock;

 struct hlist_head *dev_name_head;
 struct hlist_head *dev_index_head;
 struct xarray dev_by_index;
 struct raw_notifier_head netdev_chain;




 u32 hash_mix;

 struct net_device *loopback_dev;


 struct list_head rules_ops;

 struct netns_core core;
 struct netns_mib mib;
 struct netns_packet packet;

 struct netns_unix unx;

 struct netns_nexthop nexthop;
 struct netns_ipv4 ipv4;

 struct netns_ipv6 ipv6;





 struct netns_sctp sctp;
# 155 "../include/net/net_namespace.h"
 struct sk_buff_head wext_nlevents;

 struct net_generic *gen;


 struct netns_bpf bpf;



 struct netns_xfrm xfrm;


 u64 net_cookie;





 struct netns_mpls mpls;


 struct netns_can can;


 struct netns_xdp xdp;


 struct netns_mctp mctp;


 struct sock *crypto_nlsk;

 struct sock *diag_nlsk;

 struct netns_smc smc;

} ;

# 1 "../include/linux/seq_file_net.h" 1







struct net;
extern struct net init_net;

struct seq_net_private {

 struct net *net;
 netns_tracker ns_tracker;

};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *seq_file_net(struct seq_file *seq)
{

 return ((struct seq_net_private *)seq->private)->net;



}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *seq_file_single_net(struct seq_file *seq)
{

 return (struct net *)seq->private;



}
# 194 "../include/net/net_namespace.h" 2


extern struct net init_net;


struct net *copy_net_ns(unsigned long flags, struct user_namespace *user_ns,
   struct net *old_net);

void net_ns_get_ownership(const struct net *net, kuid_t *uid, kgid_t *gid);

void net_ns_barrier(void);

struct ns_common *get_net_ns(struct ns_common *ns);
struct net *get_net_ns_by_fd(int fd);
# 240 "../include/net/net_namespace.h"
extern struct list_head net_namespace_list;

struct net *get_net_ns_by_pid(pid_t pid);


void ipx_register_sysctl(void);
void ipx_unregister_sysctl(void);






void __put_net(struct net *net);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *get_net(struct net *net)
{
 refcount_inc(&net->ns.count);
 return net;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *maybe_get_net(struct net *net)
{





 if (!refcount_inc_not_zero(&net->ns.count))
  net = ((void *)0);
 return net;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_net(struct net *net)
{
 if (refcount_dec_and_test(&net->ns.count))
  __put_net(net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int net_eq(const struct net *net1, const struct net *net2)
{
 return net1 == net2;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int check_net(const struct net *net)
{
 return refcount_read(&net->ns.count) != 0;
}

void net_drop_ns(void *);
# 325 "../include/net/net_namespace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netns_tracker_alloc(struct net *net,
      netns_tracker *tracker,
      bool refcounted,
      gfp_t gfp)
{

 ref_tracker_alloc(refcounted ? &net->refcnt_tracker :
           &net->notrefcnt_tracker,
     tracker, gfp);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netns_tracker_alloc(struct net *net, netns_tracker *tracker,
           gfp_t gfp)
{
 __netns_tracker_alloc(net, tracker, true, gfp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netns_tracker_free(struct net *net,
     netns_tracker *tracker,
     bool refcounted)
{

       ref_tracker_free(refcounted ? &net->refcnt_tracker :
         &net->notrefcnt_tracker, tracker);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *get_net_track(struct net *net,
     netns_tracker *tracker, gfp_t gfp)
{
 get_net(net);
 netns_tracker_alloc(net, tracker, gfp);
 return net;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void put_net_track(struct net *net, netns_tracker *tracker)
{
 __netns_tracker_free(net, tracker, true);
 put_net(net);
}

typedef struct {

 struct net *net;

} possible_net_t;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void write_pnet(possible_net_t *pnet, struct net *net)
{

 do { uintptr_t _r_a_p__v = (uintptr_t)(net); ; if (__builtin_constant_p(net) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_332(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((pnet->net)) == sizeof(char) || sizeof((pnet->net)) == sizeof(short) || sizeof((pnet->net)) == sizeof(int) || sizeof((pnet->net)) == sizeof(long)) || sizeof((pnet->net)) == sizeof(long long))) __compiletime_assert_332(); } while (0); do { *(volatile typeof((pnet->net)) *)&((pnet->net)) = ((typeof(pnet->net))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_333(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&pnet->net) == sizeof(char) || sizeof(*&pnet->net) == sizeof(short) || sizeof(*&pnet->net) == sizeof(int) || sizeof(*&pnet->net) == sizeof(long)) || sizeof(*&pnet->net) == sizeof(long long))) __compiletime_assert_333(); } while (0); do { *(volatile typeof(*&pnet->net) *)&(*&pnet->net) = ((typeof(*((typeof(pnet->net))_r_a_p__v)) *)((typeof(pnet->net))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *read_pnet(const possible_net_t *pnet)
{

 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((true))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/net_namespace.h", 383, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(pnet->net)) *)((pnet->net))); });



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net *read_pnet_rcu(possible_net_t *pnet)
{

 return ({ typeof(*(pnet->net)) *__UNIQUE_ID_rcu334 = (typeof(*(pnet->net)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_335(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((pnet->net)) == sizeof(char) || sizeof((pnet->net)) == sizeof(short) || sizeof((pnet->net)) == sizeof(int) || sizeof((pnet->net)) == sizeof(long)) || sizeof((pnet->net)) == sizeof(long long))) __compiletime_assert_335(); } while (0); (*(const volatile typeof( _Generic(((pnet->net)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((pnet->net)))) *)&((pnet->net))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/net_namespace.h", 392, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(pnet->net)) *)(__UNIQUE_ID_rcu334)); });



}
# 418 "../include/net/net_namespace.h"
int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp);
int peernet2id(const struct net *net, struct net *peer);
bool peernet_has_id(const struct net *net, struct net *peer);
struct net *get_net_ns_by_id(const struct net *net, int id);

struct pernet_operations {
 struct list_head list;
# 447 "../include/net/net_namespace.h"
 int (*init)(struct net *net);
 void (*pre_exit)(struct net *net);
 void (*exit)(struct net *net);
 void (*exit_batch)(struct list_head *net_exit_list);

 void (*exit_batch_rtnl)(struct list_head *net_exit_list,
    struct list_head *dev_kill_list);
 unsigned int *id;
 size_t size;
};
# 477 "../include/net/net_namespace.h"
int register_pernet_subsys(struct pernet_operations *);
void unregister_pernet_subsys(struct pernet_operations *);
int register_pernet_device(struct pernet_operations *);
void unregister_pernet_device(struct pernet_operations *);

struct ctl_table;




int net_sysctl_init(void);
struct ctl_table_header *register_net_sysctl_sz(struct net *net, const char *path,
          struct ctl_table *table, size_t table_size);
void unregister_net_sysctl_table(struct ctl_table_header *header);
# 503 "../include/net/net_namespace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rt_genid_ipv4(const struct net *net)
{
 return atomic_read(&net->ipv4.rt_genid);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rt_genid_ipv6(const struct net *net)
{
 return atomic_read(&net->ipv6.fib6_sernum);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rt_genid_bump_ipv4(struct net *net)
{
 atomic_inc(&net->ipv4.rt_genid);
}

extern void (*__fib6_flush_trees)(struct net *net);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rt_genid_bump_ipv6(struct net *net)
{
 if (__fib6_flush_trees)
  __fib6_flush_trees(net);
}
# 536 "../include/net/net_namespace.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rt_genid_bump_all(struct net *net)
{
 rt_genid_bump_ipv4(net);
 rt_genid_bump_ipv6(net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fnhe_genid(const struct net *net)
{
 return atomic_read(&net->fnhe_genid);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fnhe_genid_bump(struct net *net)
{
 atomic_inc(&net->fnhe_genid);
}


void net_ns_init(void);
# 39 "../include/linux/netdevice.h" 2



# 1 "../include/net/netprio_cgroup.h" 1
# 44 "../include/net/netprio_cgroup.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 task_netprioidx(struct task_struct *p)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_update_netprioidx(struct sock_cgroup_data *skcd)
{
}
# 43 "../include/linux/netdevice.h" 2


# 1 "../include/uapi/linux/neighbour.h" 1







struct ndmsg {
 __u8 ndm_family;
 __u8 ndm_pad1;
 __u16 ndm_pad2;
 __s32 ndm_ifindex;
 __u16 ndm_state;
 __u8 ndm_flags;
 __u8 ndm_type;
};

enum {
 NDA_UNSPEC,
 NDA_DST,
 NDA_LLADDR,
 NDA_CACHEINFO,
 NDA_PROBES,
 NDA_VLAN,
 NDA_PORT,
 NDA_VNI,
 NDA_IFINDEX,
 NDA_MASTER,
 NDA_LINK_NETNSID,
 NDA_SRC_VNI,
 NDA_PROTOCOL,
 NDA_NH_ID,
 NDA_FDB_EXT_ATTRS,
 NDA_FLAGS_EXT,
 NDA_NDM_STATE_MASK,
 NDA_NDM_FLAGS_MASK,
 __NDA_MAX
};
# 97 "../include/uapi/linux/neighbour.h"
struct nda_cacheinfo {
 __u32 ndm_confirmed;
 __u32 ndm_used;
 __u32 ndm_updated;
 __u32 ndm_refcnt;
};
# 129 "../include/uapi/linux/neighbour.h"
struct ndt_stats {
 __u64 ndts_allocs;
 __u64 ndts_destroys;
 __u64 ndts_hash_grows;
 __u64 ndts_res_failed;
 __u64 ndts_lookups;
 __u64 ndts_hits;
 __u64 ndts_rcv_probes_mcast;
 __u64 ndts_rcv_probes_ucast;
 __u64 ndts_periodic_gc_runs;
 __u64 ndts_forced_gc_runs;
 __u64 ndts_table_fulls;
};

enum {
 NDTPA_UNSPEC,
 NDTPA_IFINDEX,
 NDTPA_REFCNT,
 NDTPA_REACHABLE_TIME,
 NDTPA_BASE_REACHABLE_TIME,
 NDTPA_RETRANS_TIME,
 NDTPA_GC_STALETIME,
 NDTPA_DELAY_PROBE_TIME,
 NDTPA_QUEUE_LEN,
 NDTPA_APP_PROBES,
 NDTPA_UCAST_PROBES,
 NDTPA_MCAST_PROBES,
 NDTPA_ANYCAST_DELAY,
 NDTPA_PROXY_DELAY,
 NDTPA_PROXY_QLEN,
 NDTPA_LOCKTIME,
 NDTPA_QUEUE_LENBYTES,
 NDTPA_MCAST_REPROBES,
 NDTPA_PAD,
 NDTPA_INTERVAL_PROBE_TIME_MS,
 __NDTPA_MAX
};


struct ndtmsg {
 __u8 ndtm_family;
 __u8 ndtm_pad1;
 __u16 ndtm_pad2;
};

struct ndt_config {
 __u16 ndtc_key_len;
 __u16 ndtc_entry_size;
 __u32 ndtc_entries;
 __u32 ndtc_last_flush;
 __u32 ndtc_last_rand;
 __u32 ndtc_hash_rnd;
 __u32 ndtc_hash_mask;
 __u32 ndtc_hash_chain_gc;
 __u32 ndtc_proxy_qlen;
};

enum {
 NDTA_UNSPEC,
 NDTA_NAME,
 NDTA_THRESH1,
 NDTA_THRESH2,
 NDTA_THRESH3,
 NDTA_CONFIG,
 NDTA_PARMS,
 NDTA_STATS,
 NDTA_GC_INTERVAL,
 NDTA_PAD,
 __NDTA_MAX
};






enum {
 FDB_NOTIFY_BIT = (1 << 0),
 FDB_NOTIFY_INACTIVE_BIT = (1 << 1)
};







enum {
 NFEA_UNSPEC,
 NFEA_ACTIVITY_NOTIFY,
 NFEA_DONT_REFRESH,
 __NFEA_MAX
};
# 46 "../include/linux/netdevice.h" 2

# 1 "../include/uapi/linux/netdevice.h" 1
# 32 "../include/uapi/linux/netdevice.h"
# 1 "../include/linux/if_link.h" 1




# 1 "../include/uapi/linux/if_link.h" 1








struct rtnl_link_stats {
 __u32 rx_packets;
 __u32 tx_packets;
 __u32 rx_bytes;
 __u32 tx_bytes;
 __u32 rx_errors;
 __u32 tx_errors;
 __u32 rx_dropped;
 __u32 tx_dropped;
 __u32 multicast;
 __u32 collisions;

 __u32 rx_length_errors;
 __u32 rx_over_errors;
 __u32 rx_crc_errors;
 __u32 rx_frame_errors;
 __u32 rx_fifo_errors;
 __u32 rx_missed_errors;


 __u32 tx_aborted_errors;
 __u32 tx_carrier_errors;
 __u32 tx_fifo_errors;
 __u32 tx_heartbeat_errors;
 __u32 tx_window_errors;


 __u32 rx_compressed;
 __u32 tx_compressed;

 __u32 rx_nohandler;
};
# 218 "../include/uapi/linux/if_link.h"
struct rtnl_link_stats64 {
 __u64 rx_packets;
 __u64 tx_packets;
 __u64 rx_bytes;
 __u64 tx_bytes;
 __u64 rx_errors;
 __u64 tx_errors;
 __u64 rx_dropped;
 __u64 tx_dropped;
 __u64 multicast;
 __u64 collisions;


 __u64 rx_length_errors;
 __u64 rx_over_errors;
 __u64 rx_crc_errors;
 __u64 rx_frame_errors;
 __u64 rx_fifo_errors;
 __u64 rx_missed_errors;


 __u64 tx_aborted_errors;
 __u64 tx_carrier_errors;
 __u64 tx_fifo_errors;
 __u64 tx_heartbeat_errors;
 __u64 tx_window_errors;


 __u64 rx_compressed;
 __u64 tx_compressed;
 __u64 rx_nohandler;

 __u64 rx_otherhost_dropped;
};




struct rtnl_hw_stats64 {
 __u64 rx_packets;
 __u64 tx_packets;
 __u64 rx_bytes;
 __u64 tx_bytes;
 __u64 rx_errors;
 __u64 tx_errors;
 __u64 rx_dropped;
 __u64 tx_dropped;
 __u64 multicast;
};


struct rtnl_link_ifmap {
 __u64 mem_start;
 __u64 mem_end;
 __u64 base_addr;
 __u16 irq;
 __u8 dma;
 __u8 port;
};
# 296 "../include/uapi/linux/if_link.h"
enum {
 IFLA_UNSPEC,
 IFLA_ADDRESS,
 IFLA_BROADCAST,
 IFLA_IFNAME,
 IFLA_MTU,
 IFLA_LINK,
 IFLA_QDISC,
 IFLA_STATS,
 IFLA_COST,

 IFLA_PRIORITY,

 IFLA_MASTER,

 IFLA_WIRELESS,

 IFLA_PROTINFO,

 IFLA_TXQLEN,

 IFLA_MAP,

 IFLA_WEIGHT,

 IFLA_OPERSTATE,
 IFLA_LINKMODE,
 IFLA_LINKINFO,

 IFLA_NET_NS_PID,
 IFLA_IFALIAS,
 IFLA_NUM_VF,
 IFLA_VFINFO_LIST,
 IFLA_STATS64,
 IFLA_VF_PORTS,
 IFLA_PORT_SELF,
 IFLA_AF_SPEC,
 IFLA_GROUP,
 IFLA_NET_NS_FD,
 IFLA_EXT_MASK,
 IFLA_PROMISCUITY,

 IFLA_NUM_TX_QUEUES,
 IFLA_NUM_RX_QUEUES,
 IFLA_CARRIER,
 IFLA_PHYS_PORT_ID,
 IFLA_CARRIER_CHANGES,
 IFLA_PHYS_SWITCH_ID,
 IFLA_LINK_NETNSID,
 IFLA_PHYS_PORT_NAME,
 IFLA_PROTO_DOWN,
 IFLA_GSO_MAX_SEGS,
 IFLA_GSO_MAX_SIZE,
 IFLA_PAD,
 IFLA_XDP,
 IFLA_EVENT,
 IFLA_NEW_NETNSID,
 IFLA_IF_NETNSID,
 IFLA_TARGET_NETNSID = IFLA_IF_NETNSID,
 IFLA_CARRIER_UP_COUNT,
 IFLA_CARRIER_DOWN_COUNT,
 IFLA_NEW_IFINDEX,
 IFLA_MIN_MTU,
 IFLA_MAX_MTU,
 IFLA_PROP_LIST,
 IFLA_ALT_IFNAME,
 IFLA_PERM_ADDRESS,
 IFLA_PROTO_DOWN_REASON,




 IFLA_PARENT_DEV_NAME,
 IFLA_PARENT_DEV_BUS_NAME,
 IFLA_GRO_MAX_SIZE,
 IFLA_TSO_MAX_SIZE,
 IFLA_TSO_MAX_SEGS,
 IFLA_ALLMULTI,

 IFLA_DEVLINK_PORT,

 IFLA_GSO_IPV4_MAX_SIZE,
 IFLA_GRO_IPV4_MAX_SIZE,
 IFLA_DPLL_PIN,
 __IFLA_MAX
};




enum {
 IFLA_PROTO_DOWN_REASON_UNSPEC,
 IFLA_PROTO_DOWN_REASON_MASK,
 IFLA_PROTO_DOWN_REASON_VALUE,

 __IFLA_PROTO_DOWN_REASON_CNT,
 IFLA_PROTO_DOWN_REASON_MAX = __IFLA_PROTO_DOWN_REASON_CNT - 1
};







enum {
 IFLA_INET_UNSPEC,
 IFLA_INET_CONF,
 __IFLA_INET_MAX,
};
# 439 "../include/uapi/linux/if_link.h"
enum {
 IFLA_INET6_UNSPEC,
 IFLA_INET6_FLAGS,
 IFLA_INET6_CONF,
 IFLA_INET6_STATS,
 IFLA_INET6_MCAST,
 IFLA_INET6_CACHEINFO,
 IFLA_INET6_ICMP6STATS,
 IFLA_INET6_TOKEN,
 IFLA_INET6_ADDR_GEN_MODE,
 IFLA_INET6_RA_MTU,
 __IFLA_INET6_MAX
};



enum in6_addr_gen_mode {
 IN6_ADDR_GEN_MODE_EUI64,
 IN6_ADDR_GEN_MODE_NONE,
 IN6_ADDR_GEN_MODE_STABLE_PRIVACY,
 IN6_ADDR_GEN_MODE_RANDOM,
};
# 744 "../include/uapi/linux/if_link.h"
enum {
 IFLA_BR_UNSPEC,
 IFLA_BR_FORWARD_DELAY,
 IFLA_BR_HELLO_TIME,
 IFLA_BR_MAX_AGE,
 IFLA_BR_AGEING_TIME,
 IFLA_BR_STP_STATE,
 IFLA_BR_PRIORITY,
 IFLA_BR_VLAN_FILTERING,
 IFLA_BR_VLAN_PROTOCOL,
 IFLA_BR_GROUP_FWD_MASK,
 IFLA_BR_ROOT_ID,
 IFLA_BR_BRIDGE_ID,
 IFLA_BR_ROOT_PORT,
 IFLA_BR_ROOT_PATH_COST,
 IFLA_BR_TOPOLOGY_CHANGE,
 IFLA_BR_TOPOLOGY_CHANGE_DETECTED,
 IFLA_BR_HELLO_TIMER,
 IFLA_BR_TCN_TIMER,
 IFLA_BR_TOPOLOGY_CHANGE_TIMER,
 IFLA_BR_GC_TIMER,
 IFLA_BR_GROUP_ADDR,
 IFLA_BR_FDB_FLUSH,
 IFLA_BR_MCAST_ROUTER,
 IFLA_BR_MCAST_SNOOPING,
 IFLA_BR_MCAST_QUERY_USE_IFADDR,
 IFLA_BR_MCAST_QUERIER,
 IFLA_BR_MCAST_HASH_ELASTICITY,
 IFLA_BR_MCAST_HASH_MAX,
 IFLA_BR_MCAST_LAST_MEMBER_CNT,
 IFLA_BR_MCAST_STARTUP_QUERY_CNT,
 IFLA_BR_MCAST_LAST_MEMBER_INTVL,
 IFLA_BR_MCAST_MEMBERSHIP_INTVL,
 IFLA_BR_MCAST_QUERIER_INTVL,
 IFLA_BR_MCAST_QUERY_INTVL,
 IFLA_BR_MCAST_QUERY_RESPONSE_INTVL,
 IFLA_BR_MCAST_STARTUP_QUERY_INTVL,
 IFLA_BR_NF_CALL_IPTABLES,
 IFLA_BR_NF_CALL_IP6TABLES,
 IFLA_BR_NF_CALL_ARPTABLES,
 IFLA_BR_VLAN_DEFAULT_PVID,
 IFLA_BR_PAD,
 IFLA_BR_VLAN_STATS_ENABLED,
 IFLA_BR_MCAST_STATS_ENABLED,
 IFLA_BR_MCAST_IGMP_VERSION,
 IFLA_BR_MCAST_MLD_VERSION,
 IFLA_BR_VLAN_STATS_PER_PORT,
 IFLA_BR_MULTI_BOOLOPT,
 IFLA_BR_MCAST_QUERIER_STATE,
 IFLA_BR_FDB_N_LEARNED,
 IFLA_BR_FDB_MAX_LEARNED,
 __IFLA_BR_MAX,
};



struct ifla_bridge_id {
 __u8 prio[2];
 __u8 addr[6];
};
# 815 "../include/uapi/linux/if_link.h"
enum {
 BRIDGE_MODE_UNSPEC,
 BRIDGE_MODE_HAIRPIN,
};
# 1051 "../include/uapi/linux/if_link.h"
enum {
 IFLA_BRPORT_UNSPEC,
 IFLA_BRPORT_STATE,
 IFLA_BRPORT_PRIORITY,
 IFLA_BRPORT_COST,
 IFLA_BRPORT_MODE,
 IFLA_BRPORT_GUARD,
 IFLA_BRPORT_PROTECT,
 IFLA_BRPORT_FAST_LEAVE,
 IFLA_BRPORT_LEARNING,
 IFLA_BRPORT_UNICAST_FLOOD,
 IFLA_BRPORT_PROXYARP,
 IFLA_BRPORT_LEARNING_SYNC,
 IFLA_BRPORT_PROXYARP_WIFI,
 IFLA_BRPORT_ROOT_ID,
 IFLA_BRPORT_BRIDGE_ID,
 IFLA_BRPORT_DESIGNATED_PORT,
 IFLA_BRPORT_DESIGNATED_COST,
 IFLA_BRPORT_ID,
 IFLA_BRPORT_NO,
 IFLA_BRPORT_TOPOLOGY_CHANGE_ACK,
 IFLA_BRPORT_CONFIG_PENDING,
 IFLA_BRPORT_MESSAGE_AGE_TIMER,
 IFLA_BRPORT_FORWARD_DELAY_TIMER,
 IFLA_BRPORT_HOLD_TIMER,
 IFLA_BRPORT_FLUSH,
 IFLA_BRPORT_MULTICAST_ROUTER,
 IFLA_BRPORT_PAD,
 IFLA_BRPORT_MCAST_FLOOD,
 IFLA_BRPORT_MCAST_TO_UCAST,
 IFLA_BRPORT_VLAN_TUNNEL,
 IFLA_BRPORT_BCAST_FLOOD,
 IFLA_BRPORT_GROUP_FWD_MASK,
 IFLA_BRPORT_NEIGH_SUPPRESS,
 IFLA_BRPORT_ISOLATED,
 IFLA_BRPORT_BACKUP_PORT,
 IFLA_BRPORT_MRP_RING_OPEN,
 IFLA_BRPORT_MRP_IN_OPEN,
 IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT,
 IFLA_BRPORT_MCAST_EHT_HOSTS_CNT,
 IFLA_BRPORT_LOCKED,
 IFLA_BRPORT_MAB,
 IFLA_BRPORT_MCAST_N_GROUPS,
 IFLA_BRPORT_MCAST_MAX_GROUPS,
 IFLA_BRPORT_NEIGH_VLAN_SUPPRESS,
 IFLA_BRPORT_BACKUP_NHID,
 __IFLA_BRPORT_MAX
};


struct ifla_cacheinfo {
 __u32 max_reasm_len;
 __u32 tstamp;
 __u32 reachable_time;
 __u32 retrans_time;
};

enum {
 IFLA_INFO_UNSPEC,
 IFLA_INFO_KIND,
 IFLA_INFO_DATA,
 IFLA_INFO_XSTATS,
 IFLA_INFO_SLAVE_KIND,
 IFLA_INFO_SLAVE_DATA,
 __IFLA_INFO_MAX,
};





enum {
 IFLA_VLAN_UNSPEC,
 IFLA_VLAN_ID,
 IFLA_VLAN_FLAGS,
 IFLA_VLAN_EGRESS_QOS,
 IFLA_VLAN_INGRESS_QOS,
 IFLA_VLAN_PROTOCOL,
 __IFLA_VLAN_MAX,
};



struct ifla_vlan_flags {
 __u32 flags;
 __u32 mask;
};

enum {
 IFLA_VLAN_QOS_UNSPEC,
 IFLA_VLAN_QOS_MAPPING,
 __IFLA_VLAN_QOS_MAX
};



struct ifla_vlan_qos_mapping {
 __u32 from;
 __u32 to;
};


enum {
 IFLA_MACVLAN_UNSPEC,
 IFLA_MACVLAN_MODE,
 IFLA_MACVLAN_FLAGS,
 IFLA_MACVLAN_MACADDR_MODE,
 IFLA_MACVLAN_MACADDR,
 IFLA_MACVLAN_MACADDR_DATA,
 IFLA_MACVLAN_MACADDR_COUNT,
 IFLA_MACVLAN_BC_QUEUE_LEN,
 IFLA_MACVLAN_BC_QUEUE_LEN_USED,
 IFLA_MACVLAN_BC_CUTOFF,
 __IFLA_MACVLAN_MAX,
};



enum macvlan_mode {
 MACVLAN_MODE_PRIVATE = 1,
 MACVLAN_MODE_VEPA = 2,
 MACVLAN_MODE_BRIDGE = 4,
 MACVLAN_MODE_PASSTHRU = 8,
 MACVLAN_MODE_SOURCE = 16,
};

enum macvlan_macaddr_mode {
 MACVLAN_MACADDR_ADD,
 MACVLAN_MACADDR_DEL,
 MACVLAN_MACADDR_FLUSH,
 MACVLAN_MACADDR_SET,
};





enum {
 IFLA_VRF_UNSPEC,
 IFLA_VRF_TABLE,
 __IFLA_VRF_MAX
};



enum {
 IFLA_VRF_PORT_UNSPEC,
 IFLA_VRF_PORT_TABLE,
 __IFLA_VRF_PORT_MAX
};




enum {
 IFLA_MACSEC_UNSPEC,
 IFLA_MACSEC_SCI,
 IFLA_MACSEC_PORT,
 IFLA_MACSEC_ICV_LEN,
 IFLA_MACSEC_CIPHER_SUITE,
 IFLA_MACSEC_WINDOW,
 IFLA_MACSEC_ENCODING_SA,
 IFLA_MACSEC_ENCRYPT,
 IFLA_MACSEC_PROTECT,
 IFLA_MACSEC_INC_SCI,
 IFLA_MACSEC_ES,
 IFLA_MACSEC_SCB,
 IFLA_MACSEC_REPLAY_PROTECT,
 IFLA_MACSEC_VALIDATION,
 IFLA_MACSEC_PAD,
 IFLA_MACSEC_OFFLOAD,
 __IFLA_MACSEC_MAX,
};




enum {
 IFLA_XFRM_UNSPEC,
 IFLA_XFRM_LINK,
 IFLA_XFRM_IF_ID,
 IFLA_XFRM_COLLECT_METADATA,
 __IFLA_XFRM_MAX
};



enum macsec_validation_type {
 MACSEC_VALIDATE_DISABLED = 0,
 MACSEC_VALIDATE_CHECK = 1,
 MACSEC_VALIDATE_STRICT = 2,
 __MACSEC_VALIDATE_END,
 MACSEC_VALIDATE_MAX = __MACSEC_VALIDATE_END - 1,
};

enum macsec_offload {
 MACSEC_OFFLOAD_OFF = 0,
 MACSEC_OFFLOAD_PHY = 1,
 MACSEC_OFFLOAD_MAC = 2,
 __MACSEC_OFFLOAD_END,
 MACSEC_OFFLOAD_MAX = __MACSEC_OFFLOAD_END - 1,
};


enum {
 IFLA_IPVLAN_UNSPEC,
 IFLA_IPVLAN_MODE,
 IFLA_IPVLAN_FLAGS,
 __IFLA_IPVLAN_MAX
};



enum ipvlan_mode {
 IPVLAN_MODE_L2 = 0,
 IPVLAN_MODE_L3,
 IPVLAN_MODE_L3S,
 IPVLAN_MODE_MAX
};





struct tunnel_msg {
 __u8 family;
 __u8 flags;
 __u16 reserved2;
 __u32 ifindex;
};


enum netkit_action {
 NETKIT_NEXT = -1,
 NETKIT_PASS = 0,
 NETKIT_DROP = 2,
 NETKIT_REDIRECT = 7,
};

enum netkit_mode {
 NETKIT_L2,
 NETKIT_L3,
};

enum {
 IFLA_NETKIT_UNSPEC,
 IFLA_NETKIT_PEER_INFO,
 IFLA_NETKIT_PRIMARY,
 IFLA_NETKIT_POLICY,
 IFLA_NETKIT_PEER_POLICY,
 IFLA_NETKIT_MODE,
 __IFLA_NETKIT_MAX,
};
# 1314 "../include/uapi/linux/if_link.h"
enum {
 VNIFILTER_ENTRY_STATS_UNSPEC,
 VNIFILTER_ENTRY_STATS_RX_BYTES,
 VNIFILTER_ENTRY_STATS_RX_PKTS,
 VNIFILTER_ENTRY_STATS_RX_DROPS,
 VNIFILTER_ENTRY_STATS_RX_ERRORS,
 VNIFILTER_ENTRY_STATS_TX_BYTES,
 VNIFILTER_ENTRY_STATS_TX_PKTS,
 VNIFILTER_ENTRY_STATS_TX_DROPS,
 VNIFILTER_ENTRY_STATS_TX_ERRORS,
 VNIFILTER_ENTRY_STATS_PAD,
 __VNIFILTER_ENTRY_STATS_MAX
};


enum {
 VXLAN_VNIFILTER_ENTRY_UNSPEC,
 VXLAN_VNIFILTER_ENTRY_START,
 VXLAN_VNIFILTER_ENTRY_END,
 VXLAN_VNIFILTER_ENTRY_GROUP,
 VXLAN_VNIFILTER_ENTRY_GROUP6,
 VXLAN_VNIFILTER_ENTRY_STATS,
 __VXLAN_VNIFILTER_ENTRY_MAX
};


enum {
 VXLAN_VNIFILTER_UNSPEC,
 VXLAN_VNIFILTER_ENTRY,
 __VXLAN_VNIFILTER_MAX
};


enum {
 IFLA_VXLAN_UNSPEC,
 IFLA_VXLAN_ID,
 IFLA_VXLAN_GROUP,
 IFLA_VXLAN_LINK,
 IFLA_VXLAN_LOCAL,
 IFLA_VXLAN_TTL,
 IFLA_VXLAN_TOS,
 IFLA_VXLAN_LEARNING,
 IFLA_VXLAN_AGEING,
 IFLA_VXLAN_LIMIT,
 IFLA_VXLAN_PORT_RANGE,
 IFLA_VXLAN_PROXY,
 IFLA_VXLAN_RSC,
 IFLA_VXLAN_L2MISS,
 IFLA_VXLAN_L3MISS,
 IFLA_VXLAN_PORT,
 IFLA_VXLAN_GROUP6,
 IFLA_VXLAN_LOCAL6,
 IFLA_VXLAN_UDP_CSUM,
 IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
 IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
 IFLA_VXLAN_REMCSUM_TX,
 IFLA_VXLAN_REMCSUM_RX,
 IFLA_VXLAN_GBP,
 IFLA_VXLAN_REMCSUM_NOPARTIAL,
 IFLA_VXLAN_COLLECT_METADATA,
 IFLA_VXLAN_LABEL,
 IFLA_VXLAN_GPE,
 IFLA_VXLAN_TTL_INHERIT,
 IFLA_VXLAN_DF,
 IFLA_VXLAN_VNIFILTER,
 IFLA_VXLAN_LOCALBYPASS,
 IFLA_VXLAN_LABEL_POLICY,
 __IFLA_VXLAN_MAX
};


struct ifla_vxlan_port_range {
 __be16 low;
 __be16 high;
};

enum ifla_vxlan_df {
 VXLAN_DF_UNSET = 0,
 VXLAN_DF_SET,
 VXLAN_DF_INHERIT,
 __VXLAN_DF_END,
 VXLAN_DF_MAX = __VXLAN_DF_END - 1,
};

enum ifla_vxlan_label_policy {
 VXLAN_LABEL_FIXED = 0,
 VXLAN_LABEL_INHERIT = 1,
 __VXLAN_LABEL_END,
 VXLAN_LABEL_MAX = __VXLAN_LABEL_END - 1,
};


enum {
 IFLA_GENEVE_UNSPEC,
 IFLA_GENEVE_ID,
 IFLA_GENEVE_REMOTE,
 IFLA_GENEVE_TTL,
 IFLA_GENEVE_TOS,
 IFLA_GENEVE_PORT,
 IFLA_GENEVE_COLLECT_METADATA,
 IFLA_GENEVE_REMOTE6,
 IFLA_GENEVE_UDP_CSUM,
 IFLA_GENEVE_UDP_ZERO_CSUM6_TX,
 IFLA_GENEVE_UDP_ZERO_CSUM6_RX,
 IFLA_GENEVE_LABEL,
 IFLA_GENEVE_TTL_INHERIT,
 IFLA_GENEVE_DF,
 IFLA_GENEVE_INNER_PROTO_INHERIT,
 __IFLA_GENEVE_MAX
};


enum ifla_geneve_df {
 GENEVE_DF_UNSET = 0,
 GENEVE_DF_SET,
 GENEVE_DF_INHERIT,
 __GENEVE_DF_END,
 GENEVE_DF_MAX = __GENEVE_DF_END - 1,
};


enum {
 IFLA_BAREUDP_UNSPEC,
 IFLA_BAREUDP_PORT,
 IFLA_BAREUDP_ETHERTYPE,
 IFLA_BAREUDP_SRCPORT_MIN,
 IFLA_BAREUDP_MULTIPROTO_MODE,
 __IFLA_BAREUDP_MAX
};




enum {
 IFLA_PPP_UNSPEC,
 IFLA_PPP_DEV_FD,
 __IFLA_PPP_MAX
};




enum ifla_gtp_role {
 GTP_ROLE_GGSN = 0,
 GTP_ROLE_SGSN,
};

enum {
 IFLA_GTP_UNSPEC,
 IFLA_GTP_FD0,
 IFLA_GTP_FD1,
 IFLA_GTP_PDP_HASHSIZE,
 IFLA_GTP_ROLE,
 IFLA_GTP_CREATE_SOCKETS,
 IFLA_GTP_RESTART_COUNT,
 IFLA_GTP_LOCAL,
 IFLA_GTP_LOCAL6,
 __IFLA_GTP_MAX,
};




enum {
 IFLA_BOND_UNSPEC,
 IFLA_BOND_MODE,
 IFLA_BOND_ACTIVE_SLAVE,
 IFLA_BOND_MIIMON,
 IFLA_BOND_UPDELAY,
 IFLA_BOND_DOWNDELAY,
 IFLA_BOND_USE_CARRIER,
 IFLA_BOND_ARP_INTERVAL,
 IFLA_BOND_ARP_IP_TARGET,
 IFLA_BOND_ARP_VALIDATE,
 IFLA_BOND_ARP_ALL_TARGETS,
 IFLA_BOND_PRIMARY,
 IFLA_BOND_PRIMARY_RESELECT,
 IFLA_BOND_FAIL_OVER_MAC,
 IFLA_BOND_XMIT_HASH_POLICY,
 IFLA_BOND_RESEND_IGMP,
 IFLA_BOND_NUM_PEER_NOTIF,
 IFLA_BOND_ALL_SLAVES_ACTIVE,
 IFLA_BOND_MIN_LINKS,
 IFLA_BOND_LP_INTERVAL,
 IFLA_BOND_PACKETS_PER_SLAVE,
 IFLA_BOND_AD_LACP_RATE,
 IFLA_BOND_AD_SELECT,
 IFLA_BOND_AD_INFO,
 IFLA_BOND_AD_ACTOR_SYS_PRIO,
 IFLA_BOND_AD_USER_PORT_KEY,
 IFLA_BOND_AD_ACTOR_SYSTEM,
 IFLA_BOND_TLB_DYNAMIC_LB,
 IFLA_BOND_PEER_NOTIF_DELAY,
 IFLA_BOND_AD_LACP_ACTIVE,
 IFLA_BOND_MISSED_MAX,
 IFLA_BOND_NS_IP6_TARGET,
 IFLA_BOND_COUPLED_CONTROL,
 __IFLA_BOND_MAX,
};



enum {
 IFLA_BOND_AD_INFO_UNSPEC,
 IFLA_BOND_AD_INFO_AGGREGATOR,
 IFLA_BOND_AD_INFO_NUM_PORTS,
 IFLA_BOND_AD_INFO_ACTOR_KEY,
 IFLA_BOND_AD_INFO_PARTNER_KEY,
 IFLA_BOND_AD_INFO_PARTNER_MAC,
 __IFLA_BOND_AD_INFO_MAX,
};



enum {
 IFLA_BOND_SLAVE_UNSPEC,
 IFLA_BOND_SLAVE_STATE,
 IFLA_BOND_SLAVE_MII_STATUS,
 IFLA_BOND_SLAVE_LINK_FAILURE_COUNT,
 IFLA_BOND_SLAVE_PERM_HWADDR,
 IFLA_BOND_SLAVE_QUEUE_ID,
 IFLA_BOND_SLAVE_AD_AGGREGATOR_ID,
 IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE,
 IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE,
 IFLA_BOND_SLAVE_PRIO,
 __IFLA_BOND_SLAVE_MAX,
};





enum {
 IFLA_VF_INFO_UNSPEC,
 IFLA_VF_INFO,
 __IFLA_VF_INFO_MAX,
};



enum {
 IFLA_VF_UNSPEC,
 IFLA_VF_MAC,
 IFLA_VF_VLAN,
 IFLA_VF_TX_RATE,
 IFLA_VF_SPOOFCHK,
 IFLA_VF_LINK_STATE,
 IFLA_VF_RATE,
 IFLA_VF_RSS_QUERY_EN,


 IFLA_VF_STATS,
 IFLA_VF_TRUST,
 IFLA_VF_IB_NODE_GUID,
 IFLA_VF_IB_PORT_GUID,
 IFLA_VF_VLAN_LIST,
 IFLA_VF_BROADCAST,
 __IFLA_VF_MAX,
};



struct ifla_vf_mac {
 __u32 vf;
 __u8 mac[32];
};

struct ifla_vf_broadcast {
 __u8 broadcast[32];
};

struct ifla_vf_vlan {
 __u32 vf;
 __u32 vlan;
 __u32 qos;
};

enum {
 IFLA_VF_VLAN_INFO_UNSPEC,
 IFLA_VF_VLAN_INFO,
 __IFLA_VF_VLAN_INFO_MAX,
};




struct ifla_vf_vlan_info {
 __u32 vf;
 __u32 vlan;
 __u32 qos;
 __be16 vlan_proto;
};

struct ifla_vf_tx_rate {
 __u32 vf;
 __u32 rate;
};

struct ifla_vf_rate {
 __u32 vf;
 __u32 min_tx_rate;
 __u32 max_tx_rate;
};

struct ifla_vf_spoofchk {
 __u32 vf;
 __u32 setting;
};

struct ifla_vf_guid {
 __u32 vf;
 __u64 guid;
};

enum {
 IFLA_VF_LINK_STATE_AUTO,
 IFLA_VF_LINK_STATE_ENABLE,
 IFLA_VF_LINK_STATE_DISABLE,
 __IFLA_VF_LINK_STATE_MAX,
};

struct ifla_vf_link_state {
 __u32 vf;
 __u32 link_state;
};

struct ifla_vf_rss_query_en {
 __u32 vf;
 __u32 setting;
};

enum {
 IFLA_VF_STATS_RX_PACKETS,
 IFLA_VF_STATS_TX_PACKETS,
 IFLA_VF_STATS_RX_BYTES,
 IFLA_VF_STATS_TX_BYTES,
 IFLA_VF_STATS_BROADCAST,
 IFLA_VF_STATS_MULTICAST,
 IFLA_VF_STATS_PAD,
 IFLA_VF_STATS_RX_DROPPED,
 IFLA_VF_STATS_TX_DROPPED,
 __IFLA_VF_STATS_MAX,
};



struct ifla_vf_trust {
 __u32 vf;
 __u32 setting;
};
# 1680 "../include/uapi/linux/if_link.h"
enum {
 IFLA_VF_PORT_UNSPEC,
 IFLA_VF_PORT,
 __IFLA_VF_PORT_MAX,
};



enum {
 IFLA_PORT_UNSPEC,
 IFLA_PORT_VF,
 IFLA_PORT_PROFILE,
 IFLA_PORT_VSI_TYPE,
 IFLA_PORT_INSTANCE_UUID,
 IFLA_PORT_HOST_UUID,
 IFLA_PORT_REQUEST,
 IFLA_PORT_RESPONSE,
 __IFLA_PORT_MAX,
};







enum {
 PORT_REQUEST_PREASSOCIATE = 0,
 PORT_REQUEST_PREASSOCIATE_RR,
 PORT_REQUEST_ASSOCIATE,
 PORT_REQUEST_DISASSOCIATE,
};

enum {
 PORT_VDP_RESPONSE_SUCCESS = 0,
 PORT_VDP_RESPONSE_INVALID_FORMAT,
 PORT_VDP_RESPONSE_INSUFFICIENT_RESOURCES,
 PORT_VDP_RESPONSE_UNUSED_VTID,
 PORT_VDP_RESPONSE_VTID_VIOLATION,
 PORT_VDP_RESPONSE_VTID_VERSION_VIOALTION,
 PORT_VDP_RESPONSE_OUT_OF_SYNC,

 PORT_PROFILE_RESPONSE_SUCCESS = 0x100,
 PORT_PROFILE_RESPONSE_INPROGRESS,
 PORT_PROFILE_RESPONSE_INVALID,
 PORT_PROFILE_RESPONSE_BADSTATE,
 PORT_PROFILE_RESPONSE_INSUFFICIENT_RESOURCES,
 PORT_PROFILE_RESPONSE_ERROR,
};

struct ifla_port_vsi {
 __u8 vsi_mgr_id;
 __u8 vsi_type_id[3];
 __u8 vsi_type_version;
 __u8 pad[3];
};




enum {
 IFLA_IPOIB_UNSPEC,
 IFLA_IPOIB_PKEY,
 IFLA_IPOIB_MODE,
 IFLA_IPOIB_UMCAST,
 __IFLA_IPOIB_MAX
};

enum {
 IPOIB_MODE_DATAGRAM = 0,
 IPOIB_MODE_CONNECTED = 1,
};







enum {
 HSR_PROTOCOL_HSR,
 HSR_PROTOCOL_PRP,
 HSR_PROTOCOL_MAX,
};

enum {
 IFLA_HSR_UNSPEC,
 IFLA_HSR_SLAVE1,
 IFLA_HSR_SLAVE2,
 IFLA_HSR_MULTICAST_SPEC,
 IFLA_HSR_SUPERVISION_ADDR,
 IFLA_HSR_SEQ_NR,
 IFLA_HSR_VERSION,
 IFLA_HSR_PROTOCOL,


 IFLA_HSR_INTERLINK,
 __IFLA_HSR_MAX,
};





struct if_stats_msg {
 __u8 family;
 __u8 pad1;
 __u16 pad2;
 __u32 ifindex;
 __u32 filter_mask;
};




enum {
 IFLA_STATS_UNSPEC,
 IFLA_STATS_LINK_64,
 IFLA_STATS_LINK_XSTATS,
 IFLA_STATS_LINK_XSTATS_SLAVE,
 IFLA_STATS_LINK_OFFLOAD_XSTATS,
 IFLA_STATS_AF_SPEC,
 __IFLA_STATS_MAX,
};





enum {
 IFLA_STATS_GETSET_UNSPEC,
 IFLA_STATS_GET_FILTERS,


 IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS,
 __IFLA_STATS_GETSET_MAX,
};
# 1825 "../include/uapi/linux/if_link.h"
enum {
 LINK_XSTATS_TYPE_UNSPEC,
 LINK_XSTATS_TYPE_BRIDGE,
 LINK_XSTATS_TYPE_BOND,
 __LINK_XSTATS_TYPE_MAX
};



enum {
 IFLA_OFFLOAD_XSTATS_UNSPEC,
 IFLA_OFFLOAD_XSTATS_CPU_HIT,
 IFLA_OFFLOAD_XSTATS_HW_S_INFO,
 IFLA_OFFLOAD_XSTATS_L3_STATS,
 __IFLA_OFFLOAD_XSTATS_MAX
};


enum {
 IFLA_OFFLOAD_XSTATS_HW_S_INFO_UNSPEC,
 IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST,
 IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED,
 __IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX,
};
# 1866 "../include/uapi/linux/if_link.h"
enum {
 XDP_ATTACHED_NONE = 0,
 XDP_ATTACHED_DRV,
 XDP_ATTACHED_SKB,
 XDP_ATTACHED_HW,
 XDP_ATTACHED_MULTI,
};

enum {
 IFLA_XDP_UNSPEC,
 IFLA_XDP_FD,
 IFLA_XDP_ATTACHED,
 IFLA_XDP_FLAGS,
 IFLA_XDP_PROG_ID,
 IFLA_XDP_DRV_PROG_ID,
 IFLA_XDP_SKB_PROG_ID,
 IFLA_XDP_HW_PROG_ID,
 IFLA_XDP_EXPECTED_FD,
 __IFLA_XDP_MAX,
};



enum {
 IFLA_EVENT_NONE,
 IFLA_EVENT_REBOOT,
 IFLA_EVENT_FEATURES,
 IFLA_EVENT_BONDING_FAILOVER,
 IFLA_EVENT_NOTIFY_PEERS,
 IFLA_EVENT_IGMP_RESEND,
 IFLA_EVENT_BONDING_OPTIONS,
};



enum {
 IFLA_TUN_UNSPEC,
 IFLA_TUN_OWNER,
 IFLA_TUN_GROUP,
 IFLA_TUN_TYPE,
 IFLA_TUN_PI,
 IFLA_TUN_VNET_HDR,
 IFLA_TUN_PERSIST,
 IFLA_TUN_MULTI_QUEUE,
 IFLA_TUN_NUM_QUEUES,
 IFLA_TUN_NUM_DISABLED_QUEUES,
 __IFLA_TUN_MAX,
};
# 1926 "../include/uapi/linux/if_link.h"
enum {
 IFLA_RMNET_UNSPEC,
 IFLA_RMNET_MUX_ID,
 IFLA_RMNET_FLAGS,
 __IFLA_RMNET_MAX,
};



struct ifla_rmnet_flags {
 __u32 flags;
 __u32 mask;
};



enum {
 IFLA_MCTP_UNSPEC,
 IFLA_MCTP_NET,
 __IFLA_MCTP_MAX,
};





enum {
 IFLA_DSA_UNSPEC,
 IFLA_DSA_CONDUIT,

 IFLA_DSA_MASTER = IFLA_DSA_CONDUIT,
 __IFLA_DSA_MAX,
};
# 6 "../include/linux/if_link.h" 2



struct ifla_vf_stats {
 __u64 rx_packets;
 __u64 tx_packets;
 __u64 rx_bytes;
 __u64 tx_bytes;
 __u64 broadcast;
 __u64 multicast;
 __u64 rx_dropped;
 __u64 tx_dropped;
};

struct ifla_vf_info {
 __u32 vf;
 __u8 mac[32];
 __u32 vlan;
 __u32 qos;
 __u32 spoofchk;
 __u32 linkstate;
 __u32 min_tx_rate;
 __u32 max_tx_rate;
 __u32 rss_query_en;
 __u32 trusted;
 __be16 vlan_proto;
};
# 33 "../include/uapi/linux/netdevice.h" 2
# 49 "../include/uapi/linux/netdevice.h"
enum {
        IF_PORT_UNKNOWN = 0,
        IF_PORT_10BASE2,
        IF_PORT_10BASET,
        IF_PORT_AUI,
        IF_PORT_100BASET,
        IF_PORT_100BASETX,
        IF_PORT_100BASEFX
};
# 48 "../include/linux/netdevice.h" 2
# 1 "../include/uapi/linux/if_bonding.h" 1
# 109 "../include/uapi/linux/if_bonding.h"
typedef struct ifbond {
 __s32 bond_mode;
 __s32 num_slaves;
 __s32 miimon;
} ifbond;

typedef struct ifslave {
 __s32 slave_id;
 char slave_name[16];
 __s8 link;
 __s8 state;
 __u32 link_failure_count;
} ifslave;

struct ad_info {
 __u16 aggregator_id;
 __u16 ports;
 __u16 actor_key;
 __u16 partner_key;
 __u8 partner_system[6];
};


enum {
 BOND_XSTATS_UNSPEC,
 BOND_XSTATS_3AD,
 __BOND_XSTATS_MAX
};



enum {
 BOND_3AD_STAT_LACPDU_RX,
 BOND_3AD_STAT_LACPDU_TX,
 BOND_3AD_STAT_LACPDU_UNKNOWN_RX,
 BOND_3AD_STAT_LACPDU_ILLEGAL_RX,
 BOND_3AD_STAT_MARKER_RX,
 BOND_3AD_STAT_MARKER_TX,
 BOND_3AD_STAT_MARKER_RESP_RX,
 BOND_3AD_STAT_MARKER_RESP_TX,
 BOND_3AD_STAT_MARKER_UNKNOWN_RX,
 BOND_3AD_STAT_PAD,
 __BOND_3AD_STAT_MAX
};
# 49 "../include/linux/netdevice.h" 2

# 1 "../include/uapi/linux/netdev.h" 1
# 28 "../include/uapi/linux/netdev.h"
enum netdev_xdp_act {
 NETDEV_XDP_ACT_BASIC = 1,
 NETDEV_XDP_ACT_REDIRECT = 2,
 NETDEV_XDP_ACT_NDO_XMIT = 4,
 NETDEV_XDP_ACT_XSK_ZEROCOPY = 8,
 NETDEV_XDP_ACT_HW_OFFLOAD = 16,
 NETDEV_XDP_ACT_RX_SG = 32,
 NETDEV_XDP_ACT_NDO_XMIT_SG = 64,


 NETDEV_XDP_ACT_MASK = 127,
};
# 50 "../include/uapi/linux/netdev.h"
enum netdev_xdp_rx_metadata {
 NETDEV_XDP_RX_METADATA_TIMESTAMP = 1,
 NETDEV_XDP_RX_METADATA_HASH = 2,
 NETDEV_XDP_RX_METADATA_VLAN_TAG = 4,
};
# 63 "../include/uapi/linux/netdev.h"
enum netdev_xsk_flags {
 NETDEV_XSK_FLAGS_TX_TIMESTAMP = 1,
 NETDEV_XSK_FLAGS_TX_CHECKSUM = 2,
};

enum netdev_queue_type {
 NETDEV_QUEUE_TYPE_RX,
 NETDEV_QUEUE_TYPE_TX,
};

enum netdev_qstats_scope {
 NETDEV_QSTATS_SCOPE_QUEUE = 1,
};

enum {
 NETDEV_A_DEV_IFINDEX = 1,
 NETDEV_A_DEV_PAD,
 NETDEV_A_DEV_XDP_FEATURES,
 NETDEV_A_DEV_XDP_ZC_MAX_SEGS,
 NETDEV_A_DEV_XDP_RX_METADATA_FEATURES,
 NETDEV_A_DEV_XSK_FEATURES,

 __NETDEV_A_DEV_MAX,
 NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
};

enum {
 NETDEV_A_PAGE_POOL_ID = 1,
 NETDEV_A_PAGE_POOL_IFINDEX,
 NETDEV_A_PAGE_POOL_NAPI_ID,
 NETDEV_A_PAGE_POOL_INFLIGHT,
 NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
 NETDEV_A_PAGE_POOL_DETACH_TIME,

 __NETDEV_A_PAGE_POOL_MAX,
 NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
};

enum {
 NETDEV_A_PAGE_POOL_STATS_INFO = 1,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_FAST = 8,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW_HIGH_ORDER,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_EMPTY,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_REFILL,
 NETDEV_A_PAGE_POOL_STATS_ALLOC_WAIVE,
 NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHED,
 NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHE_FULL,
 NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING,
 NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING_FULL,
 NETDEV_A_PAGE_POOL_STATS_RECYCLE_RELEASED_REFCNT,

 __NETDEV_A_PAGE_POOL_STATS_MAX,
 NETDEV_A_PAGE_POOL_STATS_MAX = (__NETDEV_A_PAGE_POOL_STATS_MAX - 1)
};

enum {
 NETDEV_A_NAPI_IFINDEX = 1,
 NETDEV_A_NAPI_ID,
 NETDEV_A_NAPI_IRQ,
 NETDEV_A_NAPI_PID,

 __NETDEV_A_NAPI_MAX,
 NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
};

enum {
 NETDEV_A_QUEUE_ID = 1,
 NETDEV_A_QUEUE_IFINDEX,
 NETDEV_A_QUEUE_TYPE,
 NETDEV_A_QUEUE_NAPI_ID,

 __NETDEV_A_QUEUE_MAX,
 NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};

enum {
 NETDEV_A_QSTATS_IFINDEX = 1,
 NETDEV_A_QSTATS_QUEUE_TYPE,
 NETDEV_A_QSTATS_QUEUE_ID,
 NETDEV_A_QSTATS_SCOPE,
 NETDEV_A_QSTATS_RX_PACKETS = 8,
 NETDEV_A_QSTATS_RX_BYTES,
 NETDEV_A_QSTATS_TX_PACKETS,
 NETDEV_A_QSTATS_TX_BYTES,
 NETDEV_A_QSTATS_RX_ALLOC_FAIL,
 NETDEV_A_QSTATS_RX_HW_DROPS,
 NETDEV_A_QSTATS_RX_HW_DROP_OVERRUNS,
 NETDEV_A_QSTATS_RX_CSUM_COMPLETE,
 NETDEV_A_QSTATS_RX_CSUM_UNNECESSARY,
 NETDEV_A_QSTATS_RX_CSUM_NONE,
 NETDEV_A_QSTATS_RX_CSUM_BAD,
 NETDEV_A_QSTATS_RX_HW_GRO_PACKETS,
 NETDEV_A_QSTATS_RX_HW_GRO_BYTES,
 NETDEV_A_QSTATS_RX_HW_GRO_WIRE_PACKETS,
 NETDEV_A_QSTATS_RX_HW_GRO_WIRE_BYTES,
 NETDEV_A_QSTATS_RX_HW_DROP_RATELIMITS,
 NETDEV_A_QSTATS_TX_HW_DROPS,
 NETDEV_A_QSTATS_TX_HW_DROP_ERRORS,
 NETDEV_A_QSTATS_TX_CSUM_NONE,
 NETDEV_A_QSTATS_TX_NEEDS_CSUM,
 NETDEV_A_QSTATS_TX_HW_GSO_PACKETS,
 NETDEV_A_QSTATS_TX_HW_GSO_BYTES,
 NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS,
 NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES,
 NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS,
 NETDEV_A_QSTATS_TX_STOP,
 NETDEV_A_QSTATS_TX_WAKE,

 __NETDEV_A_QSTATS_MAX,
 NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1)
};

enum {
 NETDEV_CMD_DEV_GET = 1,
 NETDEV_CMD_DEV_ADD_NTF,
 NETDEV_CMD_DEV_DEL_NTF,
 NETDEV_CMD_DEV_CHANGE_NTF,
 NETDEV_CMD_PAGE_POOL_GET,
 NETDEV_CMD_PAGE_POOL_ADD_NTF,
 NETDEV_CMD_PAGE_POOL_DEL_NTF,
 NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
 NETDEV_CMD_PAGE_POOL_STATS_GET,
 NETDEV_CMD_QUEUE_GET,
 NETDEV_CMD_NAPI_GET,
 NETDEV_CMD_QSTATS_GET,

 __NETDEV_CMD_MAX,
 NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
};
# 51 "../include/linux/netdevice.h" 2
# 1 "../include/linux/hashtable.h" 1
# 34 "../include/linux/hashtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __hash_init(struct hlist_head *ht, unsigned int sz)
{
 unsigned int i;

 for (i = 0; i < sz; i++)
  ((&ht[i])->first = ((void *)0));
}
# 76 "../include/linux/hashtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool hash_hashed(struct hlist_node *node)
{
 return !hlist_unhashed(node);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __hash_empty(struct hlist_head *ht, unsigned int sz)
{
 unsigned int i;

 for (i = 0; i < sz; i++)
  if (!hlist_empty(&ht[i]))
   return false;

 return true;
}
# 105 "../include/linux/hashtable.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hash_del(struct hlist_node *node)
{
 hlist_del_init(node);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hash_del_rcu(struct hlist_node *node)
{
 hlist_del_init_rcu(node);
}
# 52 "../include/linux/netdevice.h" 2





struct netpoll_info;
struct device;
struct ethtool_ops;
struct kernel_hwtstamp_config;
struct phy_device;
struct dsa_port;
struct ip_tunnel_parm_kern;
struct macsec_context;
struct macsec_ops;
struct netdev_name_node;
struct sd_flow_limit;
struct sfp_bus;

struct wireless_dev;

struct wpan_dev;
struct mpls_dev;

struct udp_tunnel_info;
struct udp_tunnel_nic_info;
struct udp_tunnel_nic;
struct bpf_prog;
struct xdp_buff;
struct xdp_frame;
struct xdp_metadata_ops;
struct xdp_md;
struct ethtool_netdev_state;

typedef u32 xdp_features_t;

void synchronize_net(void);
void netdev_set_default_ethtool_ops(struct net_device *dev,
        const struct ethtool_ops *ops);
void netdev_sw_irq_coalesce_default_on(struct net_device *dev);
# 130 "../include/linux/netdevice.h"
enum netdev_tx {
 __NETDEV_TX_MIN = (-((int)(~0U >> 1)) - 1),
 NETDEV_TX_OK = 0x00,
 NETDEV_TX_BUSY = 0x10,
};
typedef enum netdev_tx netdev_tx_t;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_xmit_complete(int rc)
{






 if (__builtin_expect(!!(rc < 0x0f), 1))
  return true;

 return false;
}
# 190 "../include/linux/netdevice.h"
struct net_device_stats {
 union { unsigned long rx_packets; atomic_long_t __rx_packets; };
 union { unsigned long tx_packets; atomic_long_t __tx_packets; };
 union { unsigned long rx_bytes; atomic_long_t __rx_bytes; };
 union { unsigned long tx_bytes; atomic_long_t __tx_bytes; };
 union { unsigned long rx_errors; atomic_long_t __rx_errors; };
 union { unsigned long tx_errors; atomic_long_t __tx_errors; };
 union { unsigned long rx_dropped; atomic_long_t __rx_dropped; };
 union { unsigned long tx_dropped; atomic_long_t __tx_dropped; };
 union { unsigned long multicast; atomic_long_t __multicast; };
 union { unsigned long collisions; atomic_long_t __collisions; };
 union { unsigned long rx_length_errors; atomic_long_t __rx_length_errors; };
 union { unsigned long rx_over_errors; atomic_long_t __rx_over_errors; };
 union { unsigned long rx_crc_errors; atomic_long_t __rx_crc_errors; };
 union { unsigned long rx_frame_errors; atomic_long_t __rx_frame_errors; };
 union { unsigned long rx_fifo_errors; atomic_long_t __rx_fifo_errors; };
 union { unsigned long rx_missed_errors; atomic_long_t __rx_missed_errors; };
 union { unsigned long tx_aborted_errors; atomic_long_t __tx_aborted_errors; };
 union { unsigned long tx_carrier_errors; atomic_long_t __tx_carrier_errors; };
 union { unsigned long tx_fifo_errors; atomic_long_t __tx_fifo_errors; };
 union { unsigned long tx_heartbeat_errors; atomic_long_t __tx_heartbeat_errors; };
 union { unsigned long tx_window_errors; atomic_long_t __tx_window_errors; };
 union { unsigned long rx_compressed; atomic_long_t __rx_compressed; };
 union { unsigned long tx_compressed; atomic_long_t __tx_compressed; };
};





struct net_device_core_stats {
 unsigned long rx_dropped;
 unsigned long tx_dropped;
 unsigned long rx_nohandler;
 unsigned long rx_otherhost_dropped;
} __attribute__((__aligned__(4 * sizeof(unsigned long))));




struct neighbour;
struct neigh_parms;
struct sk_buff;

struct netdev_hw_addr {
 struct list_head list;
 struct rb_node node;
 unsigned char addr[32];
 unsigned char type;




 bool global_use;
 int sync_cnt;
 int refcount;
 int synced;
 struct callback_head callback_head;
};

struct netdev_hw_addr_list {
 struct list_head list;
 int count;


 struct rb_root tree;
};
# 279 "../include/linux/netdevice.h"
struct hh_cache {
 unsigned int hh_len;
 seqlock_t hh_lock;







 unsigned long hh_data[(((96)+(16 -1))&~(16 - 1)) / sizeof(long)];
};
# 307 "../include/linux/netdevice.h"
struct header_ops {
 int (*create) (struct sk_buff *skb, struct net_device *dev,
      unsigned short type, const void *daddr,
      const void *saddr, unsigned int len);
 int (*parse)(const struct sk_buff *skb, unsigned char *haddr);
 int (*cache)(const struct neighbour *neigh, struct hh_cache *hh, __be16 type);
 void (*cache_update)(struct hh_cache *hh,
    const struct net_device *dev,
    const unsigned char *haddr);
 bool (*validate)(const char *ll_header, unsigned int len);
 __be16 (*parse_protocol)(const struct sk_buff *skb);
};






enum netdev_state_t {
 __LINK_STATE_START,
 __LINK_STATE_PRESENT,
 __LINK_STATE_NOCARRIER,
 __LINK_STATE_LINKWATCH_PENDING,
 __LINK_STATE_DORMANT,
 __LINK_STATE_TESTING,
};

struct gro_list {
 struct list_head list;
 int count;
};
# 348 "../include/linux/netdevice.h"
struct napi_struct {






 struct list_head poll_list;

 unsigned long state;
 int weight;
 int defer_hard_irqs_count;
 unsigned long gro_bitmask;
 int (*poll)(struct napi_struct *, int);





 int list_owner;
 struct net_device *dev;
 struct gro_list gro_hash[8];
 struct sk_buff *skb;
 struct list_head rx_list;
 int rx_count;
 unsigned int napi_id;
 struct hrtimer timer;
 struct task_struct *thread;

 struct list_head dev_list;
 struct hlist_node napi_hash_node;
 int irq;
};

enum {
 NAPI_STATE_SCHED,
 NAPI_STATE_MISSED,
 NAPI_STATE_DISABLE,
 NAPI_STATE_NPSVC,
 NAPI_STATE_LISTED,
 NAPI_STATE_NO_BUSY_POLL,
 NAPI_STATE_IN_BUSY_POLL,
 NAPI_STATE_PREFER_BUSY_POLL,
 NAPI_STATE_THREADED,
 NAPI_STATE_SCHED_THREADED,
};

enum {
 NAPIF_STATE_SCHED = ((((1UL))) << (NAPI_STATE_SCHED)),
 NAPIF_STATE_MISSED = ((((1UL))) << (NAPI_STATE_MISSED)),
 NAPIF_STATE_DISABLE = ((((1UL))) << (NAPI_STATE_DISABLE)),
 NAPIF_STATE_NPSVC = ((((1UL))) << (NAPI_STATE_NPSVC)),
 NAPIF_STATE_LISTED = ((((1UL))) << (NAPI_STATE_LISTED)),
 NAPIF_STATE_NO_BUSY_POLL = ((((1UL))) << (NAPI_STATE_NO_BUSY_POLL)),
 NAPIF_STATE_IN_BUSY_POLL = ((((1UL))) << (NAPI_STATE_IN_BUSY_POLL)),
 NAPIF_STATE_PREFER_BUSY_POLL = ((((1UL))) << (NAPI_STATE_PREFER_BUSY_POLL)),
 NAPIF_STATE_THREADED = ((((1UL))) << (NAPI_STATE_THREADED)),
 NAPIF_STATE_SCHED_THREADED = ((((1UL))) << (NAPI_STATE_SCHED_THREADED)),
};

enum gro_result {
 GRO_MERGED,
 GRO_MERGED_FREE,
 GRO_HELD,
 GRO_NORMAL,
 GRO_CONSUMED,
};
typedef enum gro_result gro_result_t;
# 458 "../include/linux/netdevice.h"
enum rx_handler_result {
 RX_HANDLER_CONSUMED,
 RX_HANDLER_ANOTHER,
 RX_HANDLER_EXACT,
 RX_HANDLER_PASS,
};
typedef enum rx_handler_result rx_handler_result_t;
typedef rx_handler_result_t rx_handler_func_t(struct sk_buff **pskb);

void __napi_schedule(struct napi_struct *n);
void __napi_schedule_irqoff(struct napi_struct *n);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_disable_pending(struct napi_struct *n)
{
 return ((__builtin_constant_p(NAPI_STATE_DISABLE) && __builtin_constant_p((uintptr_t)(&n->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&n->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&n->state))) ? const_test_bit(NAPI_STATE_DISABLE, &n->state) : arch_test_bit(NAPI_STATE_DISABLE, &n->state));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_prefer_busy_poll(struct napi_struct *n)
{
 return ((__builtin_constant_p(NAPI_STATE_PREFER_BUSY_POLL) && __builtin_constant_p((uintptr_t)(&n->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&n->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&n->state))) ? const_test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state) : arch_test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state));
}
# 498 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_is_scheduled(struct napi_struct *n)
{
 return ((__builtin_constant_p(NAPI_STATE_SCHED) && __builtin_constant_p((uintptr_t)(&n->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&n->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&n->state))) ? const_test_bit(NAPI_STATE_SCHED, &n->state) : arch_test_bit(NAPI_STATE_SCHED, &n->state));
}

bool napi_schedule_prep(struct napi_struct *n);
# 515 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_schedule(struct napi_struct *n)
{
 if (napi_schedule_prep(n)) {
  __napi_schedule(n);
  return true;
 }

 return false;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void napi_schedule_irqoff(struct napi_struct *n)
{
 if (napi_schedule_prep(n))
  __napi_schedule_irqoff(n);
}
# 547 "../include/linux/netdevice.h"
bool napi_complete_done(struct napi_struct *n, int work_done);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_complete(struct napi_struct *n)
{
 return napi_complete_done(n, 0);
}

int dev_set_threaded(struct net_device *dev, bool threaded);
# 563 "../include/linux/netdevice.h"
void napi_disable(struct napi_struct *n);

void napi_enable(struct napi_struct *n);
# 575 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void napi_synchronize(const struct napi_struct *n)
{
 if (0)
  while (((__builtin_constant_p(NAPI_STATE_SCHED) && __builtin_constant_p((uintptr_t)(&n->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&n->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&n->state))) ? const_test_bit(NAPI_STATE_SCHED, &n->state) : arch_test_bit(NAPI_STATE_SCHED, &n->state)))
   msleep(1);
 else
  __asm__ __volatile__("": : :"memory");
}
# 592 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool napi_if_scheduled_mark_missed(struct napi_struct *n)
{
 unsigned long val, new;

 val = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_336(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->state) == sizeof(char) || sizeof(n->state) == sizeof(short) || sizeof(n->state) == sizeof(int) || sizeof(n->state) == sizeof(long)) || sizeof(n->state) == sizeof(long long))) __compiletime_assert_336(); } while (0); (*(const volatile typeof( _Generic((n->state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (n->state))) *)&(n->state)); });
 do {
  if (val & NAPIF_STATE_DISABLE)
   return true;

  if (!(val & NAPIF_STATE_SCHED))
   return false;

  new = val | NAPIF_STATE_MISSED;
 } while (!({ typeof(&n->state) __ai_ptr = (&n->state); typeof(&val) __ai_oldp = (&val); do { } while (0); instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); ({ typeof(*(__ai_ptr)) *___op = (__ai_oldp), ___o = *___op, ___r; ___r = ({ __typeof__((__ai_ptr)) __ptr = ((__ai_ptr)); __typeof__(*((__ai_ptr))) __old = (___o); __typeof__(*((__ai_ptr))) __new = ((new)); __typeof__(*((__ai_ptr))) __oldval = 0; asm volatile( "1:	%0 = memw_locked(%1);\n" "	{ P0 = cmp.eq(%0,%2);\n" "	  if (!P0.new) jump:nt 2f; }\n" "	memw_locked(%1,p0) = %3;\n" "	if (!P0) jump 1b;\n" "2:\n" : "=&r" (__oldval) : "r" (__ptr), "r" (__old), "r" (__new) : "memory", "p0" ); __oldval; }); if (__builtin_expect(!!(___r != ___o), 0)) *___op = ___r; __builtin_expect(!!(___r == ___o), 1); }); }));

 return true;
}
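/* The loop above is hard to read through the expanded Hexagon inline asm;
 * under a plain reading of the macros it is an atomic compare-and-swap retry
 * loop: if the NAPI is scheduled, OR in the MISSED bit. A portable C11
 * sketch, where the EX_ flag values and names are stand-ins for the
 * NAPIF_STATE_* constants above: */

```c
#include <stdatomic.h>
#include <stdbool.h>

#define EX_SCHED   (1UL << 0)  /* stand-in for NAPIF_STATE_SCHED */
#define EX_MISSED  (1UL << 1)  /* stand-in for NAPIF_STATE_MISSED */
#define EX_DISABLE (1UL << 2)  /* stand-in for NAPIF_STATE_DISABLE */

/* Mirrors napi_if_scheduled_mark_missed(): return true early when the NAPI
 * is disabling, false when it is not scheduled, otherwise atomically set
 * MISSED and return true. */
static bool ex_mark_missed_if_scheduled(_Atomic unsigned long *state)
{
	unsigned long val = atomic_load(state);
	unsigned long newval;

	do {
		if (val & EX_DISABLE)
			return true;
		if (!(val & EX_SCHED))
			return false;
		newval = val | EX_MISSED;
		/* on failure, compare_exchange reloads 'val' and we retry */
	} while (!atomic_compare_exchange_weak(state, &val, newval));

	return true;
}
```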

enum netdev_queue_state_t {
 __QUEUE_STATE_DRV_XOFF,
 __QUEUE_STATE_STACK_XOFF,
 __QUEUE_STATE_FROZEN,
};
# 636 "../include/linux/netdevice.h"
struct netdev_queue {



 struct net_device *dev;
 netdevice_tracker dev_tracker;

 struct Qdisc *qdisc;
 struct Qdisc *qdisc_sleeping;

 struct kobject kobj;




 unsigned long tx_maxrate;




 atomic_long_t trans_timeout;


 struct net_device *sb_dev;

 struct xsk_buff_pool *pool;




 struct napi_struct *napi;



 spinlock_t _xmit_lock;
 int xmit_lock_owner;



 unsigned long trans_start;

 unsigned long state;




};

extern int sysctl_fb_tunnels_only_for_init_net;
extern int sysctl_devconf_inherit_init_net;






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool net_has_fallback_tunnels(const struct net *net)
{

 int fb_tunnels_only_for_init_net = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_337(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sysctl_fb_tunnels_only_for_init_net) == sizeof(char) || sizeof(sysctl_fb_tunnels_only_for_init_net) == sizeof(short) || sizeof(sysctl_fb_tunnels_only_for_init_net) == sizeof(int) || sizeof(sysctl_fb_tunnels_only_for_init_net) == sizeof(long)) || sizeof(sysctl_fb_tunnels_only_for_init_net) == sizeof(long long))) __compiletime_assert_337(); } while (0); (*(const volatile typeof( _Generic((sysctl_fb_tunnels_only_for_init_net), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sysctl_fb_tunnels_only_for_init_net))) *)&(sysctl_fb_tunnels_only_for_init_net)); });

 return !fb_tunnels_only_for_init_net ||
  (net_eq(net, &init_net) && fb_tunnels_only_for_init_net == 1);



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int net_inherit_devconf(void)
{

 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_338(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sysctl_devconf_inherit_init_net) == sizeof(char) || sizeof(sysctl_devconf_inherit_init_net) == sizeof(short) || sizeof(sysctl_devconf_inherit_init_net) == sizeof(int) || sizeof(sysctl_devconf_inherit_init_net) == sizeof(long)) || sizeof(sysctl_devconf_inherit_init_net) == sizeof(long long))) __compiletime_assert_338(); } while (0); (*(const volatile typeof( _Generic((sysctl_devconf_inherit_init_net), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sysctl_devconf_inherit_init_net))) *)&(sysctl_devconf_inherit_init_net)); });



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int netdev_queue_numa_node_read(const struct netdev_queue *q)
{



 return (-1);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_queue_numa_node_write(struct netdev_queue *q, int node)
{



}







enum xps_map_type {
 XPS_CPUS = 0,
 XPS_RXQS,
 XPS_MAPS_MAX,
};
# 785 "../include/linux/netdevice.h"
struct netdev_tc_txq {
 u16 count;
 u16 offset;
};
# 812 "../include/linux/netdevice.h"
struct netdev_phys_item_id {
 unsigned char id[32];
 unsigned char id_len;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netdev_phys_item_id_same(struct netdev_phys_item_id *a,
         struct netdev_phys_item_id *b)
{
 return a->id_len == b->id_len &&
        memcmp(a->id, b->id, a->id_len) == 0;
}

typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
           struct sk_buff *skb,
           struct net_device *sb_dev);

enum net_device_path_type {
 DEV_PATH_ETHERNET = 0,
 DEV_PATH_VLAN,
 DEV_PATH_BRIDGE,
 DEV_PATH_PPPOE,
 DEV_PATH_DSA,
 DEV_PATH_MTK_WDMA,
};

struct net_device_path {
 enum net_device_path_type type;
 const struct net_device *dev;
 union {
  struct {
   u16 id;
   __be16 proto;
   u8 h_dest[6];
  } encap;
  struct {
   enum {
    DEV_PATH_BR_VLAN_KEEP,
    DEV_PATH_BR_VLAN_TAG,
    DEV_PATH_BR_VLAN_UNTAG,
    DEV_PATH_BR_VLAN_UNTAG_HW,
   } vlan_mode;
   u16 vlan_id;
   __be16 vlan_proto;
  } bridge;
  struct {
   int port;
   u16 proto;
  } dsa;
  struct {
   u8 wdma_idx;
   u8 queue;
   u16 wcid;
   u8 bss;
   u8 amsdu;
  } mtk_wdma;
 };
};




struct net_device_path_stack {
 int num_paths;
 struct net_device_path path[5];
};

struct net_device_path_ctx {
 const struct net_device *dev;
 u8 daddr[6];

 int num_vlans;
 struct {
  u16 id;
  __be16 proto;
 } vlan[2];
};

enum tc_setup_type {
 TC_QUERY_CAPS,
 TC_SETUP_QDISC_MQPRIO,
 TC_SETUP_CLSU32,
 TC_SETUP_CLSFLOWER,
 TC_SETUP_CLSMATCHALL,
 TC_SETUP_CLSBPF,
 TC_SETUP_BLOCK,
 TC_SETUP_QDISC_CBS,
 TC_SETUP_QDISC_RED,
 TC_SETUP_QDISC_PRIO,
 TC_SETUP_QDISC_MQ,
 TC_SETUP_QDISC_ETF,
 TC_SETUP_ROOT_QDISC,
 TC_SETUP_QDISC_GRED,
 TC_SETUP_QDISC_TAPRIO,
 TC_SETUP_FT,
 TC_SETUP_QDISC_ETS,
 TC_SETUP_QDISC_TBF,
 TC_SETUP_QDISC_FIFO,
 TC_SETUP_QDISC_HTB,
 TC_SETUP_ACT,
};




enum bpf_netdev_command {







 XDP_SETUP_PROG,
 XDP_SETUP_PROG_HW,

 BPF_OFFLOAD_MAP_ALLOC,
 BPF_OFFLOAD_MAP_FREE,
 XDP_SETUP_XSK_POOL,
};

struct bpf_prog_offload_ops;
struct netlink_ext_ack;
struct xdp_umem;
struct xdp_dev_bulk_queue;
struct bpf_xdp_link;

enum bpf_xdp_mode {
 XDP_MODE_SKB = 0,
 XDP_MODE_DRV = 1,
 XDP_MODE_HW = 2,
 __MAX_XDP_MODE
};

struct bpf_xdp_entity {
 struct bpf_prog *prog;
 struct bpf_xdp_link *link;
};

struct netdev_bpf {
 enum bpf_netdev_command command;
 union {

  struct {
   u32 flags;
   struct bpf_prog *prog;
   struct netlink_ext_ack *extack;
  };

  struct {
   struct bpf_offloaded_map *offmap;
  };

  struct {
   struct xsk_buff_pool *pool;
   u16 queue_id;
  } xsk;
 };
};






struct xfrmdev_ops {
 int (*xdo_dev_state_add) (struct xfrm_state *x, struct netlink_ext_ack *extack);
 void (*xdo_dev_state_delete) (struct xfrm_state *x);
 void (*xdo_dev_state_free) (struct xfrm_state *x);
 bool (*xdo_dev_offload_ok) (struct sk_buff *skb,
           struct xfrm_state *x);
 void (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
 void (*xdo_dev_state_update_stats) (struct xfrm_state *x);
 int (*xdo_dev_policy_add) (struct xfrm_policy *x, struct netlink_ext_ack *extack);
 void (*xdo_dev_policy_delete) (struct xfrm_policy *x);
 void (*xdo_dev_policy_free) (struct xfrm_policy *x);
};


struct dev_ifalias {
 struct callback_head rcuhead;
 char ifalias[];
};

struct devlink;
struct tlsdev_ops;

struct netdev_net_notifier {
 struct list_head list;
 struct notifier_block *nb;
};
# 1357 "../include/linux/netdevice.h"
struct net_device_ops {
 int (*ndo_init)(struct net_device *dev);
 void (*ndo_uninit)(struct net_device *dev);
 int (*ndo_open)(struct net_device *dev);
 int (*ndo_stop)(struct net_device *dev);
 netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb,
        struct net_device *dev);
 netdev_features_t (*ndo_features_check)(struct sk_buff *skb,
            struct net_device *dev,
            netdev_features_t features);
 u16 (*ndo_select_queue)(struct net_device *dev,
          struct sk_buff *skb,
          struct net_device *sb_dev);
 void (*ndo_change_rx_flags)(struct net_device *dev,
             int flags);
 void (*ndo_set_rx_mode)(struct net_device *dev);
 int (*ndo_set_mac_address)(struct net_device *dev,
             void *addr);
 int (*ndo_validate_addr)(struct net_device *dev);
 int (*ndo_do_ioctl)(struct net_device *dev,
             struct ifreq *ifr, int cmd);
 int (*ndo_eth_ioctl)(struct net_device *dev,
       struct ifreq *ifr, int cmd);
 int (*ndo_siocbond)(struct net_device *dev,
      struct ifreq *ifr, int cmd);
 int (*ndo_siocwandev)(struct net_device *dev,
        struct if_settings *ifs);
 int (*ndo_siocdevprivate)(struct net_device *dev,
            struct ifreq *ifr,
            void *data, int cmd);
 int (*ndo_set_config)(struct net_device *dev,
               struct ifmap *map);
 int (*ndo_change_mtu)(struct net_device *dev,
        int new_mtu);
 int (*ndo_neigh_setup)(struct net_device *dev,
         struct neigh_parms *);
 void (*ndo_tx_timeout) (struct net_device *dev,
         unsigned int txqueue);

 void (*ndo_get_stats64)(struct net_device *dev,
         struct rtnl_link_stats64 *storage);
 bool (*ndo_has_offload_stats)(const struct net_device *dev, int attr_id);
 int (*ndo_get_offload_stats)(int attr_id,
        const struct net_device *dev,
        void *attr_data);
 struct net_device_stats* (*ndo_get_stats)(struct net_device *dev);

 int (*ndo_vlan_rx_add_vid)(struct net_device *dev,
             __be16 proto, u16 vid);
 int (*ndo_vlan_rx_kill_vid)(struct net_device *dev,
              __be16 proto, u16 vid);






 int (*ndo_set_vf_mac)(struct net_device *dev,
        int queue, u8 *mac);
 int (*ndo_set_vf_vlan)(struct net_device *dev,
         int queue, u16 vlan,
         u8 qos, __be16 proto);
 int (*ndo_set_vf_rate)(struct net_device *dev,
         int vf, int min_tx_rate,
         int max_tx_rate);
 int (*ndo_set_vf_spoofchk)(struct net_device *dev,
             int vf, bool setting);
 int (*ndo_set_vf_trust)(struct net_device *dev,
          int vf, bool setting);
 int (*ndo_get_vf_config)(struct net_device *dev,
           int vf,
           struct ifla_vf_info *ivf);
 int (*ndo_set_vf_link_state)(struct net_device *dev,
        int vf, int link_state);
 int (*ndo_get_vf_stats)(struct net_device *dev,
          int vf,
          struct ifla_vf_stats
          *vf_stats);
 int (*ndo_set_vf_port)(struct net_device *dev,
         int vf,
         struct nlattr *port[]);
 int (*ndo_get_vf_port)(struct net_device *dev,
         int vf, struct sk_buff *skb);
 int (*ndo_get_vf_guid)(struct net_device *dev,
         int vf,
         struct ifla_vf_guid *node_guid,
         struct ifla_vf_guid *port_guid);
 int (*ndo_set_vf_guid)(struct net_device *dev,
         int vf, u64 guid,
         int guid_type);
 int (*ndo_set_vf_rss_query_en)(
         struct net_device *dev,
         int vf, bool setting);
 int (*ndo_setup_tc)(struct net_device *dev,
      enum tc_setup_type type,
      void *type_data);
# 1483 "../include/linux/netdevice.h"
 int (*ndo_add_slave)(struct net_device *dev,
       struct net_device *slave_dev,
       struct netlink_ext_ack *extack);
 int (*ndo_del_slave)(struct net_device *dev,
       struct net_device *slave_dev);
 struct net_device* (*ndo_get_xmit_slave)(struct net_device *dev,
            struct sk_buff *skb,
            bool all_slaves);
 struct net_device* (*ndo_sk_get_lower_dev)(struct net_device *dev,
       struct sock *sk);
 netdev_features_t (*ndo_fix_features)(struct net_device *dev,
          netdev_features_t features);
 int (*ndo_set_features)(struct net_device *dev,
          netdev_features_t features);
 int (*ndo_neigh_construct)(struct net_device *dev,
             struct neighbour *n);
 void (*ndo_neigh_destroy)(struct net_device *dev,
           struct neighbour *n);

 int (*ndo_fdb_add)(struct ndmsg *ndm,
            struct nlattr *tb[],
            struct net_device *dev,
            const unsigned char *addr,
            u16 vid,
            u16 flags,
            struct netlink_ext_ack *extack);
 int (*ndo_fdb_del)(struct ndmsg *ndm,
            struct nlattr *tb[],
            struct net_device *dev,
            const unsigned char *addr,
            u16 vid, struct netlink_ext_ack *extack);
 int (*ndo_fdb_del_bulk)(struct nlmsghdr *nlh,
          struct net_device *dev,
          struct netlink_ext_ack *extack);
 int (*ndo_fdb_dump)(struct sk_buff *skb,
      struct netlink_callback *cb,
      struct net_device *dev,
      struct net_device *filter_dev,
      int *idx);
 int (*ndo_fdb_get)(struct sk_buff *skb,
            struct nlattr *tb[],
            struct net_device *dev,
            const unsigned char *addr,
            u16 vid, u32 portid, u32 seq,
            struct netlink_ext_ack *extack);
 int (*ndo_mdb_add)(struct net_device *dev,
            struct nlattr *tb[],
            u16 nlmsg_flags,
            struct netlink_ext_ack *extack);
 int (*ndo_mdb_del)(struct net_device *dev,
            struct nlattr *tb[],
            struct netlink_ext_ack *extack);
 int (*ndo_mdb_del_bulk)(struct net_device *dev,
          struct nlattr *tb[],
          struct netlink_ext_ack *extack);
 int (*ndo_mdb_dump)(struct net_device *dev,
      struct sk_buff *skb,
      struct netlink_callback *cb);
 int (*ndo_mdb_get)(struct net_device *dev,
            struct nlattr *tb[], u32 portid,
            u32 seq,
            struct netlink_ext_ack *extack);
 int (*ndo_bridge_setlink)(struct net_device *dev,
            struct nlmsghdr *nlh,
            u16 flags,
            struct netlink_ext_ack *extack);
 int (*ndo_bridge_getlink)(struct sk_buff *skb,
            u32 pid, u32 seq,
            struct net_device *dev,
            u32 filter_mask,
            int nlflags);
 int (*ndo_bridge_dellink)(struct net_device *dev,
            struct nlmsghdr *nlh,
            u16 flags);
 int (*ndo_change_carrier)(struct net_device *dev,
            bool new_carrier);
 int (*ndo_get_phys_port_id)(struct net_device *dev,
       struct netdev_phys_item_id *ppid);
 int (*ndo_get_port_parent_id)(struct net_device *dev,
         struct netdev_phys_item_id *ppid);
 int (*ndo_get_phys_port_name)(struct net_device *dev,
         char *name, size_t len);
 void* (*ndo_dfwd_add_station)(struct net_device *pdev,
       struct net_device *dev);
 void (*ndo_dfwd_del_station)(struct net_device *pdev,
       void *priv);

 int (*ndo_set_tx_maxrate)(struct net_device *dev,
            int queue_index,
            u32 maxrate);
 int (*ndo_get_iflink)(const struct net_device *dev);
 int (*ndo_fill_metadata_dst)(struct net_device *dev,
             struct sk_buff *skb);
 void (*ndo_set_rx_headroom)(struct net_device *dev,
             int needed_headroom);
 int (*ndo_bpf)(struct net_device *dev,
        struct netdev_bpf *bpf);
 int (*ndo_xdp_xmit)(struct net_device *dev, int n,
      struct xdp_frame **xdp,
      u32 flags);
 struct net_device * (*ndo_xdp_get_xmit_slave)(struct net_device *dev,
         struct xdp_buff *xdp);
 int (*ndo_xsk_wakeup)(struct net_device *dev,
        u32 queue_id, u32 flags);
 int (*ndo_tunnel_ctl)(struct net_device *dev,
        struct ip_tunnel_parm_kern *p,
        int cmd);
 struct net_device * (*ndo_get_peer_dev)(struct net_device *dev);
 int (*ndo_fill_forward_path)(struct net_device_path_ctx *ctx,
                                                         struct net_device_path *path);
 ktime_t (*ndo_get_tstamp)(struct net_device *dev,
        const struct skb_shared_hwtstamps *hwtstamps,
        bool cycles);
 int (*ndo_hwtstamp_get)(struct net_device *dev,
          struct kernel_hwtstamp_config *kernel_config);
 int (*ndo_hwtstamp_set)(struct net_device *dev,
          struct kernel_hwtstamp_config *kernel_config,
          struct netlink_ext_ack *extack);
};
# 1655 "../include/linux/netdevice.h"
enum netdev_priv_flags {
 IFF_802_1Q_VLAN = 1<<0,
 IFF_EBRIDGE = 1<<1,
 IFF_BONDING = 1<<2,
 IFF_ISATAP = 1<<3,
 IFF_WAN_HDLC = 1<<4,
 IFF_XMIT_DST_RELEASE = 1<<5,
 IFF_DONT_BRIDGE = 1<<6,
 IFF_DISABLE_NETPOLL = 1<<7,
 IFF_MACVLAN_PORT = 1<<8,
 IFF_BRIDGE_PORT = 1<<9,
 IFF_OVS_DATAPATH = 1<<10,
 IFF_TX_SKB_SHARING = 1<<11,
 IFF_UNICAST_FLT = 1<<12,
 IFF_TEAM_PORT = 1<<13,
 IFF_SUPP_NOFCS = 1<<14,
 IFF_LIVE_ADDR_CHANGE = 1<<15,
 IFF_MACVLAN = 1<<16,
 IFF_XMIT_DST_RELEASE_PERM = 1<<17,
 IFF_L3MDEV_MASTER = 1<<18,
 IFF_NO_QUEUE = 1<<19,
 IFF_OPENVSWITCH = 1<<20,
 IFF_L3MDEV_SLAVE = 1<<21,
 IFF_TEAM = 1<<22,
 IFF_RXFH_CONFIGURED = 1<<23,
 IFF_PHONY_HEADROOM = 1<<24,
 IFF_MACSEC = 1<<25,
 IFF_NO_RX_HANDLER = 1<<26,
 IFF_FAILOVER = 1<<27,
 IFF_FAILOVER_SLAVE = 1<<28,
 IFF_L3MDEV_RX_HANDLER = 1<<29,
 IFF_NO_ADDRCONF = ((((1ULL))) << (30)),
 IFF_TX_SKB_NO_LINEAR = ((((1ULL))) << (31)),
 IFF_CHANGE_PROTO_DOWN = ((((1ULL))) << (32)),
 IFF_SEE_ALL_HWTSTAMP_REQUESTS = ((((1ULL))) << (33)),
};
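/* One detail visible in the expansion above: flags from bit 30 upward are
 * written as ((1ULL) << n) rather than 1<<n, because an int-typed 1<<32
 * would overflow; this is also why struct net_device declares priv_flags as
 * unsigned long long. A minimal sketch with ex_ stand-in names (not kernel
 * identifiers): */

```c
/* Stand-ins for two of the wide flags above; a 32-bit int cannot hold
 * bit 32, so the ULL suffix is required once the enum crosses bit 31. */
static const unsigned long long ex_iff_no_addrconf       = 1ULL << 30;
static const unsigned long long ex_iff_change_proto_down = 1ULL << 32;
```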
# 1725 "../include/linux/netdevice.h"
enum netdev_ml_priv_type {
 ML_PRIV_NONE,
 ML_PRIV_CAN,
};

enum netdev_stat_type {
 NETDEV_PCPU_STAT_NONE,
 NETDEV_PCPU_STAT_LSTATS,
 NETDEV_PCPU_STAT_TSTATS,
 NETDEV_PCPU_STAT_DSTATS,
};

enum netdev_reg_state {
 NETREG_UNINITIALIZED = 0,
 NETREG_REGISTERED,
 NETREG_UNREGISTERING,
 NETREG_UNREGISTERED,
 NETREG_RELEASED,
 NETREG_DUMMY,
};
# 2035 "../include/linux/netdevice.h"
struct net_device {






 __u8 __cacheline_group_begin__net_device_read_tx[0];
 unsigned long long priv_flags;
 const struct net_device_ops *netdev_ops;
 const struct header_ops *header_ops;
 struct netdev_queue *_tx;
 netdev_features_t gso_partial_features;
 unsigned int real_num_tx_queues;
 unsigned int gso_max_size;
 unsigned int gso_ipv4_max_size;
 u16 gso_max_segs;
 s16 num_tc;





 unsigned int mtu;
 unsigned short needed_headroom;
 struct netdev_tc_txq tc_to_txq[16];







 struct bpf_mprog_entry *tcx_egress;

 __u8 __cacheline_group_end__net_device_read_tx[0];


 __u8 __cacheline_group_begin__net_device_read_txrx[0];
 union {
  struct pcpu_lstats *lstats;
  struct pcpu_sw_netstats *tstats;
  struct pcpu_dstats *dstats;
 };
 unsigned long state;
 unsigned int flags;
 unsigned short hard_header_len;
 netdev_features_t features;
 struct inet6_dev *ip6_ptr;
 __u8 __cacheline_group_end__net_device_read_txrx[0];


 __u8 __cacheline_group_begin__net_device_read_rx[0];
 struct bpf_prog *xdp_prog;
 struct list_head ptype_specific;
 int ifindex;
 unsigned int real_num_rx_queues;
 struct netdev_rx_queue *_rx;
 unsigned long gro_flush_timeout;
 int napi_defer_hard_irqs;
 unsigned int gro_max_size;
 unsigned int gro_ipv4_max_size;
 rx_handler_func_t *rx_handler;
 void *rx_handler_data;
 possible_net_t nd_net;




 struct bpf_mprog_entry *tcx_ingress;

 __u8 __cacheline_group_end__net_device_read_rx[0];

 char name[16];
 struct netdev_name_node *name_node;
 struct dev_ifalias *ifalias;




 unsigned long mem_end;
 unsigned long mem_start;
 unsigned long base_addr;
# 2126 "../include/linux/netdevice.h"
 struct list_head dev_list;
 struct list_head napi_list;
 struct list_head unreg_list;
 struct list_head close_list;
 struct list_head ptype_all;

 struct {
  struct list_head upper;
  struct list_head lower;
 } adj_list;


 xdp_features_t xdp_features;
 const struct xdp_metadata_ops *xdp_metadata_ops;
 const struct xsk_tx_metadata_ops *xsk_tx_metadata_ops;
 unsigned short gflags;

 unsigned short needed_tailroom;

 netdev_features_t hw_features;
 netdev_features_t wanted_features;
 netdev_features_t vlan_features;
 netdev_features_t hw_enc_features;
 netdev_features_t mpls_features;

 unsigned int min_mtu;
 unsigned int max_mtu;
 unsigned short type;
 unsigned char min_header_len;
 unsigned char name_assign_type;

 int group;

 struct net_device_stats stats;

 struct net_device_core_stats *core_stats;


 atomic_t carrier_up_count;
 atomic_t carrier_down_count;





 const struct ethtool_ops *ethtool_ops;




 const struct ndisc_ops *ndisc_ops;



 const struct xfrmdev_ops *xfrmdev_ops;



 const struct tlsdev_ops *tlsdev_ops;


 unsigned int operstate;
 unsigned char link_mode;

 unsigned char if_port;
 unsigned char dma;


 unsigned char perm_addr[32];
 unsigned char addr_assign_type;
 unsigned char addr_len;
 unsigned char upper_level;
 unsigned char lower_level;

 unsigned short neigh_priv_len;
 unsigned short dev_id;
 unsigned short dev_port;
 int irq;
 u32 priv_len;

 spinlock_t addr_list_lock;

 struct netdev_hw_addr_list uc;
 struct netdev_hw_addr_list mc;
 struct netdev_hw_addr_list dev_addrs;


 struct kset *queues_kset;


 struct list_head unlink_list;

 unsigned int promiscuity;
 unsigned int allmulti;
 bool uc_promisc;

 unsigned char nested_level;




 struct in_device *ip_ptr;

 struct vlan_info *vlan_info;





 struct tipc_bearer *tipc_ptr;
# 2244 "../include/linux/netdevice.h"
 struct wireless_dev *ieee80211_ptr;


 struct wpan_dev *ieee802154_ptr;





 struct mctp_dev *mctp_ptr;






 const unsigned char *dev_addr;

 unsigned int num_rx_queues;





 unsigned int xdp_zc_max_segs;
 struct netdev_queue *ingress_queue;




 unsigned char broadcast[32];



 struct hlist_node index_hlist;




 unsigned int num_tx_queues;
 struct Qdisc *qdisc;
 unsigned int tx_queue_len;
 spinlock_t tx_global_lock;

 struct xdp_dev_bulk_queue *xdp_bulkq;


 struct hlist_head qdisc_hash[1 << (4)];


 struct timer_list watchdog_timer;
 int watchdog_timeo;

 u32 proto_down_reason;

 struct list_head todo_list;




 refcount_t dev_refcnt;

 struct ref_tracker_dir refcnt_tracker;

 struct list_head link_watch_list;

 u8 reg_state;

 bool dismantle;

 enum {
  RTNL_LINK_INITIALIZED,
  RTNL_LINK_INITIALIZING,
 } rtnl_link_state:16;

 bool needs_free_netdev;
 void (*priv_destructor)(struct net_device *dev);


 void *ml_priv;
 enum netdev_ml_priv_type ml_priv_type;

 enum netdev_stat_type pcpu_stat_type:8;
# 2335 "../include/linux/netdevice.h"
 struct dm_hw_stat_delta *dm_private;

 struct device dev;
 const struct attribute_group *sysfs_groups[4];
 const struct attribute_group *sysfs_rx_queue_group;

 const struct rtnl_link_ops *rtnl_link_ops;

 const struct netdev_stat_ops *stat_ops;

 const struct netdev_queue_mgmt_ops *queue_mgmt_ops;
# 2357 "../include/linux/netdevice.h"
 unsigned int tso_max_size;

 u16 tso_max_segs;




 u8 prio_tc_map[15 + 1];







 struct phy_device *phydev;
 struct sfp_bus *sfp_bus;
 struct lock_class_key *qdisc_tx_busylock;
 bool proto_down;
 bool threaded;

 struct list_head net_notifier_list;





 const struct udp_tunnel_nic_info *udp_tunnel_nic_info;
 struct udp_tunnel_nic *udp_tunnel_nic;

 struct ethtool_netdev_state *ethtool;


 struct bpf_xdp_entity xdp_state[__MAX_XDP_MODE];

 u8 dev_addr_shadow[32];
 netdevice_tracker linkwatch_dev_tracker;
 netdevice_tracker watchdog_dev_tracker;
 netdevice_tracker dev_registered_tracker;
 struct rtnl_hw_stats64 *offload_xstats_l3;

 struct devlink_port *devlink_port;






 struct hlist_head page_pools;



 struct dim_irq_moder *irq_moder;

 u8 priv[] __attribute__((__aligned__((1 << (5)))))
           __attribute__((__counted_by__(priv_len)));
} __attribute__((__aligned__((1 << (5)))));
# 2427 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_elide_gro(const struct net_device *dev)
{
 if (!(dev->features & ((netdev_features_t)1 << (NETIF_F_GRO_BIT))) || dev->xdp_prog)
  return true;
 return false;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int netdev_get_prio_tc_map(const struct net_device *dev, u32 prio)
{
 return dev->prio_tc_map[prio & 15];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int netdev_set_prio_tc_map(struct net_device *dev, u8 prio, u8 tc)
{
 if (tc >= dev->num_tc)
  return -22;

 dev->prio_tc_map[prio & 15] = tc & 15;
 return 0;
}
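The two helpers above implement a 16-entry priority-to-traffic-class table, with `prio & 15` / `tc & 15` clamping both index and value to 4 bits and `-22` being `-EINVAL` after expansion. A minimal userspace sketch of the same masking scheme (the `prio_tc_map` array and helper names here are illustrative, not the kernel's struct members):

```c
#include <assert.h>

/* Userspace sketch of the prio->tc mapping in
 * netdev_{get,set}_prio_tc_map(): a 16-entry table indexed by the
 * low 4 bits of the priority, with the stored tc also masked. */
#define PRIO_TC_MASK 15

static unsigned char prio_tc_map[PRIO_TC_MASK + 1];

static int set_prio_tc_map(unsigned int prio, unsigned int tc,
                           unsigned int num_tc)
{
        if (tc >= num_tc)
                return -22;     /* -EINVAL, as in the expanded helper */
        prio_tc_map[prio & PRIO_TC_MASK] = tc & PRIO_TC_MASK;
        return 0;
}

static int get_prio_tc_map(unsigned int prio)
{
        return prio_tc_map[prio & PRIO_TC_MASK];
}
```

Note that the mask makes out-of-range priorities wrap rather than fault, so priority 19 aliases priority 3.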

int netdev_txq_to_tc(struct net_device *dev, unsigned int txq);
void netdev_reset_tc(struct net_device *dev);
int netdev_set_tc_queue(struct net_device *dev, u8 tc, u16 count, u16 offset);
int netdev_set_num_tc(struct net_device *dev, u8 num_tc);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int netdev_get_num_tc(struct net_device *dev)
{
 return dev->num_tc;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void net_prefetch(void *p)
{
 __builtin_prefetch(p);

 __builtin_prefetch((u8 *)p + (1 << (5)));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void net_prefetchw(void *p)
{
 __builtin_prefetch(p,1);

 __builtin_prefetch((u8 *)p + (1 << (5)),1);

}
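`net_prefetch()`/`net_prefetchw()` above issue `__builtin_prefetch` for the first two cache lines of a buffer; the `(1 << (5))` is the 32-byte L1 line size baked in by this hexagon config, and the trailing `1` argument in the `w` variant requests a prefetch-for-write. A standalone sketch of the read-side pattern (constant and function name are illustrative):

```c
/* Sketch of the two-cache-line read prefetch in net_prefetch().
 * Prefetch hints are advisory and never modify memory; CACHE_LINE
 * mirrors the (1 << 5) = 32-byte line size from this config. */
#define CACHE_LINE 32

static void prefetch_two_lines(const void *p)
{
        __builtin_prefetch(p);                          /* first line */
        __builtin_prefetch((const char *)p + CACHE_LINE); /* second line */
}
```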

void netdev_unbind_sb_channel(struct net_device *dev,
         struct net_device *sb_dev);
int netdev_bind_sb_channel_queue(struct net_device *dev,
     struct net_device *sb_dev,
     u8 tc, u16 count, u16 offset);
int netdev_set_sb_channel(struct net_device *dev, u16 channel);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int netdev_get_sb_channel(struct net_device *dev)
{
 return ({ int __UNIQUE_ID_x_339 = (-dev->num_tc); int __UNIQUE_ID_y_340 = (0); ((__UNIQUE_ID_x_339) > (__UNIQUE_ID_y_340) ? (__UNIQUE_ID_x_339) : (__UNIQUE_ID_y_340)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev,
      unsigned int index)
{
 (void)({ bool __ret_do_once = !!(index >= dev->num_tx_queues); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/netdevice.h", 2494, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return &dev->_tx[index];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct netdev_queue *skb_get_tx_queue(const struct net_device *dev,
          const struct sk_buff *skb)
{
 return netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_for_each_tx_queue(struct net_device *dev,
         void (*f)(struct net_device *,
            struct netdev_queue *,
            void *),
         void *arg)
{
 unsigned int i;

 for (i = 0; i < dev->num_tx_queues; i++)
  f(dev, &dev->_tx[i], arg);
}
# 2531 "../include/linux/netdevice.h"
u16 netdev_pick_tx(struct net_device *dev, struct sk_buff *skb,
       struct net_device *sb_dev);
struct netdev_queue *netdev_core_pick_tx(struct net_device *dev,
      struct sk_buff *skb,
      struct net_device *sb_dev);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned netdev_get_fwd_headroom(struct net_device *dev)
{
 return dev->priv_flags & IFF_PHONY_HEADROOM ? 0 : dev->needed_headroom;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_set_rx_headroom(struct net_device *dev, int new_hr)
{
 if (dev->netdev_ops->ndo_set_rx_headroom)
  dev->netdev_ops->ndo_set_rx_headroom(dev, new_hr);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_reset_rx_headroom(struct net_device *dev)
{
 netdev_set_rx_headroom(dev, -1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *netdev_get_ml_priv(struct net_device *dev,
           enum netdev_ml_priv_type type)
{
 if (dev->ml_priv_type != type)
  return ((void *)0);

 return dev->ml_priv;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_set_ml_priv(struct net_device *dev,
          void *ml_priv,
          enum netdev_ml_priv_type type)
{
 ({ int __ret_warn_on = !!(dev->ml_priv_type && dev->ml_priv_type != type); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/netdevice.h", 2572, 9, "Overwriting already set ml_priv_type (%u) with different ml_priv_type (%u)!\n", dev->ml_priv_type, type); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });


 ({ int __ret_warn_on = !!(!dev->ml_priv_type && dev->ml_priv); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/netdevice.h", 2574, 9, "Overwriting already set ml_priv and ml_priv_type is ML_PRIV_NONE!\n"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });


 dev->ml_priv = ml_priv;
 dev->ml_priv_type = type;
}
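The pair above implements a type-tagged private pointer: `ml_priv` is only handed back when the caller asks for the same `ml_priv_type` it was stored under, and the setter warns (via the expanded `WARN()` blocks) on type mismatches. A minimal sketch of the scheme, with an illustrative struct and enum rather than the kernel's:

```c
#include <stddef.h>

/* Sketch of the type-tagged private pointer behind
 * netdev_{get,set}_ml_priv(): the getter checks the stored tag
 * before returning the pointer. Enum values are illustrative. */
enum ml_priv_type { ML_PRIV_NONE, ML_PRIV_CAN };

struct dev_like {
        void *ml_priv;
        enum ml_priv_type ml_priv_type;
};

static void *get_ml_priv(struct dev_like *dev, enum ml_priv_type type)
{
        if (dev->ml_priv_type != type)
                return NULL;    /* wrong tag: refuse the pointer */
        return dev->ml_priv;
}

static void set_ml_priv(struct dev_like *dev, void *ml_priv,
                        enum ml_priv_type type)
{
        dev->ml_priv = ml_priv;
        dev->ml_priv_type = type;
}
```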




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct net *dev_net(const struct net_device *dev)
{
 return read_pnet(&dev->nd_net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void dev_net_set(struct net_device *dev, struct net *net)
{
 write_pnet(&dev->nd_net, net);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *netdev_priv(const struct net_device *dev)
{
 return (void *)dev->priv;
}
# 2617 "../include/linux/netdevice.h"
void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
     enum netdev_queue_type type,
     struct napi_struct *napi);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
 napi->irq = irq;
}






void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
      int (*poll)(struct napi_struct *, int), int weight);
# 2643 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
netif_napi_add(struct net_device *dev, struct napi_struct *napi,
        int (*poll)(struct napi_struct *, int))
{
 netif_napi_add_weight(dev, napi, poll, 64);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
netif_napi_add_tx_weight(struct net_device *dev,
    struct napi_struct *napi,
    int (*poll)(struct napi_struct *, int),
    int weight)
{
 set_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state);
 netif_napi_add_weight(dev, napi, poll, weight);
}
# 2670 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_napi_add_tx(struct net_device *dev,
         struct napi_struct *napi,
         int (*poll)(struct napi_struct *, int))
{
 netif_napi_add_tx_weight(dev, napi, poll, 64);
}
# 2685 "../include/linux/netdevice.h"
void __netif_napi_del(struct napi_struct *napi);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_napi_del(struct napi_struct *napi)
{
 __netif_napi_del(napi);
 synchronize_net();
}

struct packet_type {
 __be16 type;
 bool ignore_outgoing;
 struct net_device *dev;
 netdevice_tracker dev_tracker;
 int (*func) (struct sk_buff *,
      struct net_device *,
      struct packet_type *,
      struct net_device *);
 void (*list_func) (struct list_head *,
           struct packet_type *,
           struct net_device *);
 bool (*id_match)(struct packet_type *ptype,
         struct sock *sk);
 struct net *af_packet_net;
 void *af_packet_priv;
 struct list_head list;
};

struct offload_callbacks {
 struct sk_buff *(*gso_segment)(struct sk_buff *skb,
      netdev_features_t features);
 struct sk_buff *(*gro_receive)(struct list_head *head,
      struct sk_buff *skb);
 int (*gro_complete)(struct sk_buff *skb, int nhoff);
};

struct packet_offload {
 __be16 type;
 u16 priority;
 struct offload_callbacks callbacks;
 struct list_head list;
};


struct pcpu_sw_netstats {
 u64_stats_t rx_packets;
 u64_stats_t rx_bytes;
 u64_stats_t tx_packets;
 u64_stats_t tx_bytes;
 struct u64_stats_sync syncp;
} __attribute__((__aligned__(4 * sizeof(u64))));

struct pcpu_dstats {
 u64_stats_t rx_packets;
 u64_stats_t rx_bytes;
 u64_stats_t rx_drops;
 u64_stats_t tx_packets;
 u64_stats_t tx_bytes;
 u64_stats_t tx_drops;
 struct u64_stats_sync syncp;
} __attribute__((__aligned__(8 * sizeof(u64))));

struct pcpu_lstats {
 u64_stats_t packets;
 u64_stats_t bytes;
 struct u64_stats_sync syncp;
} __attribute__((__aligned__(2 * sizeof(u64))));

void dev_lstats_read(struct net_device *dev, u64 *packets, u64 *bytes);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_sw_netstats_rx_add(struct net_device *dev, unsigned int len)
{
 struct pcpu_sw_netstats *tstats = ({ (void)(0); ({ do { const void *__vpp_verify = (typeof((dev->tstats) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(dev->tstats)) *)(dev->tstats); }); });

 u64_stats_update_begin(&tstats->syncp);
 u64_stats_add(&tstats->rx_bytes, len);
 u64_stats_inc(&tstats->rx_packets);
 u64_stats_update_end(&tstats->syncp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_sw_netstats_tx_add(struct net_device *dev,
       unsigned int packets,
       unsigned int len)
{
 struct pcpu_sw_netstats *tstats = ({ (void)(0); ({ do { const void *__vpp_verify = (typeof((dev->tstats) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(dev->tstats)) *)(dev->tstats); }); });

 u64_stats_update_begin(&tstats->syncp);
 u64_stats_add(&tstats->tx_bytes, len);
 u64_stats_add(&tstats->tx_packets, packets);
 u64_stats_update_end(&tstats->syncp);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_lstats_add(struct net_device *dev, unsigned int len)
{
 struct pcpu_lstats *lstats = ({ (void)(0); ({ do { const void *__vpp_verify = (typeof((dev->lstats) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(dev->lstats)) *)(dev->lstats); }); });

 u64_stats_update_begin(&lstats->syncp);
 u64_stats_add(&lstats->bytes, len);
 u64_stats_inc(&lstats->packets);
 u64_stats_update_end(&lstats->syncp);
}
# 2823 "../include/linux/netdevice.h"
enum netdev_lag_tx_type {
 NETDEV_LAG_TX_TYPE_UNKNOWN,
 NETDEV_LAG_TX_TYPE_RANDOM,
 NETDEV_LAG_TX_TYPE_BROADCAST,
 NETDEV_LAG_TX_TYPE_ROUNDROBIN,
 NETDEV_LAG_TX_TYPE_ACTIVEBACKUP,
 NETDEV_LAG_TX_TYPE_HASH,
};

enum netdev_lag_hash {
 NETDEV_LAG_HASH_NONE,
 NETDEV_LAG_HASH_L2,
 NETDEV_LAG_HASH_L34,
 NETDEV_LAG_HASH_L23,
 NETDEV_LAG_HASH_E23,
 NETDEV_LAG_HASH_E34,
 NETDEV_LAG_HASH_VLAN_SRCMAC,
 NETDEV_LAG_HASH_UNKNOWN,
};

struct netdev_lag_upper_info {
 enum netdev_lag_tx_type tx_type;
 enum netdev_lag_hash hash_type;
};

struct netdev_lag_lower_state_info {
 u8 link_up : 1,
    tx_enabled : 1;
};







enum netdev_cmd {
 NETDEV_UP = 1,
 NETDEV_DOWN,
 NETDEV_REBOOT,



 NETDEV_CHANGE,
 NETDEV_REGISTER,
 NETDEV_UNREGISTER,
 NETDEV_CHANGEMTU,
 NETDEV_CHANGEADDR,
 NETDEV_PRE_CHANGEADDR,
 NETDEV_GOING_DOWN,
 NETDEV_CHANGENAME,
 NETDEV_FEAT_CHANGE,
 NETDEV_BONDING_FAILOVER,
 NETDEV_PRE_UP,
 NETDEV_PRE_TYPE_CHANGE,
 NETDEV_POST_TYPE_CHANGE,
 NETDEV_POST_INIT,
 NETDEV_PRE_UNINIT,
 NETDEV_RELEASE,
 NETDEV_NOTIFY_PEERS,
 NETDEV_JOIN,
 NETDEV_CHANGEUPPER,
 NETDEV_RESEND_IGMP,
 NETDEV_PRECHANGEMTU,
 NETDEV_CHANGEINFODATA,
 NETDEV_BONDING_INFO,
 NETDEV_PRECHANGEUPPER,
 NETDEV_CHANGELOWERSTATE,
 NETDEV_UDP_TUNNEL_PUSH_INFO,
 NETDEV_UDP_TUNNEL_DROP_INFO,
 NETDEV_CHANGE_TX_QUEUE_LEN,
 NETDEV_CVLAN_FILTER_PUSH_INFO,
 NETDEV_CVLAN_FILTER_DROP_INFO,
 NETDEV_SVLAN_FILTER_PUSH_INFO,
 NETDEV_SVLAN_FILTER_DROP_INFO,
 NETDEV_OFFLOAD_XSTATS_ENABLE,
 NETDEV_OFFLOAD_XSTATS_DISABLE,
 NETDEV_OFFLOAD_XSTATS_REPORT_USED,
 NETDEV_OFFLOAD_XSTATS_REPORT_DELTA,
 NETDEV_XDP_FEAT_CHANGE,
};
const char *netdev_cmd_to_name(enum netdev_cmd cmd);

int register_netdevice_notifier(struct notifier_block *nb);
int unregister_netdevice_notifier(struct notifier_block *nb);
int register_netdevice_notifier_net(struct net *net, struct notifier_block *nb);
int unregister_netdevice_notifier_net(struct net *net,
          struct notifier_block *nb);
int register_netdevice_notifier_dev_net(struct net_device *dev,
     struct notifier_block *nb,
     struct netdev_net_notifier *nn);
int unregister_netdevice_notifier_dev_net(struct net_device *dev,
       struct notifier_block *nb,
       struct netdev_net_notifier *nn);

struct netdev_notifier_info {
 struct net_device *dev;
 struct netlink_ext_ack *extack;
};

struct netdev_notifier_info_ext {
 struct netdev_notifier_info info;
 union {
  u32 mtu;
 } ext;
};

struct netdev_notifier_change_info {
 struct netdev_notifier_info info;
 unsigned int flags_changed;
};

struct netdev_notifier_changeupper_info {
 struct netdev_notifier_info info;
 struct net_device *upper_dev;
 bool master;
 bool linking;
 void *upper_info;
};

struct netdev_notifier_changelowerstate_info {
 struct netdev_notifier_info info;
 void *lower_state_info;
};

struct netdev_notifier_pre_changeaddr_info {
 struct netdev_notifier_info info;
 const unsigned char *dev_addr;
};

enum netdev_offload_xstats_type {
 NETDEV_OFFLOAD_XSTATS_TYPE_L3 = 1,
};

struct netdev_notifier_offload_xstats_info {
 struct netdev_notifier_info info;
 enum netdev_offload_xstats_type type;

 union {

  struct netdev_notifier_offload_xstats_rd *report_delta;

  struct netdev_notifier_offload_xstats_ru *report_used;
 };
};

int netdev_offload_xstats_enable(struct net_device *dev,
     enum netdev_offload_xstats_type type,
     struct netlink_ext_ack *extack);
int netdev_offload_xstats_disable(struct net_device *dev,
      enum netdev_offload_xstats_type type);
bool netdev_offload_xstats_enabled(const struct net_device *dev,
       enum netdev_offload_xstats_type type);
int netdev_offload_xstats_get(struct net_device *dev,
         enum netdev_offload_xstats_type type,
         struct rtnl_hw_stats64 *stats, bool *used,
         struct netlink_ext_ack *extack);
void
netdev_offload_xstats_report_delta(struct netdev_notifier_offload_xstats_rd *rd,
       const struct rtnl_hw_stats64 *stats);
void
netdev_offload_xstats_report_used(struct netdev_notifier_offload_xstats_ru *ru);
void netdev_offload_xstats_push_delta(struct net_device *dev,
          enum netdev_offload_xstats_type type,
          const struct rtnl_hw_stats64 *stats);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_notifier_info_init(struct netdev_notifier_info *info,
          struct net_device *dev)
{
 info->dev = dev;
 info->extack = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net_device *
netdev_notifier_info_to_dev(const struct netdev_notifier_info *info)
{
 return info->dev;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct netlink_ext_ack *
netdev_notifier_info_to_extack(const struct netdev_notifier_info *info)
{
 return info->extack;
}

int call_netdevice_notifiers(unsigned long val, struct net_device *dev);
int call_netdevice_notifiers_info(unsigned long val,
      struct netdev_notifier_info *info);
# 3036 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net_device *next_net_device(struct net_device *dev)
{
 struct list_head *lh;
 struct net *net;

 net = dev_net(dev);
 lh = dev->dev_list.next;
 return lh == &net->dev_base_head ? ((void *)0) : ({ void *__mptr = (void *)(lh); _Static_assert(__builtin_types_compatible_p(typeof(*(lh)), typeof(((struct net_device *)0)->dev_list)) || __builtin_types_compatible_p(typeof(*(lh)), typeof(void)), "pointer type mismatch in container_of()"); ((struct net_device *)(__mptr - __builtin_offsetof(struct net_device, dev_list))); });
}
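The large `({ void *__mptr = …; })` expression above is the expanded `container_of()` macro: it recovers the enclosing `struct net_device` from a pointer to its embedded `dev_list` node by subtracting the member's offset, with a `_Static_assert` type check folded in. A standalone sketch of the core pointer arithmetic (the two-field struct is hypothetical):

```c
#include <stddef.h>

/* Core of the kernel's container_of(), minus the type-check
 * _Static_assert: subtract the member offset to get back to the
 * start of the enclosing struct. */
struct list_node { struct list_node *next, *prev; };

struct device_like {
        int ifindex;
        struct list_node dev_list;      /* embedded list membership */
};

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
```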

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net_device *next_net_device_rcu(struct net_device *dev)
{
 struct list_head *lh;
 struct net *net;

 net = dev_net(dev);
 lh = ({ typeof(*((*((struct list_head **)(&(&dev->dev_list)->next))))) *__UNIQUE_ID_rcu341 = (typeof(*((*((struct list_head **)(&(&dev->dev_list)->next))))) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_342(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct list_head **)(&(&dev->dev_list)->next))))) == sizeof(char) || sizeof(((*((struct list_head **)(&(&dev->dev_list)->next))))) == sizeof(short) || sizeof(((*((struct list_head **)(&(&dev->dev_list)->next))))) == sizeof(int) || sizeof(((*((struct list_head **)(&(&dev->dev_list)->next))))) == sizeof(long)) || sizeof(((*((struct list_head **)(&(&dev->dev_list)->next))))) == sizeof(long long))) __compiletime_assert_342(); } while (0); (*(const volatile typeof( _Generic((((*((struct list_head **)(&(&dev->dev_list)->next))))), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (((*((struct list_head **)(&(&dev->dev_list)->next))))))) *)&(((*((struct list_head **)(&(&dev->dev_list)->next)))))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/netdevice.h", 3052, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*((*((struct list_head **)(&(&dev->dev_list)->next))))) *)(__UNIQUE_ID_rcu341)); });
 return lh == &net->dev_base_head ? ((void *)0) : ({ void *__mptr = (void *)(lh); _Static_assert(__builtin_types_compatible_p(typeof(*(lh)), typeof(((struct net_device *)0)->dev_list)) || __builtin_types_compatible_p(typeof(*(lh)), typeof(void)), "pointer type mismatch in container_of()"); ((struct net_device *)(__mptr - __builtin_offsetof(struct net_device, dev_list))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net_device *first_net_device(struct net *net)
{
 return list_empty(&net->dev_base_head) ? ((void *)0) :
  ({ void *__mptr = (void *)(net->dev_base_head.next); _Static_assert(__builtin_types_compatible_p(typeof(*(net->dev_base_head.next)), typeof(((struct net_device *)0)->dev_list)) || __builtin_types_compatible_p(typeof(*(net->dev_base_head.next)), typeof(void)), "pointer type mismatch in container_of()"); ((struct net_device *)(__mptr - __builtin_offsetof(struct net_device, dev_list))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct net_device *first_net_device_rcu(struct net *net)
{
 struct list_head *lh = ({ typeof(*((*((struct list_head **)(&(&net->dev_base_head)->next))))) *__UNIQUE_ID_rcu343 = (typeof(*((*((struct list_head **)(&(&net->dev_base_head)->next))))) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_344(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct list_head **)(&(&net->dev_base_head)->next))))) == sizeof(char) || sizeof(((*((struct list_head **)(&(&net->dev_base_head)->next))))) == sizeof(short) || sizeof(((*((struct list_head **)(&(&net->dev_base_head)->next))))) == sizeof(int) || sizeof(((*((struct list_head **)(&(&net->dev_base_head)->next))))) == sizeof(long)) || sizeof(((*((struct list_head **)(&(&net->dev_base_head)->next))))) == sizeof(long long))) __compiletime_assert_344(); } while (0); (*(const volatile typeof( _Generic((((*((struct list_head **)(&(&net->dev_base_head)->next))))), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (((*((struct list_head **)(&(&net->dev_base_head)->next))))))) *)&(((*((struct list_head **)(&(&net->dev_base_head)->next)))))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/netdevice.h", 3064, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*((*((struct list_head **)(&(&net->dev_base_head)->next))))) *)(__UNIQUE_ID_rcu343)); });

 return lh == &net->dev_base_head ? ((void *)0) : ({ void *__mptr = (void *)(lh); _Static_assert(__builtin_types_compatible_p(typeof(*(lh)), typeof(((struct net_device *)0)->dev_list)) || __builtin_types_compatible_p(typeof(*(lh)), typeof(void)), "pointer type mismatch in container_of()"); ((struct net_device *)(__mptr - __builtin_offsetof(struct net_device, dev_list))); });
}

int netdev_boot_setup_check(struct net_device *dev);
struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
           const char *hwaddr);
struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type);
void dev_add_pack(struct packet_type *pt);
void dev_remove_pack(struct packet_type *pt);
void __dev_remove_pack(struct packet_type *pt);
void dev_add_offload(struct packet_offload *po);
void dev_remove_offload(struct packet_offload *po);

int dev_get_iflink(const struct net_device *dev);
int dev_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb);
int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
     struct net_device_path_stack *stack);
struct net_device *__dev_get_by_flags(struct net *net, unsigned short flags,
          unsigned short mask);
struct net_device *dev_get_by_name(struct net *net, const char *name);
struct net_device *dev_get_by_name_rcu(struct net *net, const char *name);
struct net_device *__dev_get_by_name(struct net *net, const char *name);
bool netdev_name_in_use(struct net *net, const char *name);
int dev_alloc_name(struct net_device *dev, const char *name);
int dev_open(struct net_device *dev, struct netlink_ext_ack *extack);
void dev_close(struct net_device *dev);
void dev_close_many(struct list_head *head, bool unlink);
void dev_disable_lro(struct net_device *dev);
int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *newskb);
u16 dev_pick_tx_zero(struct net_device *dev, struct sk_buff *skb,
       struct net_device *sb_dev);
u16 dev_pick_tx_cpu_id(struct net_device *dev, struct sk_buff *skb,
         struct net_device *sb_dev);

int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev);
int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_queue_xmit(struct sk_buff *skb)
{
 return __dev_queue_xmit(skb, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_queue_xmit_accel(struct sk_buff *skb,
           struct net_device *sb_dev)
{
 return __dev_queue_xmit(skb, sb_dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
{
 int ret;

 ret = __dev_direct_xmit(skb, queue_id);
 if (!dev_xmit_complete(ret))
  kfree_skb(skb);
 return ret;
}

int register_netdevice(struct net_device *dev);
void unregister_netdevice_queue(struct net_device *dev, struct list_head *head);
void unregister_netdevice_many(struct list_head *head);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unregister_netdevice(struct net_device *dev)
{
 unregister_netdevice_queue(dev, ((void *)0));
}

int netdev_refcnt_read(const struct net_device *dev);
void free_netdev(struct net_device *dev);
void init_dummy_netdev(struct net_device *dev);

struct net_device *netdev_get_xmit_slave(struct net_device *dev,
      struct sk_buff *skb,
      bool all_slaves);
struct net_device *netdev_sk_get_lowest_dev(struct net_device *dev,
         struct sock *sk);
struct net_device *dev_get_by_index(struct net *net, int ifindex);
struct net_device *__dev_get_by_index(struct net *net, int ifindex);
struct net_device *netdev_get_by_index(struct net *net, int ifindex,
           netdevice_tracker *tracker, gfp_t gfp);
struct net_device *netdev_get_by_name(struct net *net, const char *name,
          netdevice_tracker *tracker, gfp_t gfp);
struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
struct net_device *dev_get_by_napi_id(unsigned int napi_id);
void netdev_copy_name(struct net_device *dev, char *name);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_hard_header(struct sk_buff *skb, struct net_device *dev,
      unsigned short type,
      const void *daddr, const void *saddr,
      unsigned int len)
{
 if (!dev->header_ops || !dev->header_ops->create)
  return 0;

 return dev->header_ops->create(skb, dev, type, daddr, saddr, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_parse_header(const struct sk_buff *skb,
       unsigned char *haddr)
{
 const struct net_device *dev = skb->dev;

 if (!dev->header_ops || !dev->header_ops->parse)
  return 0;
 return dev->header_ops->parse(skb, haddr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be16 dev_parse_header_protocol(const struct sk_buff *skb)
{
 const struct net_device *dev = skb->dev;

 if (!dev->header_ops || !dev->header_ops->parse_protocol)
  return 0;
 return dev->header_ops->parse_protocol(skb);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_validate_header(const struct net_device *dev,
           char *ll_header, int len)
{
 if (__builtin_expect(!!(len >= dev->hard_header_len), 1))
  return true;
 if (len < dev->min_header_len)
  return false;

 if (capable(17)) {
  memset(ll_header + len, 0, dev->hard_header_len - len);
  return true;
 }

 if (dev->header_ops && dev->header_ops->validate)
  return dev->header_ops->validate(ll_header, len);

 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_has_header(const struct net_device *dev)
{
 return dev->header_ops && dev->header_ops->create;
}




struct softnet_data {
 struct list_head poll_list;
 struct sk_buff_head process_queue;
 local_lock_t process_queue_bh_lock;


 unsigned int processed;
 unsigned int time_squeeze;




 unsigned int received_rps;
 bool in_net_rx_action;
 bool in_napi_threaded_poll;




 struct Qdisc *output_queue;
 struct Qdisc **output_queue_tailp;
 struct sk_buff *completion_queue;

 struct sk_buff_head xfrm_backlog;


 struct netdev_xmit xmit;
# 3248 "../include/linux/netdevice.h"
 struct sk_buff_head input_pkt_queue;
 struct napi_struct backlog;

 atomic_t dropped;


 spinlock_t defer_lock;
 int defer_count;
 int defer_ipi_scheduled;
 struct sk_buff *defer_list;
 call_single_data_t defer_csd;
};

extern __attribute__((__section__(".discard"))) __attribute__((unused)) char __pcpu_scope_softnet_data; extern __attribute__((section(".data" "..shared_aligned"))) __typeof__(struct softnet_data) softnet_data __attribute__((__aligned__((1 << (5)))));


static inline int dev_recursion_level(void)
{
 return this_cpu_read(softnet_data.xmit.recursion);
}
# 3276 "../include/linux/netdevice.h"
void __netif_schedule(struct Qdisc *q);
void netif_schedule_queue(struct netdev_queue *txq);

static inline void netif_tx_schedule_all(struct net_device *dev)
{
 unsigned int i;

 for (i = 0; i < dev->num_tx_queues; i++)
  netif_schedule_queue(netdev_get_tx_queue(dev, i));
}

static __always_inline void netif_tx_start_queue(struct netdev_queue *dev_queue)
{
 clear_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}







static inline void netif_start_queue(struct net_device *dev)
{
 netif_tx_start_queue(netdev_get_tx_queue(dev, 0));
}

static inline void netif_tx_start_all_queues(struct net_device *dev)
{
 unsigned int i;

 for (i = 0; i < dev->num_tx_queues; i++) {
  struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
  netif_tx_start_queue(txq);
 }
}

void netif_tx_wake_queue(struct netdev_queue *dev_queue);
# 3322 "../include/linux/netdevice.h"
static inline void netif_wake_queue(struct net_device *dev)
{
 netif_tx_wake_queue(netdev_get_tx_queue(dev, 0));
}

static inline void netif_tx_wake_all_queues(struct net_device *dev)
{
 unsigned int i;

 for (i = 0; i < dev->num_tx_queues; i++) {
  struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
  netif_tx_wake_queue(txq);
 }
}

static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
{

 set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}
# 3350 "../include/linux/netdevice.h"
static inline void netif_stop_queue(struct net_device *dev)
{
 netif_tx_stop_queue(netdev_get_tx_queue(dev, 0));
}

void netif_tx_stop_all_queues(struct net_device *dev);

static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue)
{
 return test_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}







static inline bool netif_queue_stopped(const struct net_device *dev)
{
 return netif_tx_queue_stopped(netdev_get_tx_queue(dev, 0));
}

static inline bool netif_xmit_stopped(const struct netdev_queue *dev_queue)
{
 return dev_queue->state & QUEUE_STATE_ANY_XOFF;
}

static inline bool
netif_xmit_frozen_or_stopped(const struct netdev_queue *dev_queue)
{
 return dev_queue->state & QUEUE_STATE_ANY_XOFF_OR_FROZEN;
}

static inline bool
netif_xmit_frozen_or_drv_stopped(const struct netdev_queue *dev_queue)
{
 return dev_queue->state & QUEUE_STATE_DRV_XOFF_OR_FROZEN;
}
# 3400 "../include/linux/netdevice.h"
static inline void netdev_queue_set_dql_min_limit(struct netdev_queue *dev_queue,
          unsigned int min_limit)
{



}

static inline int netdev_queue_dql_avail(const struct netdev_queue *txq)
{




 return 0;

}
# 3425 "../include/linux/netdevice.h"
static inline void netdev_txq_bql_enqueue_prefetchw(struct netdev_queue *dev_queue)
{



}
# 3439 "../include/linux/netdevice.h"
static inline void netdev_txq_bql_complete_prefetchw(struct netdev_queue *dev_queue)
{



}
# 3456 "../include/linux/netdevice.h"
static inline void netdev_tx_sent_queue(struct netdev_queue *dev_queue,
     unsigned int bytes)
{
# 3478 "../include/linux/netdevice.h"
}







static inline bool __netdev_tx_sent_queue(struct netdev_queue *dev_queue,
       unsigned int bytes,
       bool xmit_more)
{
 if (xmit_more) {



  return netif_tx_queue_stopped(dev_queue);
 }
 netdev_tx_sent_queue(dev_queue, bytes);
 return true;
}
# 3510 "../include/linux/netdevice.h"
static inline void netdev_sent_queue(struct net_device *dev, unsigned int bytes)
{
 netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), bytes);
}

static inline bool __netdev_sent_queue(struct net_device *dev,
           unsigned int bytes,
           bool xmit_more)
{
 return __netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), bytes,
          xmit_more);
}
# 3532 "../include/linux/netdevice.h"
static inline void netdev_tx_completed_queue(struct netdev_queue *dev_queue,
          unsigned int pkts, unsigned int bytes)
{
# 3554 "../include/linux/netdevice.h"
}
# 3566 "../include/linux/netdevice.h"
static inline void netdev_completed_queue(struct net_device *dev,
       unsigned int pkts, unsigned int bytes)
{
 netdev_tx_completed_queue(netdev_get_tx_queue(dev, 0), pkts, bytes);
}

static inline void netdev_tx_reset_queue(struct netdev_queue *q)
{




}
# 3587 "../include/linux/netdevice.h"
static inline void netdev_reset_queue(struct net_device *dev_queue)
{
 netdev_tx_reset_queue(netdev_get_tx_queue(dev_queue, 0));
}
# 3600 "../include/linux/netdevice.h"
static inline u16 netdev_cap_txqueue(struct net_device *dev, u16 queue_index)
{
 if (unlikely(queue_index >= dev->real_num_tx_queues)) {
  net_warn_ratelimited("%s selects TX queue %d, but real number of TX queues is %d\n",
         dev->name, queue_index,
         dev->real_num_tx_queues);

  return 0;
 }

 return queue_index;
}







static inline bool netif_running(const struct net_device *dev)
{
 return test_bit(__LINK_STATE_START, &dev->state);
}
# 3637 "../include/linux/netdevice.h"
static inline void netif_start_subqueue(struct net_device *dev, u16 queue_index)
{
 struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);

 netif_tx_start_queue(txq);
}
# 3651 "../include/linux/netdevice.h"
static inline void netif_stop_subqueue(struct net_device *dev, u16 queue_index)
{
 struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);
 netif_tx_stop_queue(txq);
}
# 3664 "../include/linux/netdevice.h"
static inline bool __netif_subqueue_stopped(const struct net_device *dev,
         u16 queue_index)
{
 struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);

 return netif_tx_queue_stopped(txq);
}
# 3679 "../include/linux/netdevice.h"
static inline bool netif_subqueue_stopped(const struct net_device *dev,
       struct sk_buff *skb)
{
 return __netif_subqueue_stopped(dev, skb_get_queue_mapping(skb));
}
# 3692 "../include/linux/netdevice.h"
static inline void netif_wake_subqueue(struct net_device *dev, u16 queue_index)
{
 struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);

 netif_tx_wake_queue(txq);
}
# 3789 "../include/linux/netdevice.h"
static inline int netif_set_xps_queue(struct net_device *dev,
          const struct cpumask *mask,
          u16 index)
{
 return 0;
}

static inline int __netif_set_xps_queue(struct net_device *dev,
     const unsigned long *mask,
     u16 index, enum xps_map_type type)
{
 return 0;
}
# 3810 "../include/linux/netdevice.h"
static inline bool netif_is_multiqueue(const struct net_device *dev)
{
 return dev->num_tx_queues > 1;
}

int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq);


int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq);
# 3827 "../include/linux/netdevice.h"
int netif_set_real_num_queues(struct net_device *dev,
         unsigned int txq, unsigned int rxq);

int netif_get_num_default_rss_queues(void);

void dev_kfree_skb_irq_reason(struct sk_buff *skb, enum skb_drop_reason reason);
void dev_kfree_skb_any_reason(struct sk_buff *skb, enum skb_drop_reason reason);
# 3854 "../include/linux/netdevice.h"
static inline void dev_kfree_skb_irq(struct sk_buff *skb)
{
 dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}

static inline void dev_consume_skb_irq(struct sk_buff *skb)
{
 dev_kfree_skb_irq_reason(skb, SKB_CONSUMED);
}

static inline void dev_kfree_skb_any(struct sk_buff *skb)
{
 dev_kfree_skb_any_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}

static inline void dev_consume_skb_any(struct sk_buff *skb)
{
 dev_kfree_skb_any_reason(skb, SKB_CONSUMED);
}

u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
        struct bpf_prog *xdp_prog);
void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb);
int netif_rx(struct sk_buff *skb);
int __netif_rx(struct sk_buff *skb);

int netif_receive_skb(struct sk_buff *skb);
int netif_receive_skb_core(struct sk_buff *skb);
void netif_receive_skb_list_internal(struct list_head *head);
void netif_receive_skb_list(struct list_head *head);
gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb);
void napi_gro_flush(struct napi_struct *napi, bool flush_old);
struct sk_buff *napi_get_frags(struct napi_struct *napi);
void napi_get_frags_check(struct napi_struct *napi);
gro_result_t napi_gro_frags(struct napi_struct *napi);

static inline void napi_free_frags(struct napi_struct *napi)
{
 kfree_skb(napi->skb);
 napi->skb = NULL;
}

bool netdev_is_rx_handler_busy(struct net_device *dev);
int netdev_rx_handler_register(struct net_device *dev,
          rx_handler_func_t *rx_handler,
          void *rx_handler_data);
void netdev_rx_handler_unregister(struct net_device *dev);

bool dev_valid_name(const char *name);
static inline bool is_socket_ioctl_cmd(unsigned int cmd)
{
 return _IOC_TYPE(cmd) == SOCK_IOC_TYPE;
}
int get_user_ifreq(struct ifreq *ifr, void **ifrdata, void *arg);
int put_user_ifreq(struct ifreq *ifr, void *arg);
int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr,
  void *data, bool *need_copyout);
int dev_ifconf(struct net *net, struct ifconf *ifc);
int generic_hwtstamp_get_lower(struct net_device *dev,
          struct kernel_hwtstamp_config *kernel_cfg);
int generic_hwtstamp_set_lower(struct net_device *dev,
          struct kernel_hwtstamp_config *kernel_cfg,
          struct netlink_ext_ack *extack);
int dev_ethtool(struct net *net, struct ifreq *ifr, void *userdata);
unsigned int dev_get_flags(const struct net_device *);
int __dev_change_flags(struct net_device *dev, unsigned int flags,
         struct netlink_ext_ack *extack);
int dev_change_flags(struct net_device *dev, unsigned int flags,
       struct netlink_ext_ack *extack);
int dev_set_alias(struct net_device *, const char *, size_t);
int dev_get_alias(const struct net_device *, char *, size_t);
int __dev_change_net_namespace(struct net_device *dev, struct net *net,
          const char *pat, int new_ifindex);
static inline
int dev_change_net_namespace(struct net_device *dev, struct net *net,
        const char *pat)
{
 return __dev_change_net_namespace(dev, net, pat, 0);
}
int __dev_set_mtu(struct net_device *, int);
int dev_set_mtu(struct net_device *, int);
int dev_pre_changeaddr_notify(struct net_device *dev, const char *addr,
         struct netlink_ext_ack *extack);
int dev_set_mac_address(struct net_device *dev, struct sockaddr *sa,
   struct netlink_ext_ack *extack);
int dev_set_mac_address_user(struct net_device *dev, struct sockaddr *sa,
        struct netlink_ext_ack *extack);
int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name);
int dev_get_port_parent_id(struct net_device *dev,
      struct netdev_phys_item_id *ppid, bool recurse);
bool netdev_port_same_parent_id(struct net_device *a, struct net_device *b);

struct sk_buff *validate_xmit_skb_list(struct sk_buff *skb, struct net_device *dev, bool *again);
struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
        struct netdev_queue *txq, int *ret);

int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
u8 dev_xdp_prog_count(struct net_device *dev);
u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode);

int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
int dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
int dev_forward_skb_nomtu(struct net_device *dev, struct sk_buff *skb);
bool is_skb_forwardable(const struct net_device *dev,
   const struct sk_buff *skb);

static __always_inline bool __is_skb_forwardable(const struct net_device *dev,
       const struct sk_buff *skb,
       const bool check_mtu)
{
 const u32 vlan_hdr_len = 4;
 unsigned int len;

 if (!(dev->flags & IFF_UP))
  return false;

 if (!check_mtu)
  return true;

 len = dev->mtu + dev->hard_header_len + vlan_hdr_len;
 if (skb->len <= len)
  return true;




 if (skb_is_gso(skb))
  return true;

 return false;
}

void netdev_core_stats_inc(struct net_device *dev, u32 offset);







static inline void dev_core_stats_rx_dropped_inc(struct net_device *dev) { netdev_core_stats_inc(dev, offsetof(struct net_device_core_stats, rx_dropped)); }
static inline void dev_core_stats_tx_dropped_inc(struct net_device *dev) { netdev_core_stats_inc(dev, offsetof(struct net_device_core_stats, tx_dropped)); }
static inline void dev_core_stats_rx_nohandler_inc(struct net_device *dev) { netdev_core_stats_inc(dev, offsetof(struct net_device_core_stats, rx_nohandler)); }
static inline void dev_core_stats_rx_otherhost_dropped_inc(struct net_device *dev) { netdev_core_stats_inc(dev, offsetof(struct net_device_core_stats, rx_otherhost_dropped)); }


static __always_inline int ____dev_forward_skb(struct net_device *dev,
            struct sk_buff *skb,
            const bool check_mtu)
{
 if (skb_orphan_frags(skb, GFP_ATOMIC) ||
     unlikely(!__is_skb_forwardable(dev, skb, check_mtu))) {
  dev_core_stats_rx_dropped_inc(dev);
  kfree_skb(skb);
  return 1;
 }

 skb_scrub_packet(skb, !net_eq(dev_net(dev), dev_net(skb->dev)));
 skb->priority = 0;
 return 0;
}

bool dev_nit_active(struct net_device *dev);
void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dev_put(struct net_device *dev)
{
 if (dev) {



  refcount_dec(&dev->dev_refcnt);

 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dev_hold(struct net_device *dev)
{
 if (dev) {



  refcount_inc(&dev->dev_refcnt);

 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netdev_tracker_alloc(struct net_device *dev,
       netdevice_tracker *tracker,
       gfp_t gfp)
{

 ref_tracker_alloc(&dev->refcnt_tracker, tracker, gfp);

}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_tracker_alloc(struct net_device *dev,
     netdevice_tracker *tracker, gfp_t gfp)
{

 refcount_dec(&dev->refcnt_tracker.no_tracker);
 __netdev_tracker_alloc(dev, tracker, gfp);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_tracker_free(struct net_device *dev,
           netdevice_tracker *tracker)
{

 ref_tracker_free(&dev->refcnt_tracker, tracker);

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_hold(struct net_device *dev,
          netdevice_tracker *tracker, gfp_t gfp)
{
 if (dev) {
  __dev_hold(dev);
  __netdev_tracker_alloc(dev, tracker, gfp);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_put(struct net_device *dev,
         netdevice_tracker *tracker)
{
 if (dev) {
  netdev_tracker_free(dev, tracker);
  __dev_put(dev);
 }
}
# 4096 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_hold(struct net_device *dev)
{
 netdev_hold(dev, ((void *)0), ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))));
}
# 4108 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_put(struct net_device *dev)
{
 netdev_put(dev, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __free_dev_put(void *p) { struct net_device * _T = *(struct net_device * *)p; if (_T) dev_put(_T); }

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_ref_replace(struct net_device *odev,
          struct net_device *ndev,
          netdevice_tracker *tracker,
          gfp_t gfp)
{
 if (odev)
  netdev_tracker_free(odev, tracker);

 __dev_hold(ndev);
 __dev_put(odev);

 if (ndev)
  __netdev_tracker_alloc(ndev, tracker, gfp);
}
# 4138 "../include/linux/netdevice.h"
void linkwatch_fire_event(struct net_device *dev);
# 4147 "../include/linux/netdevice.h"
void linkwatch_sync_dev(struct net_device *dev);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_carrier_ok(const struct net_device *dev)
{
 return !((__builtin_constant_p(__LINK_STATE_NOCARRIER) && __builtin_constant_p((uintptr_t)(&dev->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&dev->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&dev->state))) ? const_test_bit(__LINK_STATE_NOCARRIER, &dev->state) : arch_test_bit(__LINK_STATE_NOCARRIER, &dev->state));
}

unsigned long dev_trans_start(struct net_device *dev);

void __netdev_watchdog_up(struct net_device *dev);

void netif_carrier_on(struct net_device *dev);
void netif_carrier_off(struct net_device *dev);
void netif_carrier_event(struct net_device *dev);
# 4180 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_dormant_on(struct net_device *dev)
{
 if (!test_and_set_bit(__LINK_STATE_DORMANT, &dev->state))
  linkwatch_fire_event(dev);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_dormant_off(struct net_device *dev)
{
 if (test_and_clear_bit(__LINK_STATE_DORMANT, &dev->state))
  linkwatch_fire_event(dev);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_dormant(const struct net_device *dev)
{
 return ((__builtin_constant_p(__LINK_STATE_DORMANT) && __builtin_constant_p((uintptr_t)(&dev->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&dev->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&dev->state))) ? const_test_bit(__LINK_STATE_DORMANT, &dev->state) : arch_test_bit(__LINK_STATE_DORMANT, &dev->state));
}
# 4220 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_testing_on(struct net_device *dev)
{
 if (!test_and_set_bit(__LINK_STATE_TESTING, &dev->state))
  linkwatch_fire_event(dev);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_testing_off(struct net_device *dev)
{
 if (test_and_clear_bit(__LINK_STATE_TESTING, &dev->state))
  linkwatch_fire_event(dev);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_testing(const struct net_device *dev)
{
 return ((__builtin_constant_p(__LINK_STATE_TESTING) && __builtin_constant_p((uintptr_t)(&dev->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&dev->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&dev->state))) ? const_test_bit(__LINK_STATE_TESTING, &dev->state) : arch_test_bit(__LINK_STATE_TESTING, &dev->state));
}
# 4256 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_oper_up(const struct net_device *dev)
{
 unsigned int operstate = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_349(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(dev->operstate) == sizeof(char) || sizeof(dev->operstate) == sizeof(short) || sizeof(dev->operstate) == sizeof(int) || sizeof(dev->operstate) == sizeof(long)) || sizeof(dev->operstate) == sizeof(long long))) __compiletime_assert_349(); } while (0); (*(const volatile typeof( _Generic((dev->operstate), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (dev->operstate))) *)&(dev->operstate)); });

 return operstate == IF_OPER_UP ||
  operstate == IF_OPER_UNKNOWN ;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_device_present(const struct net_device *dev)
{
 return ((__builtin_constant_p(__LINK_STATE_PRESENT) && __builtin_constant_p((uintptr_t)(&dev->state) != (uintptr_t)((void *)0)) && (uintptr_t)(&dev->state) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&dev->state))) ? const_test_bit(__LINK_STATE_PRESENT, &dev->state) : arch_test_bit(__LINK_STATE_PRESENT, &dev->state));
}

void netif_device_detach(struct net_device *dev);

void netif_device_attach(struct net_device *dev);





enum {
 NETIF_MSG_DRV_BIT,
 NETIF_MSG_PROBE_BIT,
 NETIF_MSG_LINK_BIT,
 NETIF_MSG_TIMER_BIT,
 NETIF_MSG_IFDOWN_BIT,
 NETIF_MSG_IFUP_BIT,
 NETIF_MSG_RX_ERR_BIT,
 NETIF_MSG_TX_ERR_BIT,
 NETIF_MSG_TX_QUEUED_BIT,
 NETIF_MSG_INTR_BIT,
 NETIF_MSG_TX_DONE_BIT,
 NETIF_MSG_RX_STATUS_BIT,
 NETIF_MSG_PKTDATA_BIT,
 NETIF_MSG_HW_BIT,
 NETIF_MSG_WOL_BIT,




 NETIF_MSG_CLASS_COUNT,
};

_Static_assert(NETIF_MSG_CLASS_COUNT <= 32, "NETIF_MSG_CLASS_COUNT <= 32");
# 4343 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
{

 if (debug_value < 0 || debug_value >= (sizeof(u32) * 8))
  return default_msg_enable_bits;
 if (debug_value == 0)
  return 0;

 return (1U << debug_value) - 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netif_tx_lock(struct netdev_queue *txq, int cpu)
{
 spin_lock(&txq->_xmit_lock);

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_350(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->xmit_lock_owner) == sizeof(char) || sizeof(txq->xmit_lock_owner) == sizeof(short) || sizeof(txq->xmit_lock_owner) == sizeof(int) || sizeof(txq->xmit_lock_owner) == sizeof(long)) || sizeof(txq->xmit_lock_owner) == sizeof(long long))) __compiletime_assert_350(); } while (0); do { *(volatile typeof(txq->xmit_lock_owner) *)&(txq->xmit_lock_owner) = (cpu); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __netif_tx_acquire(struct netdev_queue *txq)
{
 (void)0;
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netif_tx_release(struct netdev_queue *txq)
{
 (void)0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netif_tx_lock_bh(struct netdev_queue *txq)
{
 spin_lock_bh(&txq->_xmit_lock);

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_351(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->xmit_lock_owner) == sizeof(char) || sizeof(txq->xmit_lock_owner) == sizeof(short) || sizeof(txq->xmit_lock_owner) == sizeof(int) || sizeof(txq->xmit_lock_owner) == sizeof(long)) || sizeof(txq->xmit_lock_owner) == sizeof(long long))) __compiletime_assert_351(); } while (0); do { *(volatile typeof(txq->xmit_lock_owner) *)&(txq->xmit_lock_owner) = (0); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __netif_tx_trylock(struct netdev_queue *txq)
{
 bool ok = spin_trylock(&txq->_xmit_lock);

 if (__builtin_expect(!!(ok), 1)) {

  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_352(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->xmit_lock_owner) == sizeof(char) || sizeof(txq->xmit_lock_owner) == sizeof(short) || sizeof(txq->xmit_lock_owner) == sizeof(int) || sizeof(txq->xmit_lock_owner) == sizeof(long)) || sizeof(txq->xmit_lock_owner) == sizeof(long long))) __compiletime_assert_352(); } while (0); do { *(volatile typeof(txq->xmit_lock_owner) *)&(txq->xmit_lock_owner) = (0); } while (0); } while (0);
 }
 return ok;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netif_tx_unlock(struct netdev_queue *txq)
{

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_353(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->xmit_lock_owner) == sizeof(char) || sizeof(txq->xmit_lock_owner) == sizeof(short) || sizeof(txq->xmit_lock_owner) == sizeof(int) || sizeof(txq->xmit_lock_owner) == sizeof(long)) || sizeof(txq->xmit_lock_owner) == sizeof(long long))) __compiletime_assert_353(); } while (0); do { *(volatile typeof(txq->xmit_lock_owner) *)&(txq->xmit_lock_owner) = (-1); } while (0); } while (0);
 spin_unlock(&txq->_xmit_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __netif_tx_unlock_bh(struct netdev_queue *txq)
{

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_354(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->xmit_lock_owner) == sizeof(char) || sizeof(txq->xmit_lock_owner) == sizeof(short) || sizeof(txq->xmit_lock_owner) == sizeof(int) || sizeof(txq->xmit_lock_owner) == sizeof(long)) || sizeof(txq->xmit_lock_owner) == sizeof(long long))) __compiletime_assert_354(); } while (0); do { *(volatile typeof(txq->xmit_lock_owner) *)&(txq->xmit_lock_owner) = (-1); } while (0); } while (0);
 spin_unlock_bh(&txq->_xmit_lock);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void txq_trans_update(struct netdev_queue *txq)
{
 if (txq->xmit_lock_owner != -1)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_355(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->trans_start) == sizeof(char) || sizeof(txq->trans_start) == sizeof(short) || sizeof(txq->trans_start) == sizeof(int) || sizeof(txq->trans_start) == sizeof(long)) || sizeof(txq->trans_start) == sizeof(long long))) __compiletime_assert_355(); } while (0); do { *(volatile typeof(txq->trans_start) *)&(txq->trans_start) = (jiffies); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void txq_trans_cond_update(struct netdev_queue *txq)
{
 unsigned long now = jiffies;

 if (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_356(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->trans_start) == sizeof(char) || sizeof(txq->trans_start) == sizeof(short) || sizeof(txq->trans_start) == sizeof(int) || sizeof(txq->trans_start) == sizeof(long)) || sizeof(txq->trans_start) == sizeof(long long))) __compiletime_assert_356(); } while (0); (*(const volatile typeof( _Generic((txq->trans_start), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (txq->trans_start))) *)&(txq->trans_start)); }) != now)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_357(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(txq->trans_start) == sizeof(char) || sizeof(txq->trans_start) == sizeof(short) || sizeof(txq->trans_start) == sizeof(int) || sizeof(txq->trans_start) == sizeof(long)) || sizeof(txq->trans_start) == sizeof(long long))) __compiletime_assert_357(); } while (0); do { *(volatile typeof(txq->trans_start) *)&(txq->trans_start) = (now); } while (0); } while (0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_trans_update(struct net_device *dev)
{
 struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

 txq_trans_cond_update(txq);
}







void netif_tx_lock(struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_tx_lock_bh(struct net_device *dev)
{
 local_bh_disable();
 netif_tx_lock(dev);
}

void netif_tx_unlock(struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_tx_unlock_bh(struct net_device *dev)
{
 netif_tx_unlock(dev);
 local_bh_enable();
}
# 4472 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_tx_disable(struct net_device *dev)
{
 unsigned int i;
 int cpu;

 local_bh_disable();
 cpu = 0;
 spin_lock(&dev->tx_global_lock);
 for (i = 0; i < dev->num_tx_queues; i++) {
  struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

  __netif_tx_lock(txq, cpu);
  netif_tx_stop_queue(txq);
  __netif_tx_unlock(txq);
 }
 spin_unlock(&dev->tx_global_lock);
 local_bh_enable();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_addr_lock(struct net_device *dev)
{
 unsigned char nest_level = 0;


 nest_level = dev->nested_level;

 do { _raw_spin_lock_nested(spinlock_check(&dev->addr_list_lock), nest_level); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_addr_lock_bh(struct net_device *dev)
{
 unsigned char nest_level = 0;


 nest_level = dev->nested_level;

 local_bh_disable();
 do { _raw_spin_lock_nested(spinlock_check(&dev->addr_list_lock), nest_level); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_addr_unlock(struct net_device *dev)
{
 spin_unlock(&dev->addr_list_lock);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_addr_unlock_bh(struct net_device *dev)
{
 spin_unlock_bh(&dev->addr_list_lock);
}
# 4531 "../include/linux/netdevice.h"
void ether_setup(struct net_device *dev);


struct net_device *alloc_netdev_dummy(int sizeof_priv);


struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
        unsigned char name_assign_type,
        void (*setup)(struct net_device *),
        unsigned int txqs, unsigned int rxqs);







int register_netdev(struct net_device *dev);
void unregister_netdev(struct net_device *dev);

int devm_register_netdev(struct device *dev, struct net_device *ndev);


int __hw_addr_sync(struct netdev_hw_addr_list *to_list,
     struct netdev_hw_addr_list *from_list, int addr_len);
void __hw_addr_unsync(struct netdev_hw_addr_list *to_list,
        struct netdev_hw_addr_list *from_list, int addr_len);
int __hw_addr_sync_dev(struct netdev_hw_addr_list *list,
         struct net_device *dev,
         int (*sync)(struct net_device *, const unsigned char *),
         int (*unsync)(struct net_device *,
         const unsigned char *));
int __hw_addr_ref_sync_dev(struct netdev_hw_addr_list *list,
      struct net_device *dev,
      int (*sync)(struct net_device *,
           const unsigned char *, int),
      int (*unsync)(struct net_device *,
      const unsigned char *, int));
void __hw_addr_ref_unsync_dev(struct netdev_hw_addr_list *list,
         struct net_device *dev,
         int (*unsync)(struct net_device *,
         const unsigned char *, int));
void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list,
     struct net_device *dev,
     int (*unsync)(struct net_device *,
     const unsigned char *));
void __hw_addr_init(struct netdev_hw_addr_list *list);


void dev_addr_mod(struct net_device *dev, unsigned int offset,
    const void *addr, size_t len);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
__dev_addr_set(struct net_device *dev, const void *addr, size_t len)
{
 dev_addr_mod(dev, 0, addr, len);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dev_addr_set(struct net_device *dev, const u8 *addr)
{
 __dev_addr_set(dev, addr, dev->addr_len);
}

int dev_addr_add(struct net_device *dev, const unsigned char *addr,
   unsigned char addr_type);
int dev_addr_del(struct net_device *dev, const unsigned char *addr,
   unsigned char addr_type);


int dev_uc_add(struct net_device *dev, const unsigned char *addr);
int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr);
int dev_uc_del(struct net_device *dev, const unsigned char *addr);
int dev_uc_sync(struct net_device *to, struct net_device *from);
int dev_uc_sync_multiple(struct net_device *to, struct net_device *from);
void dev_uc_unsync(struct net_device *to, struct net_device *from);
void dev_uc_flush(struct net_device *dev);
void dev_uc_init(struct net_device *dev);
# 4618 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __dev_uc_sync(struct net_device *dev,
    int (*sync)(struct net_device *,
         const unsigned char *),
    int (*unsync)(struct net_device *,
           const unsigned char *))
{
 return __hw_addr_sync_dev(&dev->uc, dev, sync, unsync);
}
# 4634 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dev_uc_unsync(struct net_device *dev,
       int (*unsync)(struct net_device *,
       const unsigned char *))
{
 __hw_addr_unsync_dev(&dev->uc, dev, unsync);
}


int dev_mc_add(struct net_device *dev, const unsigned char *addr);
int dev_mc_add_global(struct net_device *dev, const unsigned char *addr);
int dev_mc_add_excl(struct net_device *dev, const unsigned char *addr);
int dev_mc_del(struct net_device *dev, const unsigned char *addr);
int dev_mc_del_global(struct net_device *dev, const unsigned char *addr);
int dev_mc_sync(struct net_device *to, struct net_device *from);
int dev_mc_sync_multiple(struct net_device *to, struct net_device *from);
void dev_mc_unsync(struct net_device *to, struct net_device *from);
void dev_mc_flush(struct net_device *dev);
void dev_mc_init(struct net_device *dev);
# 4662 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __dev_mc_sync(struct net_device *dev,
    int (*sync)(struct net_device *,
         const unsigned char *),
    int (*unsync)(struct net_device *,
           const unsigned char *))
{
 return __hw_addr_sync_dev(&dev->mc, dev, sync, unsync);
}
# 4678 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __dev_mc_unsync(struct net_device *dev,
       int (*unsync)(struct net_device *,
       const unsigned char *))
{
 __hw_addr_unsync_dev(&dev->mc, dev, unsync);
}


void dev_set_rx_mode(struct net_device *dev);
int dev_set_promiscuity(struct net_device *dev, int inc);
int dev_set_allmulti(struct net_device *dev, int inc);
void netdev_state_change(struct net_device *dev);
void __netdev_notify_peers(struct net_device *dev);
void netdev_notify_peers(struct net_device *dev);
void netdev_features_change(struct net_device *dev);

void dev_load(struct net *net, const char *name);
struct rtnl_link_stats64 *dev_get_stats(struct net_device *dev,
     struct rtnl_link_stats64 *storage);
void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
        const struct net_device_stats *netdev_stats);
void dev_fetch_sw_netstats(struct rtnl_link_stats64 *s,
      const struct pcpu_sw_netstats *netstats);
void dev_get_tstats64(struct net_device *dev, struct rtnl_link_stats64 *s);

enum {
 NESTED_SYNC_IMM_BIT,
 NESTED_SYNC_TODO_BIT,
};







struct netdev_nested_priv {
 unsigned char flags;
 void *data;
};

bool netdev_has_upper_dev(struct net_device *dev, struct net_device *upper_dev);
struct net_device *netdev_upper_get_next_dev_rcu(struct net_device *dev,
           struct list_head **iter);
# 4730 "../include/linux/netdevice.h"
int netdev_walk_all_upper_dev_rcu(struct net_device *dev,
      int (*fn)(struct net_device *upper_dev,
         struct netdev_nested_priv *priv),
      struct netdev_nested_priv *priv);

bool netdev_has_upper_dev_all_rcu(struct net_device *dev,
      struct net_device *upper_dev);

bool netdev_has_any_upper_dev(struct net_device *dev);

void *netdev_lower_get_next_private(struct net_device *dev,
        struct list_head **iter);
void *netdev_lower_get_next_private_rcu(struct net_device *dev,
     struct list_head **iter);
# 4757 "../include/linux/netdevice.h"
void *netdev_lower_get_next(struct net_device *dev,
    struct list_head **iter);







struct net_device *netdev_next_lower_dev_rcu(struct net_device *dev,
          struct list_head **iter);
int netdev_walk_all_lower_dev(struct net_device *dev,
         int (*fn)(struct net_device *lower_dev,
     struct netdev_nested_priv *priv),
         struct netdev_nested_priv *priv);
int netdev_walk_all_lower_dev_rcu(struct net_device *dev,
      int (*fn)(struct net_device *lower_dev,
         struct netdev_nested_priv *priv),
      struct netdev_nested_priv *priv);

void *netdev_adjacent_get_private(struct list_head *adj_list);
void *netdev_lower_get_first_private_rcu(struct net_device *dev);
struct net_device *netdev_master_upper_dev_get(struct net_device *dev);
struct net_device *netdev_master_upper_dev_get_rcu(struct net_device *dev);
int netdev_upper_dev_link(struct net_device *dev, struct net_device *upper_dev,
     struct netlink_ext_ack *extack);
int netdev_master_upper_dev_link(struct net_device *dev,
     struct net_device *upper_dev,
     void *upper_priv, void *upper_info,
     struct netlink_ext_ack *extack);
void netdev_upper_dev_unlink(struct net_device *dev,
        struct net_device *upper_dev);
int netdev_adjacent_change_prepare(struct net_device *old_dev,
       struct net_device *new_dev,
       struct net_device *dev,
       struct netlink_ext_ack *extack);
void netdev_adjacent_change_commit(struct net_device *old_dev,
       struct net_device *new_dev,
       struct net_device *dev);
void netdev_adjacent_change_abort(struct net_device *old_dev,
      struct net_device *new_dev,
      struct net_device *dev);
void netdev_adjacent_rename_links(struct net_device *dev, char *oldname);
void *netdev_lower_dev_get_private(struct net_device *dev,
       struct net_device *lower_dev);
void netdev_lower_state_changed(struct net_device *lower_dev,
    void *lower_state_info);



extern u8 netdev_rss_key[52] ;
void netdev_rss_key_fill(void *buffer, size_t len);

int skb_checksum_help(struct sk_buff *skb);
int skb_crc32c_csum_help(struct sk_buff *skb);
int skb_csum_hwoffload_help(struct sk_buff *skb,
       const netdev_features_t features);

struct netdev_bonding_info {
 ifslave slave;
 ifbond master;
};

struct netdev_notifier_bonding_info {
 struct netdev_notifier_info info;
 struct netdev_bonding_info bonding_info;
};

void netdev_bonding_info_change(struct net_device *dev,
    struct netdev_bonding_info *bonding_info);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ethtool_notify(struct net_device *dev, unsigned int cmd,
      const void *data)
{
}


__be16 skb_network_protocol(struct sk_buff *skb, int *depth);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool can_checksum_protocol(netdev_features_t features,
      __be16 protocol)
{
 if (protocol == (( __be16)(__u16)(__builtin_constant_p((0x8906)) ? ((__u16)( (((__u16)((0x8906)) & (__u16)0x00ffU) << 8) | (((__u16)((0x8906)) & (__u16)0xff00U) >> 8))) : __fswab16((0x8906)))))
  return !!(features & ((netdev_features_t)1 << (NETIF_F_FCOE_CRC_BIT)));



 if (features & ((netdev_features_t)1 << (NETIF_F_HW_CSUM_BIT))) {

  return true;
 }

 switch (protocol) {
 case (( __be16)(__u16)(__builtin_constant_p((0x0800)) ? ((__u16)( (((__u16)((0x0800)) & (__u16)0x00ffU) << 8) | (((__u16)((0x0800)) & (__u16)0xff00U) >> 8))) : __fswab16((0x0800)))):
  return !!(features & ((netdev_features_t)1 << (NETIF_F_IP_CSUM_BIT)));
 case (( __be16)(__u16)(__builtin_constant_p((0x86DD)) ? ((__u16)( (((__u16)((0x86DD)) & (__u16)0x00ffU) << 8) | (((__u16)((0x86DD)) & (__u16)0xff00U) >> 8))) : __fswab16((0x86DD)))):
  return !!(features & ((netdev_features_t)1 << (NETIF_F_IPV6_CSUM_BIT)));
 default:
  return false;
 }
}


void netdev_rx_csum_fault(struct net_device *dev, struct sk_buff *skb);







void net_enable_timestamp(void);
void net_disable_timestamp(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t netdev_get_tstamp(struct net_device *dev,
     const struct skb_shared_hwtstamps *hwtstamps,
     bool cycles)
{
 const struct net_device_ops *ops = dev->netdev_ops;

 if (ops->ndo_get_tstamp)
  return ops->ndo_get_tstamp(dev, hwtstamps, cycles);

 return hwtstamps->hwtstamp;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netdev_xmit_set_more(bool more)
{
 ({ __this_cpu_preempt_check("write"); do { do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(softnet_data.xmit.more)) { case 1: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }) = more; } while (0);break; case 2: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }) = more; } while (0);break; case 4: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }) = more; } while (0);break; case 8: do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }) = more; } while (0);break; default: __bad_size_call_parameter();break; } } while (0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netdev_xmit_more(void)
{
 return ({ __this_cpu_preempt_check("read"); ({ typeof(softnet_data.xmit.more) pscr_ret__; do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(softnet_data.xmit.more)) { case 1: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }); }); break; case 2: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }); }); break; case 4: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }); }); break; case 8: pscr_ret__ = ({ *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(softnet_data.xmit.more)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(softnet_data.xmit.more))) *)(&(softnet_data.xmit.more)); }); }); }); break; default: __bad_size_call_parameter(); break; } pscr_ret__; }); });
}
# 4908 "../include/linux/netdevice.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netdev_tx_t __netdev_start_xmit(const struct net_device_ops *ops,
           struct sk_buff *skb, struct net_device *dev,
           bool more)
{
 netdev_xmit_set_more(more);
 return ops->ndo_start_xmit(skb, dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netdev_tx_t netdev_start_xmit(struct sk_buff *skb, struct net_device *dev,
         struct netdev_queue *txq, bool more)
{
 const struct net_device_ops *ops = dev->netdev_ops;
 netdev_tx_t rc;

 rc = __netdev_start_xmit(ops, skb, dev, more);
 if (rc == NETDEV_TX_OK)
  txq_trans_update(txq);

 return rc;
}

int netdev_class_create_file_ns(const struct class_attribute *class_attr,
    const void *ns);
void netdev_class_remove_file_ns(const struct class_attribute *class_attr,
     const void *ns);

extern const struct kobj_ns_type_operations net_ns_type_operations;

const char *netdev_drivername(const struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netdev_features_t netdev_intersect_features(netdev_features_t f1,
         netdev_features_t f2)
{
 if ((f1 ^ f2) & ((netdev_features_t)1 << (NETIF_F_HW_CSUM_BIT))) {
  if (f1 & ((netdev_features_t)1 << (NETIF_F_HW_CSUM_BIT)))
   f1 |= (((netdev_features_t)1 << (NETIF_F_IP_CSUM_BIT))|((netdev_features_t)1 << (NETIF_F_IPV6_CSUM_BIT)));
  else
   f2 |= (((netdev_features_t)1 << (NETIF_F_IP_CSUM_BIT))|((netdev_features_t)1 << (NETIF_F_IPV6_CSUM_BIT)));
 }

 return f1 & f2;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netdev_features_t netdev_get_wanted_features(
 struct net_device *dev)
{
 return (dev->features & ~dev->hw_features) | dev->wanted_features;
}
netdev_features_t netdev_increment_features(netdev_features_t all,
 netdev_features_t one, netdev_features_t mask);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) netdev_features_t netdev_add_tso_features(netdev_features_t features,
       netdev_features_t mask)
{
 return netdev_increment_features(features, (((netdev_features_t)1 << (NETIF_F_TSO_BIT)) | ((netdev_features_t)1 << (NETIF_F_TSO6_BIT)) | ((netdev_features_t)1 << (NETIF_F_TSO_ECN_BIT)) | ((netdev_features_t)1 << (NETIF_F_TSO_MANGLEID_BIT))), mask);
}

int __netdev_update_features(struct net_device *dev);
void netdev_update_features(struct net_device *dev);
void netdev_change_features(struct net_device *dev);

void netif_stacked_transfer_operstate(const struct net_device *rootdev,
     struct net_device *dev);

netdev_features_t passthru_features_check(struct sk_buff *skb,
       struct net_device *dev,
       netdev_features_t features);
netdev_features_t netif_skb_features(struct sk_buff *skb);
void skb_warn_bad_offload(const struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool net_gso_ok(netdev_features_t features, int gso_type)
{
 netdev_features_t feature = (netdev_features_t)gso_type << NETIF_F_GSO_SHIFT;


 do { __attribute__((__noreturn__)) extern void __compiletime_assert_358(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_TCPV4 != (NETIF_F_TSO >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_TCPV4 != (((netdev_features_t)1 << (NETIF_F_TSO_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_358(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_359(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_DODGY != (NETIF_F_GSO_ROBUST >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_DODGY != (((netdev_features_t)1 << (NETIF_F_GSO_ROBUST_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_359(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_360(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_TCP_ECN != (NETIF_F_TSO_ECN >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_TCP_ECN != (((netdev_features_t)1 << (NETIF_F_TSO_ECN_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_360(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_361(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_TCP_FIXEDID != (NETIF_F_TSO_MANGLEID >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_TCP_FIXEDID != (((netdev_features_t)1 << (NETIF_F_TSO_MANGLEID_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_361(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_362(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_TCPV6 != (NETIF_F_TSO6 >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_TCPV6 != (((netdev_features_t)1 << (NETIF_F_TSO6_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_362(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_363(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_FCOE != (NETIF_F_FSO >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_FCOE != (((netdev_features_t)1 << (NETIF_F_FSO_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_363(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_364(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_GRE != (NETIF_F_GSO_GRE >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_GRE != (((netdev_features_t)1 << (NETIF_F_GSO_GRE_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_364(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_365(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_GRE_CSUM != (NETIF_F_GSO_GRE_CSUM >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_GRE_CSUM != (((netdev_features_t)1 << (NETIF_F_GSO_GRE_CSUM_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_365(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_366(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_IPXIP4 != (NETIF_F_GSO_IPXIP4 >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_IPXIP4 != (((netdev_features_t)1 << (NETIF_F_GSO_IPXIP4_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_366(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_367(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_IPXIP6 != (NETIF_F_GSO_IPXIP6 >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_IPXIP6 != (((netdev_features_t)1 << (NETIF_F_GSO_IPXIP6_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_367(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_368(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_UDP_TUNNEL != (NETIF_F_GSO_UDP_TUNNEL >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_UDP_TUNNEL != (((netdev_features_t)1 << (NETIF_F_GSO_UDP_TUNNEL_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_368(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_369(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_UDP_TUNNEL_CSUM != (NETIF_F_GSO_UDP_TUNNEL_CSUM >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_UDP_TUNNEL_CSUM != (((netdev_features_t)1 << (NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_369(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_370(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_PARTIAL != (NETIF_F_GSO_PARTIAL >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_PARTIAL != (((netdev_features_t)1 << (NETIF_F_GSO_PARTIAL_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_370(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_371(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_TUNNEL_REMCSUM != (NETIF_F_GSO_TUNNEL_REMCSUM >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_TUNNEL_REMCSUM != (((netdev_features_t)1 << (NETIF_F_GSO_TUNNEL_REMCSUM_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_371(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_372(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_SCTP != (NETIF_F_GSO_SCTP >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_SCTP != (((netdev_features_t)1 << (NETIF_F_GSO_SCTP_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_372(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_373(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_ESP != (NETIF_F_GSO_ESP >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_ESP != (((netdev_features_t)1 << (NETIF_F_GSO_ESP_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_373(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_374(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_UDP != (NETIF_F_GSO_UDP >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_UDP != (((netdev_features_t)1 << (NETIF_F_GSO_UDP_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_374(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_375(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_UDP_L4 != (NETIF_F_GSO_UDP_L4 >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_UDP_L4 != (((netdev_features_t)1 << (NETIF_F_GSO_UDP_L4_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_375(); } while (0);
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_376(void) __attribute__((__error__("BUILD_BUG_ON failed: " "SKB_GSO_FRAGLIST != (NETIF_F_GSO_FRAGLIST >> NETIF_F_GSO_SHIFT)"))); if (!(!(SKB_GSO_FRAGLIST != (((netdev_features_t)1 << (NETIF_F_GSO_FRAGLIST_BIT)) >> NETIF_F_GSO_SHIFT)))) __compiletime_assert_376(); } while (0);

 return (features & feature) == feature;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_gso_ok(struct sk_buff *skb, netdev_features_t features)
{
 return net_gso_ok(features, ((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_type) &&
        (!skb_has_frag_list(skb) || (features & ((netdev_features_t)1 << (NETIF_F_FRAGLIST_BIT))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_needs_gso(struct sk_buff *skb,
       netdev_features_t features)
{
 return skb_is_gso(skb) && (!skb_gso_ok(skb, features) ||
  __builtin_expect(!!((skb->ip_summed != 3) && (skb->ip_summed != 1)), 0));

}

void netif_set_tso_max_size(struct net_device *dev, unsigned int size);
void netif_set_tso_max_segs(struct net_device *dev, unsigned int segs);
void netif_inherit_tso_max(struct net_device *to,
      const struct net_device *from);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_macsec(const struct net_device *dev)
{
 return dev->priv_flags & IFF_MACSEC;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_macvlan(const struct net_device *dev)
{
 return dev->priv_flags & IFF_MACVLAN;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_macvlan_port(const struct net_device *dev)
{
 return dev->priv_flags & IFF_MACVLAN_PORT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_bond_master(const struct net_device *dev)
{
 return dev->flags & IFF_MASTER && dev->priv_flags & IFF_BONDING;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_bond_slave(const struct net_device *dev)
{
 return dev->flags & IFF_SLAVE && dev->priv_flags & IFF_BONDING;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_supports_nofcs(struct net_device *dev)
{
 return dev->priv_flags & IFF_SUPP_NOFCS;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_has_l3_rx_handler(const struct net_device *dev)
{
 return dev->priv_flags & IFF_L3MDEV_RX_HANDLER;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_l3_master(const struct net_device *dev)
{
 return dev->priv_flags & IFF_L3MDEV_MASTER;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_l3_slave(const struct net_device *dev)
{
 return dev->priv_flags & IFF_L3MDEV_SLAVE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dev_sdif(const struct net_device *dev)
{




 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_bridge_master(const struct net_device *dev)
{
 return dev->priv_flags & IFF_EBRIDGE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_bridge_port(const struct net_device *dev)
{
 return dev->priv_flags & IFF_BRIDGE_PORT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_ovs_master(const struct net_device *dev)
{
 return dev->priv_flags & IFF_OPENVSWITCH;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_ovs_port(const struct net_device *dev)
{
 return dev->priv_flags & IFF_OVS_DATAPATH;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_any_bridge_master(const struct net_device *dev)
{
 return netif_is_bridge_master(dev) || netif_is_ovs_master(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_any_bridge_port(const struct net_device *dev)
{
 return netif_is_bridge_port(dev) || netif_is_ovs_port(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_team_master(const struct net_device *dev)
{
 return dev->priv_flags & IFF_TEAM;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_team_port(const struct net_device *dev)
{
 return dev->priv_flags & IFF_TEAM_PORT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_lag_master(const struct net_device *dev)
{
 return netif_is_bond_master(dev) || netif_is_team_master(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_lag_port(const struct net_device *dev)
{
 return netif_is_bond_slave(dev) || netif_is_team_port(dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_rxfh_configured(const struct net_device *dev)
{
 return dev->priv_flags & IFF_RXFH_CONFIGURED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_failover(const struct net_device *dev)
{
 return dev->priv_flags & IFF_FAILOVER;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_is_failover_slave(const struct net_device *dev)
{
 return dev->priv_flags & IFF_FAILOVER_SLAVE;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void netif_keep_dst(struct net_device *dev)
{
 dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_XMIT_DST_RELEASE_PERM);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_reduces_vlan_mtu(struct net_device *dev)
{

 return netif_is_macsec(dev);
}

extern struct pernet_operations loopback_net_ops;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *netdev_name(const struct net_device *dev)
{
 if (!dev->name[0] || strchr(dev->name, '%'))
  return "(unnamed net_device)";
 return dev->name;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const char *netdev_reg_state(const struct net_device *dev)
{
 u8 reg_state = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_377(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(dev->reg_state) == sizeof(char) || sizeof(dev->reg_state) == sizeof(short) || sizeof(dev->reg_state) == sizeof(int) || sizeof(dev->reg_state) == sizeof(long)) || sizeof(dev->reg_state) == sizeof(long long))) __compiletime_assert_377(); } while (0); (*(const volatile typeof( _Generic((dev->reg_state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (dev->reg_state))) *)&(dev->reg_state)); });

 switch (reg_state) {
 case NETREG_UNINITIALIZED: return " (uninitialized)";
 case NETREG_REGISTERED: return "";
 case NETREG_UNREGISTERING: return " (unregistering)";
 case NETREG_UNREGISTERED: return " (unregistered)";
 case NETREG_RELEASED: return " (released)";
 case NETREG_DUMMY: return " (dummy)";
 }

 ({ bool __ret_do_once = !!(1); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/netdevice.h", 5187, 9, "%s: unknown reg_state %d\n", dev->name, reg_state); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 return " (unknown)";
}
# 5229 "../include/linux/netdevice.h"
extern struct list_head ptype_base[(16)] ;

extern struct net_device *blackhole_netdev;
# 47 "../include/net/sock.h" 2







# 1 "../include/linux/static_key.h" 1
# 55 "../include/net/sock.h" 2




# 1 "../include/linux/rculist_nulls.h" 1
# 33 "../include/linux/rculist_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
{
 if (!hlist_nulls_unhashed(n)) {
  __hlist_nulls_del(n);
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_378(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_378(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (((void *)0)); } while (0); } while (0);
 }
}
# 74 "../include/linux/rculist_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_del_rcu(struct hlist_nulls_node *n)
{
 __hlist_nulls_del(n);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_379(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_379(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (((void *) 0x122 + 0)); } while (0); } while (0);
}
# 99 "../include/linux/rculist_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
     struct hlist_nulls_head *h)
{
 struct hlist_nulls_node *first = h->first;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_380(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next) == sizeof(char) || sizeof(n->next) == sizeof(short) || sizeof(n->next) == sizeof(int) || sizeof(n->next) == sizeof(long)) || sizeof(n->next) == sizeof(long long))) __compiletime_assert_380(); } while (0); do { *(volatile typeof(n->next) *)&(n->next) = (first); } while (0); } while (0);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_381(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->pprev) == sizeof(char) || sizeof(n->pprev) == sizeof(short) || sizeof(n->pprev) == sizeof(int) || sizeof(n->pprev) == sizeof(long)) || sizeof(n->pprev) == sizeof(long long))) __compiletime_assert_381(); } while (0); do { *(volatile typeof(n->pprev) *)&(n->pprev) = (&h->first); } while (0); } while (0);
 do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_382(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_nulls_node **)&(h)->first)))) == sizeof(char) || sizeof(((*((struct hlist_nulls_node **)&(h)->first)))) == sizeof(short) || sizeof(((*((struct hlist_nulls_node **)&(h)->first)))) == sizeof(int) || sizeof(((*((struct hlist_nulls_node **)&(h)->first)))) == sizeof(long)) || sizeof(((*((struct hlist_nulls_node **)&(h)->first)))) == sizeof(long long))) __compiletime_assert_382(); } while (0); do { *(volatile typeof(((*((struct hlist_nulls_node **)&(h)->first)))) *)&(((*((struct hlist_nulls_node **)&(h)->first)))) = ((typeof((*((struct hlist_nulls_node **)&(h)->first))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_383(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_nulls_node **)&(h)->first))) == sizeof(char) || sizeof(*&(*((struct hlist_nulls_node **)&(h)->first))) == sizeof(short) || sizeof(*&(*((struct hlist_nulls_node **)&(h)->first))) == sizeof(int) || sizeof(*&(*((struct hlist_nulls_node **)&(h)->first))) == sizeof(long)) || sizeof(*&(*((struct hlist_nulls_node **)&(h)->first))) == sizeof(long long))) __compiletime_assert_383(); } while (0); do { *(volatile typeof(*&(*((struct hlist_nulls_node **)&(h)->first))) *)&(*&(*((struct hlist_nulls_node **)&(h)->first))) = ((typeof(*((typeof((*((struct hlist_nulls_node **)&(h)->first))))_r_a_p__v)) *)((typeof((*((struct hlist_nulls_node **)&(h)->first))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 if (!is_a_nulls(first))
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_384(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(first->pprev) == sizeof(char) || sizeof(first->pprev) == sizeof(short) || sizeof(first->pprev) == sizeof(int) || sizeof(first->pprev) == sizeof(long)) || sizeof(first->pprev) == sizeof(long long))) __compiletime_assert_384(); } while (0); do { *(volatile typeof(first->pprev) *)&(first->pprev) = (&n->next); } while (0); } while (0);
}
# 130 "../include/linux/rculist_nulls.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
         struct hlist_nulls_head *h)
{
 struct hlist_nulls_node *i, *last = ((void *)0);


 for (i = h->first; !is_a_nulls(i); i = i->next)
  last = i;

 if (last) {
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_385(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(n->next) == sizeof(char) || sizeof(n->next) == sizeof(short) || sizeof(n->next) == sizeof(int) || sizeof(n->next) == sizeof(long)) || sizeof(n->next) == sizeof(long long))) __compiletime_assert_385(); } while (0); do { *(volatile typeof(n->next) *)&(n->next) = (last->next); } while (0); } while (0);
  n->pprev = &last->next;
  do { uintptr_t _r_a_p__v = (uintptr_t)(n); ; if (__builtin_constant_p(n) && (_r_a_p__v) == (uintptr_t)((void *)0)) do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_386(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_nulls_node **)&(last)->next)))) == sizeof(char) || sizeof(((*((struct hlist_nulls_node **)&(last)->next)))) == sizeof(short) || sizeof(((*((struct hlist_nulls_node **)&(last)->next)))) == sizeof(int) || sizeof(((*((struct hlist_nulls_node **)&(last)->next)))) == sizeof(long)) || sizeof(((*((struct hlist_nulls_node **)&(last)->next)))) == sizeof(long long))) __compiletime_assert_386(); } while (0); do { *(volatile typeof(((*((struct hlist_nulls_node **)&(last)->next)))) *)&(((*((struct hlist_nulls_node **)&(last)->next)))) = ((typeof((*((struct hlist_nulls_node **)&(last)->next))))(_r_a_p__v)); } while (0); } while (0); else do { __asm__ __volatile__("": : :"memory"); do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_387(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&(*((struct hlist_nulls_node **)&(last)->next))) == sizeof(char) || sizeof(*&(*((struct hlist_nulls_node **)&(last)->next))) == sizeof(short) || sizeof(*&(*((struct hlist_nulls_node **)&(last)->next))) == sizeof(int) || sizeof(*&(*((struct hlist_nulls_node **)&(last)->next))) == sizeof(long)) || sizeof(*&(*((struct hlist_nulls_node **)&(last)->next))) == sizeof(long long))) __compiletime_assert_387(); } while (0); do { *(volatile typeof(*&(*((struct hlist_nulls_node **)&(last)->next))) *)&(*&(*((struct hlist_nulls_node **)&(last)->next))) = ((typeof(*((typeof((*((struct hlist_nulls_node **)&(last)->next))))_r_a_p__v)) *)((typeof((*((struct hlist_nulls_node **)&(last)->next))))_r_a_p__v)); } while (0); } while (0); } while (0); } while (0);
 } else {
  hlist_nulls_add_head_rcu(n, h);
 }
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hlist_nulls_add_fake(struct hlist_nulls_node *n)
{
 n->pprev = &n->next;
 n->next = (struct hlist_nulls_node *)(1UL | (((long)((void *)0)) << 1));
}
# 60 "../include/net/sock.h" 2
# 1 "../include/linux/poll.h" 1
# 12 "../include/linux/poll.h"
# 1 "../include/uapi/linux/poll.h" 1
# 1 "./arch/hexagon/include/generated/uapi/asm/poll.h" 1
# 1 "../include/uapi/asm-generic/poll.h" 1
# 36 "../include/uapi/asm-generic/poll.h"
struct pollfd {
 int fd;
 short events;
 short revents;
};
# 2 "./arch/hexagon/include/generated/uapi/asm/poll.h" 2
# 2 "../include/uapi/linux/poll.h" 2
# 13 "../include/linux/poll.h" 2
# 1 "../include/uapi/linux/eventpoll.h" 1
# 83 "../include/uapi/linux/eventpoll.h"
struct epoll_event {
 __poll_t events;
 __u64 data;
} ;

struct epoll_params {
 __u32 busy_poll_usecs;
 __u16 busy_poll_budget;
 __u8 prefer_busy_poll;


 __u8 __pad;
};
# 14 "../include/linux/poll.h" 2
# 26 "../include/linux/poll.h"
struct poll_table_struct;




typedef void (*poll_queue_proc)(struct file *, wait_queue_head_t *, struct poll_table_struct *);





typedef struct poll_table_struct {
 poll_queue_proc _qproc;
 __poll_t _key;
} poll_table;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p)
{
 if (p && p->_qproc && wait_address)
  p->_qproc(filp, wait_address, p);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool poll_does_not_wait(const poll_table *p)
{
 return p == ((void *)0) || p->_qproc == ((void *)0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __poll_t poll_requested_events(const poll_table *p)
{
 return p ? p->_key : ~(__poll_t)0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void init_poll_funcptr(poll_table *pt, poll_queue_proc qproc)
{
 pt->_qproc = qproc;
 pt->_key = ~(__poll_t)0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool file_can_poll(struct file *file)
{
 return file->f_op->poll;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt)
{
 if (__builtin_expect(!!(!file->f_op->poll), 0))
  return (( __poll_t)0x00000001 | ( __poll_t)0x00000004 | ( __poll_t)0x00000040 | ( __poll_t)0x00000100);
 return file->f_op->poll(file, pt);
}

struct poll_table_entry {
 struct file *filp;
 __poll_t key;
 wait_queue_entry_t wait;
 wait_queue_head_t *wait_address;
};




struct poll_wqueues {
 poll_table pt;
 struct poll_table_page *table;
 struct task_struct *polling_task;
 int triggered;
 int error;
 int inline_index;
 struct poll_table_entry inline_entries[((832 - 256) / sizeof(struct poll_table_entry))];
};

extern void poll_initwait(struct poll_wqueues *pwq);
extern void poll_freewait(struct poll_wqueues *pwq);
extern u64 select_estimate_accuracy(struct timespec64 *tv);



extern int core_sys_select(int n, fd_set *inp, fd_set *outp,
      fd_set *exp, struct timespec64 *end_time);

extern int poll_select_set_timeout(struct timespec64 *to, time64_t sec,
       long nsec);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u16 mangle_poll(__poll_t val)
{
 __u16 v = ( __u16)val;

 return (( __u16)( __poll_t)0x00000001 < 0x0001 ? (v & ( __u16)( __poll_t)0x00000001) * (0x0001/( __u16)( __poll_t)0x00000001) : (v & ( __u16)( __poll_t)0x00000001) / (( __u16)( __poll_t)0x00000001/0x0001)) | (( __u16)( __poll_t)0x00000004 < 0x0004 ? (v & ( __u16)( __poll_t)0x00000004) * (0x0004/( __u16)( __poll_t)0x00000004) : (v & ( __u16)( __poll_t)0x00000004) / (( __u16)( __poll_t)0x00000004/0x0004)) | (( __u16)( __poll_t)0x00000002 < 0x0002 ? (v & ( __u16)( __poll_t)0x00000002) * (0x0002/( __u16)( __poll_t)0x00000002) : (v & ( __u16)( __poll_t)0x00000002) / (( __u16)( __poll_t)0x00000002/0x0002)) | (( __u16)( __poll_t)0x00000008 < 0x0008 ? (v & ( __u16)( __poll_t)0x00000008) * (0x0008/( __u16)( __poll_t)0x00000008) : (v & ( __u16)( __poll_t)0x00000008) / (( __u16)( __poll_t)0x00000008/0x0008)) | (( __u16)( __poll_t)0x00000020 < 0x0020 ? (v & ( __u16)( __poll_t)0x00000020) * (0x0020/( __u16)( __poll_t)0x00000020) : (v & ( __u16)( __poll_t)0x00000020) / (( __u16)( __poll_t)0x00000020/0x0020)) |
  (( __u16)( __poll_t)0x00000040 < 0x0040 ? (v & ( __u16)( __poll_t)0x00000040) * (0x0040/( __u16)( __poll_t)0x00000040) : (v & ( __u16)( __poll_t)0x00000040) / (( __u16)( __poll_t)0x00000040/0x0040)) | (( __u16)( __poll_t)0x00000080 < 0x0080 ? (v & ( __u16)( __poll_t)0x00000080) * (0x0080/( __u16)( __poll_t)0x00000080) : (v & ( __u16)( __poll_t)0x00000080) / (( __u16)( __poll_t)0x00000080/0x0080)) | (( __u16)( __poll_t)0x00000100 < 0x0100 ? (v & ( __u16)( __poll_t)0x00000100) * (0x0100/( __u16)( __poll_t)0x00000100) : (v & ( __u16)( __poll_t)0x00000100) / (( __u16)( __poll_t)0x00000100/0x0100)) | (( __u16)( __poll_t)0x00000200 < 0x0200 ? (v & ( __u16)( __poll_t)0x00000200) * (0x0200/( __u16)( __poll_t)0x00000200) : (v & ( __u16)( __poll_t)0x00000200) / (( __u16)( __poll_t)0x00000200/0x0200)) |
  (( __u16)( __poll_t)0x00000010 < 0x0010 ? (v & ( __u16)( __poll_t)0x00000010) * (0x0010/( __u16)( __poll_t)0x00000010) : (v & ( __u16)( __poll_t)0x00000010) / (( __u16)( __poll_t)0x00000010/0x0010)) | (( __u16)( __poll_t)0x00002000 < 0x2000 ? (v & ( __u16)( __poll_t)0x00002000) * (0x2000/( __u16)( __poll_t)0x00002000) : (v & ( __u16)( __poll_t)0x00002000) / (( __u16)( __poll_t)0x00002000/0x2000)) | (( __u16)( __poll_t)0x00000400 < 0x0400 ? (v & ( __u16)( __poll_t)0x00000400) * (0x0400/( __u16)( __poll_t)0x00000400) : (v & ( __u16)( __poll_t)0x00000400) / (( __u16)( __poll_t)0x00000400/0x0400));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __poll_t demangle_poll(u16 val)
{

 return ( __poll_t)(0x0001 < ( __u16)( __poll_t)0x00000001 ? (val & 0x0001) * (( __u16)( __poll_t)0x00000001/0x0001) : (val & 0x0001) / (0x0001/( __u16)( __poll_t)0x00000001)) | ( __poll_t)(0x0004 < ( __u16)( __poll_t)0x00000004 ? (val & 0x0004) * (( __u16)( __poll_t)0x00000004/0x0004) : (val & 0x0004) / (0x0004/( __u16)( __poll_t)0x00000004)) | ( __poll_t)(0x0002 < ( __u16)( __poll_t)0x00000002 ? (val & 0x0002) * (( __u16)( __poll_t)0x00000002/0x0002) : (val & 0x0002) / (0x0002/( __u16)( __poll_t)0x00000002)) | ( __poll_t)(0x0008 < ( __u16)( __poll_t)0x00000008 ? (val & 0x0008) * (( __u16)( __poll_t)0x00000008/0x0008) : (val & 0x0008) / (0x0008/( __u16)( __poll_t)0x00000008)) | ( __poll_t)(0x0020 < ( __u16)( __poll_t)0x00000020 ? (val & 0x0020) * (( __u16)( __poll_t)0x00000020/0x0020) : (val & 0x0020) / (0x0020/( __u16)( __poll_t)0x00000020)) |
  ( __poll_t)(0x0040 < ( __u16)( __poll_t)0x00000040 ? (val & 0x0040) * (( __u16)( __poll_t)0x00000040/0x0040) : (val & 0x0040) / (0x0040/( __u16)( __poll_t)0x00000040)) | ( __poll_t)(0x0080 < ( __u16)( __poll_t)0x00000080 ? (val & 0x0080) * (( __u16)( __poll_t)0x00000080/0x0080) : (val & 0x0080) / (0x0080/( __u16)( __poll_t)0x00000080)) | ( __poll_t)(0x0100 < ( __u16)( __poll_t)0x00000100 ? (val & 0x0100) * (( __u16)( __poll_t)0x00000100/0x0100) : (val & 0x0100) / (0x0100/( __u16)( __poll_t)0x00000100)) | ( __poll_t)(0x0200 < ( __u16)( __poll_t)0x00000200 ? (val & 0x0200) * (( __u16)( __poll_t)0x00000200/0x0200) : (val & 0x0200) / (0x0200/( __u16)( __poll_t)0x00000200)) |
  ( __poll_t)(0x0010 < ( __u16)( __poll_t)0x00000010 ? (val & 0x0010) * (( __u16)( __poll_t)0x00000010/0x0010) : (val & 0x0010) / (0x0010/( __u16)( __poll_t)0x00000010)) | ( __poll_t)(0x2000 < ( __u16)( __poll_t)0x00002000 ? (val & 0x2000) * (( __u16)( __poll_t)0x00002000/0x2000) : (val & 0x2000) / (0x2000/( __u16)( __poll_t)0x00002000)) | ( __poll_t)(0x0400 < ( __u16)( __poll_t)0x00000400 ? (val & 0x0400) * (( __u16)( __poll_t)0x00000400/0x0400) : (val & 0x0400) / (0x0400/( __u16)( __poll_t)0x00000400));

}
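/*
 * Editorial note (assumption, based on include/linux/poll.h): the two
 * functions above appear to be the preprocessed expansion of the
 * __MAP()/M(X) macros that translate each kernel-internal __poll_t
 * event bit to its fixed userspace __u16 position and back.  Each
 * ternary handles one bit: when the source bit value is smaller than
 * the destination value the bit is moved up by multiplication,
 * otherwise down by division.  For a single bit pair the pattern is
 * roughly:
 *
 *     out |= (src_bit < dst_bit)
 *                ? (v & src_bit) * (dst_bit / src_bit)
 *                : (v & src_bit) / (src_bit / dst_bit);
 *
 * On this configuration every kernel bit already equals its userspace
 * value, so each branch reduces to (v & bit) and the compiler can fold
 * the whole expression into a plain mask.
 */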
# 61 "../include/net/sock.h" 2

# 1 "../include/linux/indirect_call_wrapper.h" 1
# 63 "../include/net/sock.h" 2



# 1 "../include/net/dst.h" 1
# 14 "../include/net/dst.h"
# 1 "../include/linux/rtnetlink.h" 1
# 11 "../include/linux/rtnetlink.h"
# 1 "../include/uapi/linux/rtnetlink.h" 1







# 1 "../include/uapi/linux/if_addr.h" 1







struct ifaddrmsg {
 __u8 ifa_family;
 __u8 ifa_prefixlen;
 __u8 ifa_flags;
 __u8 ifa_scope;
 __u32 ifa_index;
};
# 26 "../include/uapi/linux/if_addr.h"
enum {
 IFA_UNSPEC,
 IFA_ADDRESS,
 IFA_LOCAL,
 IFA_LABEL,
 IFA_BROADCAST,
 IFA_ANYCAST,
 IFA_CACHEINFO,
 IFA_MULTICAST,
 IFA_FLAGS,
 IFA_RT_PRIORITY,
 IFA_TARGET_NETNSID,
 IFA_PROTO,
 __IFA_MAX,
};
# 60 "../include/uapi/linux/if_addr.h"
struct ifa_cacheinfo {
 __u32 ifa_prefered;
 __u32 ifa_valid;
 __u32 cstamp;
 __u32 tstamp;
};
# 9 "../include/uapi/linux/rtnetlink.h" 2
# 24 "../include/uapi/linux/rtnetlink.h"
enum {
 RTM_BASE = 16,


 RTM_NEWLINK = 16,

 RTM_DELLINK,

 RTM_GETLINK,

 RTM_SETLINK,


 RTM_NEWADDR = 20,

 RTM_DELADDR,

 RTM_GETADDR,


 RTM_NEWROUTE = 24,

 RTM_DELROUTE,

 RTM_GETROUTE,


 RTM_NEWNEIGH = 28,

 RTM_DELNEIGH,

 RTM_GETNEIGH,


 RTM_NEWRULE = 32,

 RTM_DELRULE,

 RTM_GETRULE,


 RTM_NEWQDISC = 36,

 RTM_DELQDISC,

 RTM_GETQDISC,


 RTM_NEWTCLASS = 40,

 RTM_DELTCLASS,

 RTM_GETTCLASS,


 RTM_NEWTFILTER = 44,

 RTM_DELTFILTER,

 RTM_GETTFILTER,


 RTM_NEWACTION = 48,

 RTM_DELACTION,

 RTM_GETACTION,


 RTM_NEWPREFIX = 52,


 RTM_GETMULTICAST = 58,


 RTM_GETANYCAST = 62,


 RTM_NEWNEIGHTBL = 64,

 RTM_GETNEIGHTBL = 66,

 RTM_SETNEIGHTBL,


 RTM_NEWNDUSEROPT = 68,


 RTM_NEWADDRLABEL = 72,

 RTM_DELADDRLABEL,

 RTM_GETADDRLABEL,


 RTM_GETDCB = 78,

 RTM_SETDCB,


 RTM_NEWNETCONF = 80,

 RTM_DELNETCONF,

 RTM_GETNETCONF = 82,


 RTM_NEWMDB = 84,

 RTM_DELMDB = 85,

 RTM_GETMDB = 86,


 RTM_NEWNSID = 88,

 RTM_DELNSID = 89,

 RTM_GETNSID = 90,


 RTM_NEWSTATS = 92,

 RTM_GETSTATS = 94,

 RTM_SETSTATS,


 RTM_NEWCACHEREPORT = 96,


 RTM_NEWCHAIN = 100,

 RTM_DELCHAIN,

 RTM_GETCHAIN,


 RTM_NEWNEXTHOP = 104,

 RTM_DELNEXTHOP,

 RTM_GETNEXTHOP,


 RTM_NEWLINKPROP = 108,

 RTM_DELLINKPROP,

 RTM_GETLINKPROP,


 RTM_NEWVLAN = 112,

 RTM_DELVLAN,

 RTM_GETVLAN,


 RTM_NEWNEXTHOPBUCKET = 116,

 RTM_DELNEXTHOPBUCKET,

 RTM_GETNEXTHOPBUCKET,


 RTM_NEWTUNNEL = 120,

 RTM_DELTUNNEL,

 RTM_GETTUNNEL,


 __RTM_MAX,

};
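/*
 * Editorial note: the RTM_* values above are laid out in strides of
 * four (NEW/DEL/GET/SET per object family) starting at RTM_BASE == 16,
 * which is why several entries pin explicit values (RTM_NEWADDR = 20,
 * RTM_NEWROUTE = 24, ...) and why gaps remain where a family defines
 * fewer than four operations.
 */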
# 211 "../include/uapi/linux/rtnetlink.h"
struct rtattr {
 unsigned short rta_len;
 unsigned short rta_type;
};
# 237 "../include/uapi/linux/rtnetlink.h"
struct rtmsg {
 unsigned char rtm_family;
 unsigned char rtm_dst_len;
 unsigned char rtm_src_len;
 unsigned char rtm_tos;

 unsigned char rtm_table;
 unsigned char rtm_protocol;
 unsigned char rtm_scope;
 unsigned char rtm_type;

 unsigned rtm_flags;
};



enum {
 RTN_UNSPEC,
 RTN_UNICAST,
 RTN_LOCAL,
 RTN_BROADCAST,

 RTN_ANYCAST,

 RTN_MULTICAST,
 RTN_BLACKHOLE,
 RTN_UNREACHABLE,
 RTN_PROHIBIT,
 RTN_THROW,
 RTN_NAT,
 RTN_XRESOLVE,
 __RTN_MAX
};
# 320 "../include/uapi/linux/rtnetlink.h"
enum rt_scope_t {
 RT_SCOPE_UNIVERSE=0,

 RT_SCOPE_SITE=200,
 RT_SCOPE_LINK=253,
 RT_SCOPE_HOST=254,
 RT_SCOPE_NOWHERE=255
};
# 347 "../include/uapi/linux/rtnetlink.h"
enum rt_class_t {
 RT_TABLE_UNSPEC=0,

 RT_TABLE_COMPAT=252,
 RT_TABLE_DEFAULT=253,
 RT_TABLE_MAIN=254,
 RT_TABLE_LOCAL=255,
 RT_TABLE_MAX=0xFFFFFFFF
};




enum rtattr_type_t {
 RTA_UNSPEC,
 RTA_DST,
 RTA_SRC,
 RTA_IIF,
 RTA_OIF,
 RTA_GATEWAY,
 RTA_PRIORITY,
 RTA_PREFSRC,
 RTA_METRICS,
 RTA_MULTIPATH,
 RTA_PROTOINFO,
 RTA_FLOW,
 RTA_CACHEINFO,
 RTA_SESSION,
 RTA_MP_ALGO,
 RTA_TABLE,
 RTA_MARK,
 RTA_MFC_STATS,
 RTA_VIA,
 RTA_NEWDST,
 RTA_PREF,
 RTA_ENCAP_TYPE,
 RTA_ENCAP,
 RTA_EXPIRES,
 RTA_PAD,
 RTA_UID,
 RTA_TTL_PROPAGATE,
 RTA_IP_PROTO,
 RTA_SPORT,
 RTA_DPORT,
 RTA_NH_ID,
 __RTA_MAX
};
# 409 "../include/uapi/linux/rtnetlink.h"
struct rtnexthop {
 unsigned short rtnh_len;
 unsigned char rtnh_flags;
 unsigned char rtnh_hops;
 int rtnh_ifindex;
};
# 441 "../include/uapi/linux/rtnetlink.h"
struct rtvia {
 __kernel_sa_family_t rtvia_family;
 __u8 rtvia_addr[];
};



struct rta_cacheinfo {
 __u32 rta_clntref;
 __u32 rta_lastuse;
 __s32 rta_expires;
 __u32 rta_error;
 __u32 rta_used;


 __u32 rta_id;
 __u32 rta_ts;
 __u32 rta_tsage;
};



enum {
 RTAX_UNSPEC,

 RTAX_LOCK,

 RTAX_MTU,

 RTAX_WINDOW,

 RTAX_RTT,

 RTAX_RTTVAR,

 RTAX_SSTHRESH,

 RTAX_CWND,

 RTAX_ADVMSS,

 RTAX_REORDERING,

 RTAX_HOPLIMIT,

 RTAX_INITCWND,

 RTAX_FEATURES,

 RTAX_RTO_MIN,

 RTAX_INITRWND,

 RTAX_QUICKACK,

 RTAX_CC_ALGO,

 RTAX_FASTOPEN_NO_COOKIE,

 __RTAX_MAX
};
# 517 "../include/uapi/linux/rtnetlink.h"
struct rta_session {
 __u8 proto;
 __u8 pad1;
 __u16 pad2;

 union {
  struct {
   __u16 sport;
   __u16 dport;
  } ports;

  struct {
   __u8 type;
   __u8 code;
   __u16 ident;
  } icmpt;

  __u32 spi;
 } u;
};

struct rta_mfc_stats {
 __u64 mfcs_packets;
 __u64 mfcs_bytes;
 __u64 mfcs_wrong_if;
};





struct rtgenmsg {
 unsigned char rtgen_family;
};
# 561 "../include/uapi/linux/rtnetlink.h"
struct ifinfomsg {
 unsigned char ifi_family;
 unsigned char __ifi_pad;
 unsigned short ifi_type;
 int ifi_index;
 unsigned ifi_flags;
 unsigned ifi_change;
};





struct prefixmsg {
 unsigned char prefix_family;
 unsigned char prefix_pad1;
 unsigned short prefix_pad2;
 int prefix_ifindex;
 unsigned char prefix_type;
 unsigned char prefix_len;
 unsigned char prefix_flags;
 unsigned char prefix_pad3;
};

enum
{
 PREFIX_UNSPEC,
 PREFIX_ADDRESS,
 PREFIX_CACHEINFO,
 __PREFIX_MAX
};



struct prefix_cacheinfo {
 __u32 preferred_time;
 __u32 valid_time;
};






struct tcmsg {
 unsigned char tcm_family;
 unsigned char tcm__pad1;
 unsigned short tcm__pad2;
 int tcm_ifindex;
 __u32 tcm_handle;
 __u32 tcm_parent;




 __u32 tcm_info;
};







enum {
 TCA_UNSPEC,
 TCA_KIND,
 TCA_OPTIONS,
 TCA_STATS,
 TCA_XSTATS,
 TCA_RATE,
 TCA_FCNT,
 TCA_STATS2,
 TCA_STAB,
 TCA_PAD,
 TCA_DUMP_INVISIBLE,
 TCA_CHAIN,
 TCA_HW_OFFLOAD,
 TCA_INGRESS_BLOCK,
 TCA_EGRESS_BLOCK,
 TCA_DUMP_FLAGS,
 TCA_EXT_WARN_MSG,
 __TCA_MAX
};
# 660 "../include/uapi/linux/rtnetlink.h"
struct nduseroptmsg {
 unsigned char nduseropt_family;
 unsigned char nduseropt_pad1;
 unsigned short nduseropt_opts_len;
 int nduseropt_ifindex;
 __u8 nduseropt_icmp_type;
 __u8 nduseropt_icmp_code;
 unsigned short nduseropt_pad2;
 unsigned int nduseropt_pad3;

};

enum {
 NDUSEROPT_UNSPEC,
 NDUSEROPT_SRCADDR,
 __NDUSEROPT_MAX
};
# 704 "../include/uapi/linux/rtnetlink.h"
enum rtnetlink_groups {
 RTNLGRP_NONE,

 RTNLGRP_LINK,

 RTNLGRP_NOTIFY,

 RTNLGRP_NEIGH,

 RTNLGRP_TC,

 RTNLGRP_IPV4_IFADDR,

 RTNLGRP_IPV4_MROUTE,

 RTNLGRP_IPV4_ROUTE,

 RTNLGRP_IPV4_RULE,

 RTNLGRP_IPV6_IFADDR,

 RTNLGRP_IPV6_MROUTE,

 RTNLGRP_IPV6_ROUTE,

 RTNLGRP_IPV6_IFINFO,

 RTNLGRP_DECnet_IFADDR,

 RTNLGRP_NOP2,
 RTNLGRP_DECnet_ROUTE,

 RTNLGRP_DECnet_RULE,

 RTNLGRP_NOP4,
 RTNLGRP_IPV6_PREFIX,

 RTNLGRP_IPV6_RULE,

 RTNLGRP_ND_USEROPT,

 RTNLGRP_PHONET_IFADDR,

 RTNLGRP_PHONET_ROUTE,

 RTNLGRP_DCB,

 RTNLGRP_IPV4_NETCONF,

 RTNLGRP_IPV6_NETCONF,

 RTNLGRP_MDB,

 RTNLGRP_MPLS_ROUTE,

 RTNLGRP_NSID,

 RTNLGRP_MPLS_NETCONF,

 RTNLGRP_IPV4_MROUTE_R,

 RTNLGRP_IPV6_MROUTE_R,

 RTNLGRP_NEXTHOP,

 RTNLGRP_BRVLAN,

 RTNLGRP_MCTP_IFADDR,

 RTNLGRP_TUNNEL,

 RTNLGRP_STATS,

 __RTNLGRP_MAX
};



struct tcamsg {
 unsigned char tca_family;
 unsigned char tca__pad1;
 unsigned short tca__pad2;
};

enum {
 TCA_ROOT_UNSPEC,
 TCA_ROOT_TAB,


 TCA_ROOT_FLAGS,
 TCA_ROOT_COUNT,
 TCA_ROOT_TIME_DELTA,
 TCA_ROOT_EXT_WARN_MSG,
 __TCA_ROOT_MAX,

};
# 12 "../include/linux/rtnetlink.h" 2

extern int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid, u32 group, int echo);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rtnetlink_maybe_send(struct sk_buff *skb, struct net *net,
           u32 pid, u32 group, int echo)
{
 return !skb ? 0 : rtnetlink_send(skb, net, pid, group, echo);
}

extern int rtnl_unicast(struct sk_buff *skb, struct net *net, u32 pid);
extern void rtnl_notify(struct sk_buff *skb, struct net *net, u32 pid,
   u32 group, const struct nlmsghdr *nlh, gfp_t flags);
extern void rtnl_set_sk_err(struct net *net, u32 group, int error);
extern int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics);
extern int rtnl_put_cacheinfo(struct sk_buff *skb, struct dst_entry *dst,
         u32 id, long expires, u32 error);

void rtmsg_ifinfo(int type, struct net_device *dev, unsigned int change, gfp_t flags,
    u32 portid, const struct nlmsghdr *nlh);
void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change,
    gfp_t flags, int *new_nsid, int new_ifindex);
struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
           unsigned change, u32 event,
           gfp_t flags, int *new_nsid,
           int new_ifindex, u32 portid,
           const struct nlmsghdr *nlh);
void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev,
         gfp_t flags, u32 portid, const struct nlmsghdr *nlh);



extern void rtnl_lock(void);
extern void rtnl_unlock(void);
extern int rtnl_trylock(void);
extern int rtnl_is_locked(void);
extern int rtnl_lock_killable(void);
extern bool refcount_dec_and_rtnl_lock(refcount_t *r);

typedef struct { void *lock; ; } class_rtnl_t; static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void class_rtnl_destructor(class_rtnl_t *_T) { if (_T->lock) { rtnl_unlock(); } } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *class_rtnl_lock_ptr(class_rtnl_t *_T) { return _T->lock; } static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) class_rtnl_t class_rtnl_constructor(void) { class_rtnl_t _t = { .lock = (void*)1 }, *_T __attribute__((__unused__)) = &_t; rtnl_lock(); return _t; }

extern wait_queue_head_t netdev_unregistering_wq;
extern atomic_t dev_unreg_count;
extern struct rw_semaphore pernet_ops_rwsem;
extern struct rw_semaphore net_rwsem;


extern bool lockdep_rtnl_is_held(void);
# 98 "../include/linux/rtnetlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct netdev_queue *dev_ingress_queue(struct net_device *dev)
{
 return ({ do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lockdep_rtnl_is_held()))) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rtnetlink.h", 100, "suspicious rcu_dereference_protected() usage"); } } while (0); ; ((typeof(*(dev->ingress_queue)) *)((dev->ingress_queue))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct netdev_queue *dev_ingress_queue_rcu(struct net_device *dev)
{
 return ({ typeof(*(dev->ingress_queue)) *__UNIQUE_ID_rcu388 = (typeof(*(dev->ingress_queue)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_389(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((dev->ingress_queue)) == sizeof(char) || sizeof((dev->ingress_queue)) == sizeof(short) || sizeof((dev->ingress_queue)) == sizeof(int) || sizeof((dev->ingress_queue)) == sizeof(long)) || sizeof((dev->ingress_queue)) == sizeof(long long))) __compiletime_assert_389(); } while (0); (*(const volatile typeof( _Generic(((dev->ingress_queue)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((dev->ingress_queue)))) *)&((dev->ingress_queue))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rtnetlink.h", 105, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(dev->ingress_queue)) *)(__UNIQUE_ID_rcu388)); });
}

struct netdev_queue *dev_ingress_queue_create(struct net_device *dev);


void net_inc_ingress_queue(void);
void net_dec_ingress_queue(void);



void net_inc_egress_queue(void);
void net_dec_egress_queue(void);
void netdev_xmit_skip_txqueue(bool skip);


void rtnetlink_init(void);
void __rtnl_unlock(void);
void rtnl_kfree_skbs(struct sk_buff *head, struct sk_buff *tail);





extern int ndo_dflt_fdb_dump(struct sk_buff *skb,
        struct netlink_callback *cb,
        struct net_device *dev,
        struct net_device *filter_dev,
        int *idx);
extern int ndo_dflt_fdb_add(struct ndmsg *ndm,
       struct nlattr *tb[],
       struct net_device *dev,
       const unsigned char *addr,
       u16 vid,
       u16 flags);
extern int ndo_dflt_fdb_del(struct ndmsg *ndm,
       struct nlattr *tb[],
       struct net_device *dev,
       const unsigned char *addr,
       u16 vid);

extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
       struct net_device *dev, u16 mode,
       u32 flags, u32 mask, int nlflags,
       u32 filter_mask,
       int (*vlan_fill)(struct sk_buff *skb,
          struct net_device *dev,
          u32 filter_mask));

extern void rtnl_offload_xstats_notify(struct net_device *dev);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rtnl_has_listeners(const struct net *net, u32 group)
{
 struct sock *rtnl = net->rtnl;

 return netlink_has_listeners(rtnl, group);
}
# 172 "../include/linux/rtnetlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
rtnl_notify_needed(const struct net *net, u16 nlflags, u32 group)
{
 return (nlflags & 0x08) || rtnl_has_listeners(net, group);
}

void netdev_set_operstate(struct net_device *dev, int newstate);
# 15 "../include/net/dst.h" 2




# 1 "../include/linux/rcuref.h" 1
# 24 "../include/linux/rcuref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rcuref_init(rcuref_t *ref, unsigned int cnt)
{
 atomic_set(&ref->refcnt, cnt - 1);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int rcuref_read(rcuref_t *ref)
{
 unsigned int c = atomic_read(&ref->refcnt);


 return c >= 0xC0000000U ? 0 : c + 1;
}

extern __attribute__((__warn_unused_result__)) bool rcuref_get_slowpath(rcuref_t *ref);
# 61 "../include/linux/rcuref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool rcuref_get(rcuref_t *ref)
{




 if (__builtin_expect(!!(!atomic_add_negative_relaxed(1, &ref->refcnt)), 1))
  return true;


 return rcuref_get_slowpath(ref);
}

extern __attribute__((__warn_unused_result__)) bool rcuref_put_slowpath(rcuref_t *ref);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) __attribute__((__warn_unused_result__)) bool __rcuref_put(rcuref_t *ref)
{
 do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!rcu_read_lock_held() && 0) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/linux/rcuref.h", 82, "suspicious rcuref_put_rcusafe() usage"); } } while (0);





 if (__builtin_expect(!!(!atomic_add_negative_release(-1, &ref->refcnt)), 1))
  return false;





 return rcuref_put_slowpath(ref);
}
# 119 "../include/linux/rcuref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool rcuref_put_rcusafe(rcuref_t *ref)
{
 return __rcuref_put(ref);
}
# 145 "../include/linux/rcuref.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool rcuref_put(rcuref_t *ref)
{
 bool released;

 __asm__ __volatile__("": : :"memory");
 released = __rcuref_put(ref);
 __asm__ __volatile__("": : :"memory");
 return released;
}
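/*
 * Editorial note (assumption): rcuref_t stores its count biased by -1,
 * as rcuref_init() above shows (atomic_set(&ref->refcnt, cnt - 1)).
 * One reference is therefore 0, and dropping the last reference makes
 * the counter negative, which is why rcuref_get()/__rcuref_put() can
 * use atomic_add_negative*() as their fast path and only fall back to
 * the *_slowpath() helpers when the sign bit flips, i.e. on saturation
 * or on the final put.  rcuref_read() undoes the bias (c + 1) and
 * clamps values at or above 0xC0000000 (the saturated/dead zone) to 0.
 */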
# 20 "../include/net/dst.h" 2
# 1 "../include/net/neighbour.h" 1
# 31 "../include/net/neighbour.h"
# 1 "../include/net/rtnetlink.h" 1





# 1 "../include/net/netlink.h" 1
# 172 "../include/net/netlink.h"
enum {
 NLA_UNSPEC,
 NLA_U8,
 NLA_U16,
 NLA_U32,
 NLA_U64,
 NLA_STRING,
 NLA_FLAG,
 NLA_MSECS,
 NLA_NESTED,
 NLA_NESTED_ARRAY,
 NLA_NUL_STRING,
 NLA_BINARY,
 NLA_S8,
 NLA_S16,
 NLA_S32,
 NLA_S64,
 NLA_BITFIELD32,
 NLA_REJECT,
 NLA_BE16,
 NLA_BE32,
 NLA_SINT,
 NLA_UINT,
 __NLA_TYPE_MAX,
};



struct netlink_range_validation {
 u64 min, max;
};

struct netlink_range_validation_signed {
 s64 min, max;
};

enum nla_policy_validation {
 NLA_VALIDATE_NONE,
 NLA_VALIDATE_RANGE,
 NLA_VALIDATE_RANGE_WARN_TOO_LONG,
 NLA_VALIDATE_MIN,
 NLA_VALIDATE_MAX,
 NLA_VALIDATE_MASK,
 NLA_VALIDATE_RANGE_PTR,
 NLA_VALIDATE_FUNCTION,
};
# 335 "../include/net/netlink.h"
struct nla_policy {
 u8 type;
 u8 validation_type;
 u16 len;
 union {
# 361 "../include/net/netlink.h"
  u16 strict_start_type;


  const u32 bitfield32_valid;
  const u32 mask;
  const char *reject_message;
  const struct nla_policy *nested_policy;
  const struct netlink_range_validation *range;
  const struct netlink_range_validation_signed *range_signed;
  struct {
   s16 min, max;
  };
  int (*validate)(const struct nlattr *attr,
    struct netlink_ext_ack *extack);
 };
};
# 481 "../include/net/netlink.h"
struct nl_info {
 struct nlmsghdr *nlh;
 struct net *nl_net;
 u32 portid;
 u8 skip_notify:1,
    skip_notify_kernel:1;
};
# 506 "../include/net/netlink.h"
enum netlink_validation {
 NL_VALIDATE_LIBERAL = 0,
 NL_VALIDATE_TRAILING = ((((1UL))) << (0)),
 NL_VALIDATE_MAXTYPE = ((((1UL))) << (1)),
 NL_VALIDATE_UNSPEC = ((((1UL))) << (2)),
 NL_VALIDATE_STRICT_ATTRS = ((((1UL))) << (3)),
 NL_VALIDATE_NESTED = ((((1UL))) << (4)),
};
# 523 "../include/net/netlink.h"
int netlink_rcv_skb(struct sk_buff *skb,
      int (*cb)(struct sk_buff *, struct nlmsghdr *,
         struct netlink_ext_ack *));
int nlmsg_notify(struct sock *sk, struct sk_buff *skb, u32 portid,
   unsigned int group, int report, gfp_t flags);

int __nla_validate(const struct nlattr *head, int len, int maxtype,
     const struct nla_policy *policy, unsigned int validate,
     struct netlink_ext_ack *extack);
int __nla_parse(struct nlattr **tb, int maxtype, const struct nlattr *head,
  int len, const struct nla_policy *policy, unsigned int validate,
  struct netlink_ext_ack *extack);
int nla_policy_len(const struct nla_policy *, int);
struct nlattr *nla_find(const struct nlattr *head, int len, int attrtype);
ssize_t nla_strscpy(char *dst, const struct nlattr *nla, size_t dstsize);
char *nla_strdup(const struct nlattr *nla, gfp_t flags);
int nla_memcpy(void *dest, const struct nlattr *src, int count);
int nla_memcmp(const struct nlattr *nla, const void *data, size_t size);
int nla_strcmp(const struct nlattr *nla, const char *str);
struct nlattr *__nla_reserve(struct sk_buff *skb, int attrtype, int attrlen);
struct nlattr *__nla_reserve_64bit(struct sk_buff *skb, int attrtype,
       int attrlen, int padattr);
void *__nla_reserve_nohdr(struct sk_buff *skb, int attrlen);
struct nlattr *nla_reserve(struct sk_buff *skb, int attrtype, int attrlen);
struct nlattr *nla_reserve_64bit(struct sk_buff *skb, int attrtype,
     int attrlen, int padattr);
void *nla_reserve_nohdr(struct sk_buff *skb, int attrlen);
void __nla_put(struct sk_buff *skb, int attrtype, int attrlen,
        const void *data);
void __nla_put_64bit(struct sk_buff *skb, int attrtype, int attrlen,
       const void *data, int padattr);
void __nla_put_nohdr(struct sk_buff *skb, int attrlen, const void *data);
int nla_put(struct sk_buff *skb, int attrtype, int attrlen, const void *data);
int nla_put_64bit(struct sk_buff *skb, int attrtype, int attrlen,
    const void *data, int padattr);
int nla_put_nohdr(struct sk_buff *skb, int attrlen, const void *data);
int nla_append(struct sk_buff *skb, int attrlen, const void *data);
# 569 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_msg_size(int payload)
{
 return ((int) ( ((sizeof(struct nlmsghdr))+4U -1) & ~(4U -1) )) + payload;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_total_size(int payload)
{
 return ( ((nlmsg_msg_size(payload))+4U -1) & ~(4U -1) );
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_padlen(int payload)
{
 return nlmsg_total_size(payload) - nlmsg_msg_size(payload);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *nlmsg_data(const struct nlmsghdr *nlh)
{
 return (unsigned char *) nlh + ((int) ( ((sizeof(struct nlmsghdr))+4U -1) & ~(4U -1) ));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_len(const struct nlmsghdr *nlh)
{
 return nlh->nlmsg_len - ((int) ( ((sizeof(struct nlmsghdr))+4U -1) & ~(4U -1) ));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *nlmsg_attrdata(const struct nlmsghdr *nlh,
         int hdrlen)
{
 unsigned char *data = nlmsg_data(nlh);
 return (struct nlattr *) (data + ( ((hdrlen)+4U -1) & ~(4U -1) ));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_attrlen(const struct nlmsghdr *nlh, int hdrlen)
{
 return nlmsg_len(nlh) - ( ((hdrlen)+4U -1) & ~(4U -1) );
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_ok(const struct nlmsghdr *nlh, int remaining)
{
 return (remaining >= (int) sizeof(struct nlmsghdr) &&
  nlh->nlmsg_len >= sizeof(struct nlmsghdr) &&
  nlh->nlmsg_len <= remaining);
}
# 652 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlmsghdr *
nlmsg_next(const struct nlmsghdr *nlh, int *remaining)
{
 int totlen = ( ((nlh->nlmsg_len)+4U -1) & ~(4U -1) );

 *remaining -= totlen;

 return (struct nlmsghdr *) ((unsigned char *) nlh + totlen);
}
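/*
 * Editorial note: the recurring "(len + 4U - 1) & ~(4U - 1)" in the
 * nlmsg_*() helpers above is the expansion of NLMSG_ALIGN(), which
 * rounds a length up to the 4-byte NLMSG_ALIGNTO boundary, e.g.
 * 13 -> 16 while 16 stays 16.  nlmsg_msg_size() is the aligned header
 * plus payload, nlmsg_total_size() aligns that sum again so the next
 * message starts aligned, and nlmsg_next() advances by the aligned
 * nlmsg_len while decrementing the caller's remaining-byte count.
 */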
# 678 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_parse(struct nlattr **tb, int maxtype,
       const struct nlattr *head, int len,
       const struct nla_policy *policy,
       struct netlink_ext_ack *extack)
{
 return __nla_parse(tb, maxtype, head, len, policy,
      (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE | NL_VALIDATE_UNSPEC | NL_VALIDATE_STRICT_ATTRS | NL_VALIDATE_NESTED), extack);
}
# 703 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_parse_deprecated(struct nlattr **tb, int maxtype,
           const struct nlattr *head, int len,
           const struct nla_policy *policy,
           struct netlink_ext_ack *extack)
{
 return __nla_parse(tb, maxtype, head, len, policy,
      NL_VALIDATE_LIBERAL, extack);
}
# 728 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_parse_deprecated_strict(struct nlattr **tb, int maxtype,
           const struct nlattr *head,
           int len,
           const struct nla_policy *policy,
           struct netlink_ext_ack *extack)
{
 return __nla_parse(tb, maxtype, head, len, policy,
      (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE), extack);
}
# 750 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
    struct nlattr *tb[], int maxtype,
    const struct nla_policy *policy,
    unsigned int validate,
    struct netlink_ext_ack *extack)
{
 if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen)) {
  do { static const char __msg[] = "Invalid header length"; struct netlink_ext_ack *__extack = (extack); do_trace_netlink_extack(__msg); if (__extack) __extack->_msg = __msg; } while (0);
  return -22;
 }

 return __nla_parse(tb, maxtype, nlmsg_attrdata(nlh, hdrlen),
      nlmsg_attrlen(nlh, hdrlen), policy, validate,
      extack);
}
# 777 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
         struct nlattr *tb[], int maxtype,
         const struct nla_policy *policy,
         struct netlink_ext_ack *extack)
{
 return __nlmsg_parse(nlh, hdrlen, tb, maxtype, policy,
        (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE | NL_VALIDATE_UNSPEC | NL_VALIDATE_STRICT_ATTRS | NL_VALIDATE_NESTED), extack);
}
# 797 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_parse_deprecated(const struct nlmsghdr *nlh, int hdrlen,
      struct nlattr *tb[], int maxtype,
      const struct nla_policy *policy,
      struct netlink_ext_ack *extack)
{
 return __nlmsg_parse(nlh, hdrlen, tb, maxtype, policy,
        NL_VALIDATE_LIBERAL, extack);
}
# 817 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
nlmsg_parse_deprecated_strict(const struct nlmsghdr *nlh, int hdrlen,
         struct nlattr *tb[], int maxtype,
         const struct nla_policy *policy,
         struct netlink_ext_ack *extack)
{
 return __nlmsg_parse(nlh, hdrlen, tb, maxtype, policy,
        (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE), extack);
}
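/*
 * Editorial note: the nla_parse*()/nlmsg_parse*() variants above
 * differ only in the validation-flag set passed down to
 * __nla_parse()/__nla_validate().  The plain variants request strict
 * validation (all NL_VALIDATE_* bits), the *_deprecated variants pass
 * NL_VALIDATE_LIBERAL (0) for old commands whose ABI tolerated
 * trailing bytes and unknown attribute types, and *_deprecated_strict
 * adds back only NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE.
 */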
# 835 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *nlmsg_find_attr(const struct nlmsghdr *nlh,
          int hdrlen, int attrtype)
{
 return nla_find(nlmsg_attrdata(nlh, hdrlen),
   nlmsg_attrlen(nlh, hdrlen), attrtype);
}
# 856 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_validate_deprecated(const struct nlattr *head, int len,
       int maxtype,
       const struct nla_policy *policy,
       struct netlink_ext_ack *extack)
{
 return __nla_validate(head, len, maxtype, policy, NL_VALIDATE_LIBERAL,
         extack);
}
# 879 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_validate(const struct nlattr *head, int len, int maxtype,
          const struct nla_policy *policy,
          struct netlink_ext_ack *extack)
{
 return __nla_validate(head, len, maxtype, policy, (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE | NL_VALIDATE_UNSPEC | NL_VALIDATE_STRICT_ATTRS | NL_VALIDATE_NESTED),
         extack);
}
# 895 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_validate_deprecated(const struct nlmsghdr *nlh,
         int hdrlen, int maxtype,
         const struct nla_policy *policy,
         struct netlink_ext_ack *extack)
{
 if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen))
  return -22;

 return __nla_validate(nlmsg_attrdata(nlh, hdrlen),
         nlmsg_attrlen(nlh, hdrlen), maxtype,
         policy, NL_VALIDATE_LIBERAL, extack);
}
# 916 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_report(const struct nlmsghdr *nlh)
{
 return nlh ? !!(nlh->nlmsg_flags & 0x08) : 0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 nlmsg_seq(const struct nlmsghdr *nlh)
{
 return nlh ? nlh->nlmsg_seq : 0;
}
# 955 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlmsghdr *nlmsg_put(struct sk_buff *skb, u32 portid, u32 seq,
      int type, int payload, int flags)
{
 if (__builtin_expect(!!(skb_tailroom(skb) < nlmsg_total_size(payload)), 0))
  return ((void *)0);

 return __nlmsg_put(skb, portid, seq, type, payload, flags);
}
# 974 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *nlmsg_append(struct sk_buff *skb, u32 size)
{
 if (__builtin_expect(!!(skb_tailroom(skb) < ( ((size)+4U -1) & ~(4U -1) )), 0))
  return ((void *)0);

 if (( ((size)+4U -1) & ~(4U -1) ) - size)
  memset(skb_tail_pointer(skb) + size, 0,
         ( ((size)+4U -1) & ~(4U -1) ) - size);
 return __skb_put(skb, ( ((size)+4U -1) & ~(4U -1) ));
}
# 996 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlmsghdr *nlmsg_put_answer(struct sk_buff *skb,
      struct netlink_callback *cb,
      int type, int payload,
      int flags)
{
 return nlmsg_put(skb, (*(struct netlink_skb_parms*)&((cb->skb)->cb)).portid, cb->nlh->nlmsg_seq,
    type, payload, flags);
}
# 1013 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *nlmsg_new(size_t payload, gfp_t flags)
{
 return alloc_skb(nlmsg_total_size(payload), flags);
}
# 1027 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *nlmsg_new_large(size_t payload)
{
 return netlink_alloc_large_skb(nlmsg_total_size(payload), 0);
}
# 1041 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nlmsg_end(struct sk_buff *skb, struct nlmsghdr *nlh)
{
 nlh->nlmsg_len = skb_tail_pointer(skb) - (unsigned char *)nlh;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *nlmsg_get_pos(struct sk_buff *skb)
{
 return skb_tail_pointer(skb);
}
# 1064 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nlmsg_trim(struct sk_buff *skb, const void *mark)
{
 if (mark) {
  ({ int __ret_warn_on = !!((unsigned char *) mark < skb->data); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/netlink.h", 1067, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  skb_trim(skb, (unsigned char *) mark - skb->data);
 }
}
# 1080 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nlmsg_cancel(struct sk_buff *skb, struct nlmsghdr *nlh)
{
 nlmsg_trim(skb, nlh);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nlmsg_free(struct sk_buff *skb)
{
 kfree_skb(skb);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nlmsg_consume(struct sk_buff *skb)
{
 consume_skb(skb);
}
# 1115 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_multicast_filtered(struct sock *sk, struct sk_buff *skb,
        u32 portid, unsigned int group,
        gfp_t flags,
        netlink_filter_fn filter,
        void *filter_data)
{
 int err;

 (*(struct netlink_skb_parms*)&((skb)->cb)).dst_group = group;

 err = netlink_broadcast_filtered(sk, skb, portid, group, flags,
      filter, filter_data);
 if (err > 0)
  err = 0;

 return err;
}
# 1141 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_multicast(struct sock *sk, struct sk_buff *skb,
      u32 portid, unsigned int group, gfp_t flags)
{
 return nlmsg_multicast_filtered(sk, skb, portid, group, flags,
     ((void *)0), ((void *)0));
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nlmsg_unicast(struct sock *sk, struct sk_buff *skb, u32 portid)
{
 int err;

 err = netlink_unicast(sk, skb, portid, 0x40);
 if (err > 0)
  err = 0;

 return err;
}
# 1192 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
nl_dump_check_consistent(struct netlink_callback *cb,
    struct nlmsghdr *nlh)
{
 if (cb->prev_seq && cb->seq != cb->prev_seq)
  nlh->nlmsg_flags |= 0x10;
 cb->prev_seq = cb->seq;
}
# 1209 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_attr_size(int payload)
{
 return ((int) (((sizeof(struct nlattr)) + 4 - 1) & ~(4 - 1))) + payload;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_total_size(int payload)
{
 return (((nla_attr_size(payload)) + 4 - 1) & ~(4 - 1));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_padlen(int payload)
{
 return nla_total_size(payload) - nla_attr_size(payload);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_type(const struct nlattr *nla)
{
 return nla->nla_type & ~((1 << 15) | (1 << 14));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *nla_data(const struct nlattr *nla)
{
 return (char *) nla + ((int) (((sizeof(struct nlattr)) + 4 - 1) & ~(4 - 1)));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 nla_len(const struct nlattr *nla)
{
 return nla->nla_len - ((int) (((sizeof(struct nlattr)) + 4 - 1) & ~(4 - 1)));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_ok(const struct nlattr *nla, int remaining)
{
 return remaining >= (int) sizeof(*nla) &&
        nla->nla_len >= sizeof(*nla) &&
        nla->nla_len <= remaining;
}
# 1279 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *nla_next(const struct nlattr *nla, int *remaining)
{
 unsigned int totlen = (((nla->nla_len) + 4 - 1) & ~(4 - 1));

 *remaining -= totlen;
 return (struct nlattr *) ((char *) nla + totlen);
}
# 1294 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *
nla_find_nested(const struct nlattr *nla, int attrtype)
{
 return nla_find(nla_data(nla), nla_len(nla), attrtype);
}
# 1310 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_parse_nested(struct nlattr *tb[], int maxtype,
       const struct nlattr *nla,
       const struct nla_policy *policy,
       struct netlink_ext_ack *extack)
{
 if (!(nla->nla_type & (1 << 15))) {
  do { static const char __msg[] = "NLA_F_NESTED is missing"; struct netlink_ext_ack *__extack = (extack); do_trace_netlink_extack(__msg); if (__extack) { __extack->_msg = __msg; __extack->bad_attr = (nla); __extack->policy = (((void *)0)); } } while (0);
  return -22;
 }

 return __nla_parse(tb, maxtype, nla_data(nla), nla_len(nla), policy,
      (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE | NL_VALIDATE_UNSPEC | NL_VALIDATE_STRICT_ATTRS | NL_VALIDATE_NESTED), extack);
}
# 1334 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_parse_nested_deprecated(struct nlattr *tb[], int maxtype,
           const struct nlattr *nla,
           const struct nla_policy *policy,
           struct netlink_ext_ack *extack)
{
 return __nla_parse(tb, maxtype, nla_data(nla), nla_len(nla), policy,
      NL_VALIDATE_LIBERAL, extack);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_u8(struct sk_buff *skb, int attrtype, u8 value)
{

 u8 tmp = value;

 return nla_put(skb, attrtype, sizeof(u8), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_u16(struct sk_buff *skb, int attrtype, u16 value)
{
 u16 tmp = value;

 return nla_put(skb, attrtype, sizeof(u16), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_be16(struct sk_buff *skb, int attrtype, __be16 value)
{
 __be16 tmp = value;

 return nla_put(skb, attrtype, sizeof(__be16), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_net16(struct sk_buff *skb, int attrtype, __be16 value)
{
 __be16 tmp = value;

 return nla_put_be16(skb, attrtype | (1 << 14), tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_le16(struct sk_buff *skb, int attrtype, __le16 value)
{
 __le16 tmp = value;

 return nla_put(skb, attrtype, sizeof(__le16), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
 u32 tmp = value;

 return nla_put(skb, attrtype, sizeof(u32), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_uint(struct sk_buff *skb, int attrtype, u64 value)
{
 u64 tmp64 = value;
 u32 tmp32 = value;

 if (tmp64 == tmp32)
  return nla_put_u32(skb, attrtype, tmp32);
 return nla_put(skb, attrtype, sizeof(u64), &tmp64);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_be32(struct sk_buff *skb, int attrtype, __be32 value)
{
 __be32 tmp = value;

 return nla_put(skb, attrtype, sizeof(__be32), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_net32(struct sk_buff *skb, int attrtype, __be32 value)
{
 __be32 tmp = value;

 return nla_put_be32(skb, attrtype | (1 << 14), tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_le32(struct sk_buff *skb, int attrtype, __le32 value)
{
 __le32 tmp = value;

 return nla_put(skb, attrtype, sizeof(__le32), &tmp);
}
# 1484 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_u64_64bit(struct sk_buff *skb, int attrtype,
        u64 value, int padattr)
{
 u64 tmp = value;

 return nla_put_64bit(skb, attrtype, sizeof(u64), &tmp, padattr);
}
# 1499 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_be64(struct sk_buff *skb, int attrtype, __be64 value,
          int padattr)
{
 __be64 tmp = value;

 return nla_put_64bit(skb, attrtype, sizeof(__be64), &tmp, padattr);
}
# 1514 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_net64(struct sk_buff *skb, int attrtype, __be64 value,
    int padattr)
{
 __be64 tmp = value;

 return nla_put_be64(skb, attrtype | (1 << 14), tmp,
       padattr);
}
# 1530 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_le64(struct sk_buff *skb, int attrtype, __le64 value,
          int padattr)
{
 __le64 tmp = value;

 return nla_put_64bit(skb, attrtype, sizeof(__le64), &tmp, padattr);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_s8(struct sk_buff *skb, int attrtype, s8 value)
{
 s8 tmp = value;

 return nla_put(skb, attrtype, sizeof(s8), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_s16(struct sk_buff *skb, int attrtype, s16 value)
{
 s16 tmp = value;

 return nla_put(skb, attrtype, sizeof(s16), &tmp);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_s32(struct sk_buff *skb, int attrtype, s32 value)
{
 s32 tmp = value;

 return nla_put(skb, attrtype, sizeof(s32), &tmp);
}
# 1584 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_s64(struct sk_buff *skb, int attrtype, s64 value,
         int padattr)
{
 s64 tmp = value;

 return nla_put_64bit(skb, attrtype, sizeof(s64), &tmp, padattr);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_sint(struct sk_buff *skb, int attrtype, s64 value)
{
 s64 tmp64 = value;
 s32 tmp32 = value;

 if (tmp64 == tmp32)
  return nla_put_s32(skb, attrtype, tmp32);
 return nla_put(skb, attrtype, sizeof(s64), &tmp64);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_string(struct sk_buff *skb, int attrtype,
     const char *str)
{
 return nla_put(skb, attrtype, strlen(str) + 1, str);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_flag(struct sk_buff *skb, int attrtype)
{
 return nla_put(skb, attrtype, 0, ((void *)0));
}
# 1637 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_msecs(struct sk_buff *skb, int attrtype,
    unsigned long njiffies, int padattr)
{
 u64 tmp = jiffies_to_msecs(njiffies);

 return nla_put_64bit(skb, attrtype, sizeof(u64), &tmp, padattr);
}
# 1652 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_in_addr(struct sk_buff *skb, int attrtype,
      __be32 addr)
{
 __be32 tmp = addr;

 return nla_put_be32(skb, attrtype, tmp);
}
# 1667 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_in6_addr(struct sk_buff *skb, int attrtype,
       const struct in6_addr *addr)
{
 return nla_put(skb, attrtype, sizeof(*addr), addr);
}
# 1680 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_put_bitfield32(struct sk_buff *skb, int attrtype,
         __u32 value, __u32 selector)
{
 struct nla_bitfield32 tmp = { value, selector, };

 return nla_put(skb, attrtype, sizeof(tmp), &tmp);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 nla_get_u32(const struct nlattr *nla)
{
 return *(u32 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 nla_get_be32(const struct nlattr *nla)
{
 return *(__be32 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __le32 nla_get_le32(const struct nlattr *nla)
{
 return *(__le32 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 nla_get_u16(const struct nlattr *nla)
{
 return *(u16 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be16 nla_get_be16(const struct nlattr *nla)
{
 return *(__be16 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __le16 nla_get_le16(const struct nlattr *nla)
{
 return *(__le16 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 nla_get_u8(const struct nlattr *nla)
{
 return *(u8 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 nla_get_u64(const struct nlattr *nla)
{
 u64 tmp;

 nla_memcpy(&tmp, nla, sizeof(tmp));

 return tmp;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 nla_get_uint(const struct nlattr *nla)
{
 if (nla_len(nla) == sizeof(u32))
  return nla_get_u32(nla);
 return nla_get_u64(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be64 nla_get_be64(const struct nlattr *nla)
{
 __be64 tmp;

 nla_memcpy(&tmp, nla, sizeof(tmp));

 return tmp;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __le64 nla_get_le64(const struct nlattr *nla)
{
 return *(__le64 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s32 nla_get_s32(const struct nlattr *nla)
{
 return *(s32 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s16 nla_get_s16(const struct nlattr *nla)
{
 return *(s16 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s8 nla_get_s8(const struct nlattr *nla)
{
 return *(s8 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 nla_get_s64(const struct nlattr *nla)
{
 s64 tmp;

 nla_memcpy(&tmp, nla, sizeof(tmp));

 return tmp;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) s64 nla_get_sint(const struct nlattr *nla)
{
 if (nla_len(nla) == sizeof(s32))
  return nla_get_s32(nla);
 return nla_get_s64(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_get_flag(const struct nlattr *nla)
{
 return !!nla;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long nla_get_msecs(const struct nlattr *nla)
{
 u64 msecs = nla_get_u64(nla);

 return msecs_to_jiffies((unsigned long) msecs);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 nla_get_in_addr(const struct nlattr *nla)
{
 return *(__be32 *) nla_data(nla);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct in6_addr nla_get_in6_addr(const struct nlattr *nla)
{
 struct in6_addr tmp;

 nla_memcpy(&tmp, nla, sizeof(tmp));
 return tmp;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nla_bitfield32 nla_get_bitfield32(const struct nlattr *nla)
{
 struct nla_bitfield32 tmp;

 nla_memcpy(&tmp, nla, sizeof(tmp));
 return tmp;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *nla_memdup_noprof(const struct nlattr *src, gfp_t gfp)
{
 return kmemdup_noprof(nla_data(src), nla_len(src), gfp);
}
# 1925 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *nla_nest_start_noflag(struct sk_buff *skb,
         int attrtype)
{
 struct nlattr *start = (struct nlattr *)skb_tail_pointer(skb);

 if (nla_put(skb, attrtype, 0, ((void *)0)) < 0)
  return ((void *)0);

 return start;
}
# 1946 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct nlattr *nla_nest_start(struct sk_buff *skb, int attrtype)
{
 return nla_nest_start_noflag(skb, attrtype | (1 << 15));
}
# 1961 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_nest_end(struct sk_buff *skb, struct nlattr *start)
{
 start->nla_len = skb_tail_pointer(skb) - (unsigned char *)start;
 return skb->len;
}
# 1975 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void nla_nest_cancel(struct sk_buff *skb, struct nlattr *start)
{
 nlmsg_trim(skb, start);
}
# 1994 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __nla_validate_nested(const struct nlattr *start, int maxtype,
     const struct nla_policy *policy,
     unsigned int validate,
     struct netlink_ext_ack *extack)
{
 return __nla_validate(nla_data(start), nla_len(start), maxtype, policy,
         validate, extack);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
nla_validate_nested(const struct nlattr *start, int maxtype,
      const struct nla_policy *policy,
      struct netlink_ext_ack *extack)
{
 return __nla_validate_nested(start, maxtype, policy,
         (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE | NL_VALIDATE_UNSPEC | NL_VALIDATE_STRICT_ATTRS | NL_VALIDATE_NESTED), extack);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
nla_validate_nested_deprecated(const struct nlattr *start, int maxtype,
          const struct nla_policy *policy,
          struct netlink_ext_ack *extack)
{
 return __nla_validate_nested(start, maxtype, policy,
         NL_VALIDATE_LIBERAL, extack);
}
# 2028 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool nla_need_padding_for_64bit(struct sk_buff *skb)
{






 if (((((unsigned long)skb_tail_pointer(skb)) & ((typeof((unsigned long)skb_tail_pointer(skb)))(8) - 1)) == 0))
  return true;

 return false;
}
# 2054 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_align_64bit(struct sk_buff *skb, int padattr)
{
 if (nla_need_padding_for_64bit(skb) &&
     !nla_reserve(skb, padattr, 0))
  return -90;

 return 0;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int nla_total_size_64bit(int payload)
{
 return (((nla_attr_size(payload)) + 4 - 1) & ~(4 - 1))

  + (((nla_attr_size(0)) + 4 - 1) & ~(4 - 1))

  ;
}
# 2125 "../include/net/netlink.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool nla_is_last(const struct nlattr *nla, int rem)
{
 return nla->nla_len == rem;
}

void nla_get_range_unsigned(const struct nla_policy *pt,
       struct netlink_range_validation *range);
void nla_get_range_signed(const struct nla_policy *pt,
     struct netlink_range_validation_signed *range);

struct netlink_policy_dump_state;

int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
       const struct nla_policy *policy,
       unsigned int maxtype);
int netlink_policy_dump_get_policy_idx(struct netlink_policy_dump_state *state,
           const struct nla_policy *policy,
           unsigned int maxtype);
bool netlink_policy_dump_loop(struct netlink_policy_dump_state *state);
int netlink_policy_dump_write(struct sk_buff *skb,
         struct netlink_policy_dump_state *state);
int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt);
int netlink_policy_dump_write_attr(struct sk_buff *skb,
       const struct nla_policy *pt,
       int nestattr);
void netlink_policy_dump_free(struct netlink_policy_dump_state *state);
# 7 "../include/net/rtnetlink.h" 2

typedef int (*rtnl_doit_func)(struct sk_buff *, struct nlmsghdr *,
         struct netlink_ext_ack *);
typedef int (*rtnl_dumpit_func)(struct sk_buff *, struct netlink_callback *);

enum rtnl_link_flags {
 RTNL_FLAG_DOIT_UNLOCKED = ((((1UL))) << (0)),
 RTNL_FLAG_BULK_DEL_SUPPORTED = ((((1UL))) << (1)),
 RTNL_FLAG_DUMP_UNLOCKED = ((((1UL))) << (2)),
 RTNL_FLAG_DUMP_SPLIT_NLM_DONE = ((((1UL))) << (3)),
};

enum rtnl_kinds {
 RTNL_KIND_NEW,
 RTNL_KIND_DEL,
 RTNL_KIND_GET,
 RTNL_KIND_SET
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum rtnl_kinds rtnl_msgtype_kind(int msgtype)
{
 return msgtype & 0x3;
}

void rtnl_register(int protocol, int msgtype,
     rtnl_doit_func, rtnl_dumpit_func, unsigned int flags);
int rtnl_register_module(struct module *owner, int protocol, int msgtype,
    rtnl_doit_func, rtnl_dumpit_func, unsigned int flags);
int rtnl_unregister(int protocol, int msgtype);
void rtnl_unregister_all(int protocol);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rtnl_msg_family(const struct nlmsghdr *nlh)
{
 if (nlmsg_len(nlh) >= sizeof(struct rtgenmsg))
  return ((struct rtgenmsg *) nlmsg_data(nlh))->rtgen_family;
 else
  return 0;
}
# 79 "../include/net/rtnetlink.h"
struct rtnl_link_ops {
 struct list_head list;

 const char *kind;

 size_t priv_size;
 struct net_device *(*alloc)(struct nlattr *tb[],
       const char *ifname,
       unsigned char name_assign_type,
       unsigned int num_tx_queues,
       unsigned int num_rx_queues);
 void (*setup)(struct net_device *dev);

 bool netns_refund;
 unsigned int maxtype;
 const struct nla_policy *policy;
 int (*validate)(struct nlattr *tb[],
         struct nlattr *data[],
         struct netlink_ext_ack *extack);

 int (*newlink)(struct net *src_net,
        struct net_device *dev,
        struct nlattr *tb[],
        struct nlattr *data[],
        struct netlink_ext_ack *extack);
 int (*changelink)(struct net_device *dev,
           struct nlattr *tb[],
           struct nlattr *data[],
           struct netlink_ext_ack *extack);
 void (*dellink)(struct net_device *dev,
        struct list_head *head);

 size_t (*get_size)(const struct net_device *dev);
 int (*fill_info)(struct sk_buff *skb,
          const struct net_device *dev);

 size_t (*get_xstats_size)(const struct net_device *dev);
 int (*fill_xstats)(struct sk_buff *skb,
            const struct net_device *dev);
 unsigned int (*get_num_tx_queues)(void);
 unsigned int (*get_num_rx_queues)(void);

 unsigned int slave_maxtype;
 const struct nla_policy *slave_policy;
 int (*slave_changelink)(struct net_device *dev,
          struct net_device *slave_dev,
          struct nlattr *tb[],
          struct nlattr *data[],
          struct netlink_ext_ack *extack);
 size_t (*get_slave_size)(const struct net_device *dev,
        const struct net_device *slave_dev);
 int (*fill_slave_info)(struct sk_buff *skb,
         const struct net_device *dev,
         const struct net_device *slave_dev);
 struct net *(*get_link_net)(const struct net_device *dev);
 size_t (*get_linkxstats_size)(const struct net_device *dev,
             int attr);
 int (*fill_linkxstats)(struct sk_buff *skb,
         const struct net_device *dev,
         int *prividx, int attr);
};

int __rtnl_link_register(struct rtnl_link_ops *ops);
void __rtnl_link_unregister(struct rtnl_link_ops *ops);

int rtnl_link_register(struct rtnl_link_ops *ops);
void rtnl_link_unregister(struct rtnl_link_ops *ops);
# 161 "../include/net/rtnetlink.h"
struct rtnl_af_ops {
 struct list_head list;
 int family;

 int (*fill_link_af)(struct sk_buff *skb,
      const struct net_device *dev,
      u32 ext_filter_mask);
 size_t (*get_link_af_size)(const struct net_device *dev,
          u32 ext_filter_mask);

 int (*validate_link_af)(const struct net_device *dev,
          const struct nlattr *attr,
          struct netlink_ext_ack *extack);
 int (*set_link_af)(struct net_device *dev,
            const struct nlattr *attr,
            struct netlink_ext_ack *extack);
 int (*fill_stats_af)(struct sk_buff *skb,
       const struct net_device *dev);
 size_t (*get_stats_af_size)(const struct net_device *dev);
};

void rtnl_af_register(struct rtnl_af_ops *ops);
void rtnl_af_unregister(struct rtnl_af_ops *ops);

struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[]);
struct net_device *rtnl_create_link(struct net *net, const char *ifname,
        unsigned char name_assign_type,
        const struct rtnl_link_ops *ops,
        struct nlattr *tb[],
        struct netlink_ext_ack *extack);
int rtnl_delete_link(struct net_device *dev, u32 portid, const struct nlmsghdr *nlh);
int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm,
   u32 portid, const struct nlmsghdr *nlh);

int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
        struct netlink_ext_ack *exterr);
struct net *rtnl_get_net_ns_capable(struct sock *sk, int netnsid);
# 32 "../include/net/neighbour.h" 2
# 41 "../include/net/neighbour.h"
struct neighbour;

enum {
 NEIGH_VAR_MCAST_PROBES,
 NEIGH_VAR_UCAST_PROBES,
 NEIGH_VAR_APP_PROBES,
 NEIGH_VAR_MCAST_REPROBES,
 NEIGH_VAR_RETRANS_TIME,
 NEIGH_VAR_BASE_REACHABLE_TIME,
 NEIGH_VAR_DELAY_PROBE_TIME,
 NEIGH_VAR_INTERVAL_PROBE_TIME_MS,
 NEIGH_VAR_GC_STALETIME,
 NEIGH_VAR_QUEUE_LEN_BYTES,
 NEIGH_VAR_PROXY_QLEN,
 NEIGH_VAR_ANYCAST_DELAY,
 NEIGH_VAR_PROXY_DELAY,
 NEIGH_VAR_LOCKTIME,


 NEIGH_VAR_QUEUE_LEN,
 NEIGH_VAR_RETRANS_TIME_MS,
 NEIGH_VAR_BASE_REACHABLE_TIME_MS,

 NEIGH_VAR_GC_INTERVAL,
 NEIGH_VAR_GC_THRESH1,
 NEIGH_VAR_GC_THRESH2,
 NEIGH_VAR_GC_THRESH3,
 NEIGH_VAR_MAX
};

struct neigh_parms {
 possible_net_t net;
 struct net_device *dev;
 netdevice_tracker dev_tracker;
 struct list_head list;
 int (*neigh_setup)(struct neighbour *);
 struct neigh_table *tbl;

 void *sysctl_table;

 int dead;
 refcount_t refcnt;
 struct callback_head callback_head;

 int reachable_time;
 u32 qlen;
 int data[(NEIGH_VAR_LOCKTIME + 1)];
 unsigned long data_state[((((NEIGH_VAR_LOCKTIME + 1)) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void neigh_var_set(struct neigh_parms *p, int index, int val)
{
 set_bit(index, p->data_state);
 p->data[index] = val;
}
# 105 "../include/net/neighbour.h"
static inline void neigh_parms_data_state_setall(struct neigh_parms *p)
{
 bitmap_fill(p->data_state, NEIGH_VAR_DATA_MAX);
}

static inline void neigh_parms_data_state_cleanall(struct neigh_parms *p)
{
 bitmap_zero(p->data_state, NEIGH_VAR_DATA_MAX);
}

struct neigh_statistics {
 unsigned long allocs;
 unsigned long destroys;
 unsigned long hash_grows;

 unsigned long res_failed;

 unsigned long lookups;
 unsigned long hits;

 unsigned long rcv_probes_mcast;
 unsigned long rcv_probes_ucast;

 unsigned long periodic_gc_runs;
 unsigned long forced_gc_runs;

 unsigned long unres_discards;
 unsigned long table_fulls;
};



struct neighbour {
 struct neighbour *next;
 struct neigh_table *tbl;
 struct neigh_parms *parms;
 unsigned long confirmed;
 unsigned long updated;
 rwlock_t lock;
 refcount_t refcnt;
 unsigned int arp_queue_len_bytes;
 struct sk_buff_head arp_queue;
 struct timer_list timer;
 unsigned long used;
 atomic_t probes;
 u8 nud_state;
 u8 type;
 u8 dead;
 u8 protocol;
 u32 flags;
 seqlock_t ha_lock;
 unsigned char ha[ALIGN(MAX_ADDR_LEN, sizeof(unsigned long))] __aligned(8);
 struct hh_cache hh;
 int (*output)(struct neighbour *, struct sk_buff *);
 const struct neigh_ops *ops;
 struct list_head gc_list;
 struct list_head managed_list;
 struct callback_head rcu;
 struct net_device *dev;
 netdevice_tracker dev_tracker;
 u8 primary_key[];
};

struct neigh_ops {
 int family;
 void (*solicit)(struct neighbour *, struct sk_buff *);
 void (*error_report)(struct neighbour *, struct sk_buff *);
 int (*output)(struct neighbour *, struct sk_buff *);
 int (*connected_output)(struct neighbour *, struct sk_buff *);
};

struct pneigh_entry {
 struct pneigh_entry *next;
 possible_net_t net;
 struct net_device *dev;
 netdevice_tracker dev_tracker;
 u32 flags;
 u8 protocol;
 u32 key[];
};







struct neigh_hash_table {
 struct neighbour **hash_buckets;
 unsigned int hash_shift;
 __u32 hash_rnd[4];
 struct callback_head rcu;
};


struct neigh_table {
 int family;
 unsigned int entry_size;
 unsigned int key_len;
 __be16 protocol;
 __u32 (*hash)(const void *pkey,
     const struct net_device *dev,
     __u32 *hash_rnd);
 bool (*key_eq)(const struct neighbour *, const void *pkey);
 int (*constructor)(struct neighbour *);
 int (*pconstructor)(struct pneigh_entry *);
 void (*pdestructor)(struct pneigh_entry *);
 void (*proxy_redo)(struct sk_buff *skb);
 int (*is_multicast)(const void *pkey);
 bool (*allow_add)(const struct net_device *dev,
          struct netlink_ext_ack *extack);
 char *id;
 struct neigh_parms parms;
 struct list_head parms_list;
 int gc_interval;
 int gc_thresh1;
 int gc_thresh2;
 int gc_thresh3;
 unsigned long last_flush;
 struct delayed_work gc_work;
 struct delayed_work managed_work;
 struct timer_list proxy_timer;
 struct sk_buff_head proxy_queue;
 atomic_t entries;
 atomic_t gc_entries;
 struct list_head gc_list;
 struct list_head managed_list;
 rwlock_t lock;
 unsigned long last_rand;
 struct neigh_statistics *stats;
 struct neigh_hash_table *nht;
 struct pneigh_entry **phash_buckets;
};

enum {
 NEIGH_ARP_TABLE = 0,
 NEIGH_ND_TABLE = 1,
 NEIGH_DN_TABLE = 2,
 NEIGH_NR_TABLES,
 NEIGH_LINK_TABLE = NEIGH_NR_TABLES
};

static inline int neigh_parms_family(struct neigh_parms *p)
{
 return p->tbl->family;
}




static inline void *neighbour_priv(const struct neighbour *n)
{
 return (char *)n + n->tbl->entry_size;
}
# 277 "../include/net/neighbour.h"
extern const struct nla_policy nda_policy[];

static inline bool neigh_key_eq32(const struct neighbour *n, const void *pkey)
{
 return *(const u32 *)n->primary_key == *(const u32 *)pkey;
}

static inline bool neigh_key_eq128(const struct neighbour *n, const void *pkey)
{
 const u32 *n32 = (const u32 *)n->primary_key;
 const u32 *p32 = pkey;

 return ((n32[0] ^ p32[0]) | (n32[1] ^ p32[1]) |
  (n32[2] ^ p32[2]) | (n32[3] ^ p32[3])) == 0;
}
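The 128-bit comparison above folds the XOR of all four 32-bit words together with OR, so equality is decided by one branch-free test instead of four early-exit compares. A minimal stand-alone sketch of the same trick (hypothetical name `key_eq128`, plain `uint32_t` instead of kernel types):

```c
#include <stdint.h>

/* Branch-free 128-bit key equality, as used by neigh_key_eq128():
 * two keys are equal iff the OR of the per-word XORs is zero.
 * No data-dependent branches, so no mispredictions on lookup. */
static int key_eq128(const uint32_t n32[4], const uint32_t p32[4])
{
	return ((n32[0] ^ p32[0]) | (n32[1] ^ p32[1]) |
		(n32[2] ^ p32[2]) | (n32[3] ^ p32[3])) == 0;
}
```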

static inline struct neighbour *___neigh_lookup_noref(
 struct neigh_table *tbl,
 bool (*key_eq)(const struct neighbour *n, const void *pkey),
 __u32 (*hash)(const void *pkey,
        const struct net_device *dev,
        __u32 *hash_rnd),
 const void *pkey,
 struct net_device *dev)
{
 struct neigh_hash_table *nht = rcu_dereference(tbl->nht);
 struct neighbour *n;
 u32 hash_val;

 hash_val = hash(pkey, dev, nht->hash_rnd) >> (32 - nht->hash_shift);
 for (n = rcu_dereference(nht->hash_buckets[hash_val]);
      n != NULL;
      n = rcu_dereference(n->next)) {
  if (n->dev == dev && key_eq(n, pkey))
   return n;
 }

 return NULL;
}

static inline struct neighbour *__neigh_lookup_noref(struct neigh_table *tbl,
           const void *pkey,
           struct net_device *dev)
{
 return ___neigh_lookup_noref(tbl, tbl->key_eq, tbl->hash, pkey, dev);
}

static inline void neigh_confirm(struct neighbour *n)
{
 if (n) {
  unsigned long now = jiffies;


  if (READ_ONCE(n->confirmed) != now)
   WRITE_ONCE(n->confirmed, now);
 }
}
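neigh_confirm() only stores the new jiffies value when it actually changed, so the hot path does not dirty a shared cache line once per packet within the same tick. A stand-alone sketch of that write-avoidance pattern (hypothetical name `confirm`; the return value exists only so the behavior is observable):

```c
/* Write-avoidance: skip the store when the timestamp is already
 * current, mirroring the READ_ONCE()/WRITE_ONCE() pair in
 * neigh_confirm(). Returns 1 iff a store happened. */
static int confirm(volatile unsigned long *confirmed, unsigned long now)
{
	if (*confirmed != now) {
		*confirmed = now;
		return 1;
	}
	return 0;
}
```

In the kernel the load and store additionally go through READ_ONCE()/WRITE_ONCE() because concurrent writers race on the field without a lock.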

void neigh_table_init(int index, struct neigh_table *tbl);
int neigh_table_clear(int index, struct neigh_table *tbl);
struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
          struct net_device *dev);
struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
     struct net_device *dev, bool want_ref);
static inline struct neighbour *neigh_create(struct neigh_table *tbl,
          const void *pkey,
          struct net_device *dev)
{
 return __neigh_create(tbl, pkey, dev, true);
}
void neigh_destroy(struct neighbour *neigh);
int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb,
         const bool immediate_ok);
int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new, u32 flags,
   u32 nlmsg_pid);
void __neigh_set_probe_once(struct neighbour *neigh);
bool neigh_remove_one(struct neighbour *ndel, struct neigh_table *tbl);
void neigh_changeaddr(struct neigh_table *tbl, struct net_device *dev);
int neigh_ifdown(struct neigh_table *tbl, struct net_device *dev);
int neigh_carrier_down(struct neigh_table *tbl, struct net_device *dev);
int neigh_resolve_output(struct neighbour *neigh, struct sk_buff *skb);
int neigh_connected_output(struct neighbour *neigh, struct sk_buff *skb);
int neigh_direct_output(struct neighbour *neigh, struct sk_buff *skb);
struct neighbour *neigh_event_ns(struct neigh_table *tbl,
      u8 *lladdr, void *saddr,
      struct net_device *dev);

struct neigh_parms *neigh_parms_alloc(struct net_device *dev,
          struct neigh_table *tbl);
void neigh_parms_release(struct neigh_table *tbl, struct neigh_parms *parms);

static inline
struct net *neigh_parms_net(const struct neigh_parms *parms)
{
 return read_pnet(&parms->net);
}

unsigned long neigh_rand_reach_time(unsigned long base);

void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p,
      struct sk_buff *skb);
struct pneigh_entry *pneigh_lookup(struct neigh_table *tbl, struct net *net,
       const void *key, struct net_device *dev,
       int creat);
struct pneigh_entry *__pneigh_lookup(struct neigh_table *tbl, struct net *net,
         const void *key, struct net_device *dev);
int pneigh_delete(struct neigh_table *tbl, struct net *net, const void *key,
    struct net_device *dev);

static inline struct net *pneigh_net(const struct pneigh_entry *pneigh)
{
 return read_pnet(&pneigh->net);
}

void neigh_app_ns(struct neighbour *n);
void neigh_for_each(struct neigh_table *tbl,
      void (*cb)(struct neighbour *, void *), void *cookie);
void __neigh_for_each_release(struct neigh_table *tbl,
         int (*cb)(struct neighbour *));
int neigh_xmit(int fam, struct net_device *, const void *, struct sk_buff *);

struct neigh_seq_state {
 struct seq_net_private p;
 struct neigh_table *tbl;
 struct neigh_hash_table *nht;
 void *(*neigh_sub_iter)(struct neigh_seq_state *state,
    struct neighbour *n, loff_t *pos);
 unsigned int bucket;
 unsigned int flags;



};
void *neigh_seq_start(struct seq_file *, loff_t *, struct neigh_table *,
        unsigned int);
void *neigh_seq_next(struct seq_file *, void *, loff_t *);
void neigh_seq_stop(struct seq_file *, void *);

int neigh_proc_dointvec(const struct ctl_table *ctl, int write,
   void *buffer, size_t *lenp, loff_t *ppos);
int neigh_proc_dointvec_jiffies(const struct ctl_table *ctl, int write,
    void *buffer,
    size_t *lenp, loff_t *ppos);
int neigh_proc_dointvec_ms_jiffies(const struct ctl_table *ctl, int write,
       void *buffer, size_t *lenp, loff_t *ppos);

int neigh_sysctl_register(struct net_device *dev, struct neigh_parms *p,
     proc_handler *proc_handler);
void neigh_sysctl_unregister(struct neigh_parms *p);

static inline void __neigh_parms_put(struct neigh_parms *parms)
{
 refcount_dec(&parms->refcnt);
}

static inline struct neigh_parms *neigh_parms_clone(struct neigh_parms *parms)
{
 refcount_inc(&parms->refcnt);
 return parms;
}





static inline void neigh_release(struct neighbour *neigh)
{
 if (refcount_dec_and_test(&neigh->refcnt))
  neigh_destroy(neigh);
}

static inline struct neighbour *neigh_clone(struct neighbour *neigh)
{
 if (neigh)
  refcount_inc(&neigh->refcnt);
 return neigh;
}



static __always_inline int neigh_event_send_probe(struct neighbour *neigh,
        struct sk_buff *skb,
        const bool immediate_ok)
{
 unsigned long now = jiffies;

 if (READ_ONCE(neigh->used) != now)
  WRITE_ONCE(neigh->used, now);
 if (!(READ_ONCE(neigh->nud_state) & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE)))
  return __neigh_event_send(neigh, skb, immediate_ok);
 return 0;
}

static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
{
 return neigh_event_send_probe(neigh, skb, true);
}
# 489 "../include/net/neighbour.h"
static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
{
 unsigned int hh_alen = 0;
 unsigned int seq;
 unsigned int hh_len;

 do {
  seq = read_seqbegin(&hh->hh_lock);
  hh_len = READ_ONCE(hh->hh_len);
  if (likely(hh_len <= HH_DATA_MOD)) {
   hh_alen = HH_DATA_MOD;

   /* skb_push() would proceed silently if we have room for
    * the unaligned size but not for the aligned size:
    * check headroom explicitly.
    */
   if (likely(skb_headroom(skb) >= HH_DATA_MOD)) {
    /* this is inlined by gcc */
    memcpy(skb->data - HH_DATA_MOD, hh->hh_data,
           HH_DATA_MOD);
   }
  } else {
   hh_alen = HH_DATA_ALIGN(hh_len);

   if (likely(skb_headroom(skb) >= hh_alen)) {
    memcpy(skb->data - hh_alen, hh->hh_data,
           hh_alen);
   }
  }
 } while (read_seqretry(&hh->hh_lock, seq));

 if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) {
  kfree_skb(skb);
  return NET_XMIT_DROP;
 }

 __skb_push(skb, hh_len);
 return dev_queue_xmit(skb);
}

static inline int neigh_output(struct neighbour *n, struct sk_buff *skb,
          bool skip_cache)
{
 const struct hh_cache *hh = &n->hh;




 if (!skip_cache &&
     (READ_ONCE(n->nud_state) & NUD_CONNECTED) &&
     READ_ONCE(hh->hh_len))
  return neigh_hh_output(hh, skb);

 return READ_ONCE(n->output)(n, skb);
}

static inline struct neighbour *
__neigh_lookup(struct neigh_table *tbl, const void *pkey, struct net_device *dev, int creat)
{
 struct neighbour *n = neigh_lookup(tbl, pkey, dev);

 if (n || !creat)
  return n;

 n = neigh_create(tbl, pkey, dev);
 return IS_ERR(n) ? NULL : n;
}

static inline struct neighbour *
__neigh_lookup_errno(struct neigh_table *tbl, const void *pkey,
  struct net_device *dev)
{
 struct neighbour *n = neigh_lookup(tbl, pkey, dev);

 if (n)
  return n;

 return neigh_create(tbl, pkey, dev);
}

struct neighbour_cb {
 unsigned long sched_next;
 unsigned int flags;
};





static inline void neigh_ha_snapshot(char *dst, const struct neighbour *n,
         const struct net_device *dev)
{
 unsigned int seq;

 do {
  seq = read_seqbegin(&n->ha_lock);
  memcpy(dst, n->ha, dev->addr_len);
 } while (read_seqretry(&n->ha_lock, seq));
}

static inline void neigh_update_is_router(struct neighbour *neigh, u32 flags,
       int *notify)
{
 u8 ndm_flags = 0;

 ndm_flags |= (flags & NEIGH_UPDATE_F_ISROUTER) ? NTF_ROUTER : 0;
 if ((neigh->flags ^ ndm_flags) & NTF_ROUTER) {
  if (ndm_flags & NTF_ROUTER)
   neigh->flags |= NTF_ROUTER;
  else
   neigh->flags &= ~NTF_ROUTER;
  *notify = 1;
 }
}
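The routine above translates a flag from one bit namespace into another, touches the stored flags only when the bit actually differs, and reports via *notify whether anything changed. A distilled stand-alone sketch (hypothetical names; F_ISROUTER and F_ROUTER stand in for NEIGH_UPDATE_F_ISROUTER and NTF_ROUTER):

```c
#include <stdint.h>

#define F_ISROUTER (1u << 6)	/* bit in the update-flags namespace */
#define F_ROUTER   (1u << 7)	/* bit in the stored-flags namespace */

/* Translate F_ISROUTER into F_ROUTER, write only on change,
 * and signal the caller when a netlink notification is due. */
static void sync_router_flag(uint32_t *flags, uint32_t update_flags, int *notify)
{
	uint32_t ndm = (update_flags & F_ISROUTER) ? F_ROUTER : 0;

	if ((*flags ^ ndm) & F_ROUTER) {	/* bit differs? */
		if (ndm & F_ROUTER)
			*flags |= F_ROUTER;
		else
			*flags &= ~F_ROUTER;
		*notify = 1;
	}
}
```

The XOR-then-mask test is what keeps redundant updates from generating spurious notifications.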
# 21 "../include/net/dst.h" 2



struct sk_buff;

struct dst_entry {
 struct net_device *dev;
 struct dst_ops *ops;
 unsigned long _metrics;
 unsigned long expires;

 struct xfrm_state *xfrm;



 int (*input)(struct sk_buff *);
 int (*output)(struct net *net, struct sock *sk, struct sk_buff *skb);

 unsigned short flags;
# 56 "../include/net/dst.h"
 short obsolete;




 unsigned short header_len;
 unsigned short trailer_len;
# 71 "../include/net/dst.h"
 int __use;
 unsigned long lastuse;
 struct callback_head callback_head;
 short error;
 short __pad;
 __u32 tclassid;

 struct lwtunnel_state *lwtstate;
 rcuref_t __rcuref;

 netdevice_tracker dev_tracker;
# 90 "../include/net/dst.h"
 struct list_head rt_uncached;
 struct uncached_list *rt_uncached_list;



};

struct dst_metrics {
 u32 metrics[RTAX_MAX];
 refcount_t refcnt;
} __aligned(4);
extern const struct dst_metrics dst_default_metrics;

u32 *dst_cow_metrics_generic(struct dst_entry *dst, unsigned long old);
# 112 "../include/net/dst.h"
static inline bool dst_metrics_read_only(const struct dst_entry *dst)
{
 return dst->_metrics & DST_METRICS_READ_ONLY;
}

void __dst_destroy_metrics_generic(struct dst_entry *dst, unsigned long old);

static inline void dst_destroy_metrics_generic(struct dst_entry *dst)
{
 unsigned long val = dst->_metrics;
 if (!(val & DST_METRICS_READ_ONLY))
  __dst_destroy_metrics_generic(dst, val);
}

static inline u32 *dst_metrics_write_ptr(struct dst_entry *dst)
{
 unsigned long p = dst->_metrics;

 BUG_ON(!p);

 if (p & DST_METRICS_READ_ONLY)
  return dst->ops->cow_metrics(dst, p);
 return __DST_METRICS_PTR(p);
}




static inline void dst_init_metrics(struct dst_entry *dst,
        const u32 *src_metrics,
        bool read_only)
{
 dst->_metrics = ((unsigned long) src_metrics) |
  (read_only ? DST_METRICS_READ_ONLY : 0);
}

static inline void dst_copy_metrics(struct dst_entry *dest, const struct dst_entry *src)
{
 u32 *dst_metrics = dst_metrics_write_ptr(dest);

 if (dst_metrics) {
  u32 *src_metrics = DST_METRICS_PTR(src);

  memcpy(dst_metrics, src_metrics, RTAX_MAX * sizeof(u32));
 }
}

static inline u32 *dst_metrics_ptr(struct dst_entry *dst)
{
 return DST_METRICS_PTR(dst);
}

static inline u32
dst_metric_raw(const struct dst_entry *dst, const int metric)
{
 u32 *p = DST_METRICS_PTR(dst);

 return p[metric-1];
}
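The `& ~0x3UL` masking seen in the metrics accessors is a pointer-tagging scheme: the metrics array is at least 4-byte aligned, so the two low bits of its address are always zero and can carry flags such as "read only" inside the single `_metrics` word. A stand-alone sketch under those assumptions (hypothetical names; METRICS_READ_ONLY and METRICS_FLAGS stand in for DST_METRICS_READ_ONLY and the low-bit mask):

```c
#include <stdint.h>

#define METRICS_READ_ONLY 0x1UL	/* flag stored in the low bit */
#define METRICS_FLAGS     0x3UL	/* all low bits reserved for flags */

/* Pack an aligned pointer and a flag into one word. */
static unsigned long metrics_pack(const uint32_t *metrics, int read_only)
{
	return (unsigned long)metrics | (read_only ? METRICS_READ_ONLY : 0);
}

/* Strip the flag bits to recover the real pointer. */
static uint32_t *metrics_ptr(unsigned long packed)
{
	return (uint32_t *)(packed & ~METRICS_FLAGS);
}

static int metrics_read_only(unsigned long packed)
{
	return (packed & METRICS_READ_ONLY) != 0;
}
```

This is why `struct dst_metrics` carries an alignment attribute: without the guaranteed alignment, the low bits would not be free for tagging.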

static inline u32
dst_metric(const struct dst_entry *dst, const int metric)
{
 WARN_ON_ONCE(metric == RTAX_HOPLIMIT || metric == RTAX_ADVMSS ||
       metric == RTAX_MTU);


 return dst_metric_raw(dst, metric);
}

static inline u32
dst_metric_advmss(const struct dst_entry *dst)
{
 u32 advmss = dst_metric_raw(dst, RTAX_ADVMSS);

 if (!advmss)
  advmss = dst->ops->default_advmss(dst);

 return advmss;
}

static inline void dst_metric_set(struct dst_entry *dst, int metric, u32 val)
{
 u32 *p = dst_metrics_write_ptr(dst);

 if (p)
  p[metric-1] = val;
}







static inline u32
dst_feature(const struct dst_entry *dst, u32 feature)
{
 return dst_metric(dst, RTAX_FEATURES) & feature;
}

static inline u32 dst_mtu(const struct dst_entry *dst)
{
 return dst->ops->mtu(dst);
}


static inline unsigned long dst_metric_rtt(const struct dst_entry *dst, int metric)
{
 return msecs_to_jiffies(dst_metric(dst, metric));
}

static inline int
dst_metric_locked(const struct dst_entry *dst, int metric)
{
 return dst_metric(dst, RTAX_LOCK) & (1 << metric);
}

static inline void dst_hold(struct dst_entry *dst)
{




 BUILD_BUG_ON(offsetof(struct dst_entry, __rcuref) & 63);
 WARN_ON(!rcuref_get(&dst->__rcuref));
}

static inline void dst_use_noref(struct dst_entry *dst, unsigned long time)
{
 if (__builtin_expect(!!(time != dst->lastuse), 0)) {
  dst->__use++;
  dst->lastuse = time;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dst_entry *dst_clone(struct dst_entry *dst)
{
 if (dst)
  dst_hold(dst);
 return dst;
}

void dst_release(struct dst_entry *dst);

void dst_release_immediate(struct dst_entry *dst);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void refdst_drop(unsigned long refdst)
{
 if (!(refdst & 1UL))
  dst_release((struct dst_entry *)(refdst & ~(1UL)));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_drop(struct sk_buff *skb)
{
 if (skb->_skb_refdst) {
  refdst_drop(skb->_skb_refdst);
  skb->_skb_refdst = 0UL;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_dst_copy(struct sk_buff *nskb, unsigned long refdst)
{
 nskb->slow_gro |= !!refdst;
 nskb->_skb_refdst = refdst;
 if (!(nskb->_skb_refdst & 1UL))
  dst_clone(skb_dst(nskb));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_copy(struct sk_buff *nskb, const struct sk_buff *oskb)
{
 __skb_dst_copy(nskb, oskb->_skb_refdst);
}
# 300 "../include/net/dst.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dst_hold_safe(struct dst_entry *dst)
{
 return rcuref_get(&dst->__rcuref);
}
# 312 "../include/net/dst.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool skb_dst_force(struct sk_buff *skb)
{
 if (skb_dst_is_noref(skb)) {
  struct dst_entry *dst = skb_dst(skb);

  ({ int __ret_warn_on = !!(!rcu_read_lock_held()); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/dst.h", 317, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  if (!dst_hold_safe(dst))
   dst = ((void *)0);

  skb->_skb_refdst = (unsigned long)dst;
  skb->slow_gro |= !!dst;
 }

 return skb->_skb_refdst != 0UL;
}
# 338 "../include/net/dst.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
       struct net *net)
{
 skb->dev = dev;

 skb_clear_hash_if_not_l4(skb);
 skb_set_queue_mapping(skb, 0);
 skb_scrub_packet(skb, !net_eq(net, dev_net(dev)));
}
# 363 "../include/net/dst.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
     struct net *net)
{
 atomic_long_inc(&(dev)->stats.__rx_packets);
 atomic_long_add((skb->len), &(dev)->stats.__rx_bytes);
 __skb_tunnel_rx(skb, dev, net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 dst_tclassid(const struct sk_buff *skb)
{

 const struct dst_entry *dst;

 dst = skb_dst(skb);
 if (dst)
  return dst->tclassid;

 return 0;
}

int dst_discard_out(struct net *net, struct sock *sk, struct sk_buff *skb);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_discard(struct sk_buff *skb)
{
 return dst_discard_out(&init_net, skb->sk, skb);
}
void *dst_alloc(struct dst_ops *ops, struct net_device *dev,
  int initial_obsolete, unsigned short flags);
void dst_init(struct dst_entry *dst, struct dst_ops *ops,
       struct net_device *dev, int initial_obsolete,
       unsigned short flags);
void dst_dev_put(struct dst_entry *dst);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_confirm(struct dst_entry *dst)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, const void *daddr)
{
 struct neighbour *n = dst->ops->neigh_lookup(dst, ((void *)0), daddr);
 return IS_ERR(n) ? ((void *)0) : n;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst,
           struct sk_buff *skb)
{
 struct neighbour *n;

 if (({ bool __ret_do_once = !!(!dst->ops->neigh_lookup); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/dst.h", 410, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); }))
  return ((void *)0);

 n = dst->ops->neigh_lookup(dst, skb, ((void *)0));

 return IS_ERR(n) ? ((void *)0) : n;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_confirm_neigh(const struct dst_entry *dst,
         const void *daddr)
{
 if (dst->ops->confirm_neigh)
  dst->ops->confirm_neigh(dst, daddr);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_link_failure(struct sk_buff *skb)
{
 struct dst_entry *dst = skb_dst(skb);
 if (dst && dst->ops && dst->ops->link_failure)
  dst->ops->link_failure(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void dst_set_expires(struct dst_entry *dst, int timeout)
{
 unsigned long expires = jiffies + timeout;

 if (expires == 0)
  expires = 1;

 if (dst->expires == 0 || (({ unsigned long __dummy; typeof(dst->expires) __dummy2; (void)(&__dummy == &__dummy2); 1; }) && ({ unsigned long __dummy; typeof(expires) __dummy2; (void)(&__dummy == &__dummy2); 1; }) && ((long)((expires) - (dst->expires)) < 0)))
  dst->expires = expires;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_output(struct net *net, struct sock *sk, struct sk_buff *skb)
{
 return skb_dst(skb)->output(net, sk, skb);


}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int dst_input(struct sk_buff *skb)
{
 return skb_dst(skb)->input(skb);

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct dst_entry *dst_check(struct dst_entry *dst, u32 cookie)
{
 if (dst->obsolete)
  dst = dst->ops->check(dst, cookie);

 return dst;
}


enum {
 XFRM_LOOKUP_ICMP = 1 << 0,
 XFRM_LOOKUP_QUEUE = 1 << 1,
 XFRM_LOOKUP_KEEP_DST_REF = 1 << 2,
};

struct flowi;
# 517 "../include/net/dst.h"
struct dst_entry *xfrm_lookup(struct net *net, struct dst_entry *dst_orig,
         const struct flowi *fl, const struct sock *sk,
         int flags);

struct dst_entry *xfrm_lookup_with_ifid(struct net *net,
     struct dst_entry *dst_orig,
     const struct flowi *fl,
     const struct sock *sk, int flags,
     u32 if_id);

struct dst_entry *xfrm_lookup_route(struct net *net, struct dst_entry *dst_orig,
        const struct flowi *fl, const struct sock *sk,
        int flags);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
{
 return dst->xfrm;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_update_pmtu(struct sk_buff *skb, u32 mtu)
{
 struct dst_entry *dst = skb_dst(skb);

 if (dst && dst->ops->update_pmtu)
  dst->ops->update_pmtu(dst, ((void *)0), skb, mtu, true);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu)
{
 struct dst_entry *dst = skb_dst(skb);

 if (dst && dst->ops->update_pmtu)
  dst->ops->update_pmtu(dst, ((void *)0), skb, mtu, false);
}

struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
          struct sk_buff *skb, u32 mtu, bool confirm_neigh);
void dst_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
       struct sk_buff *skb);
u32 *dst_blackhole_cow_metrics(struct dst_entry *dst, unsigned long old);
struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
          struct sk_buff *skb,
          const void *daddr);
unsigned int dst_blackhole_mtu(const struct dst_entry *dst);
# 67 "../include/net/sock.h" 2

# 1 "../include/net/tcp_states.h" 1
# 12 "../include/net/tcp_states.h"
enum {
 TCP_ESTABLISHED = 1,
 TCP_SYN_SENT,
 TCP_SYN_RECV,
 TCP_FIN_WAIT1,
 TCP_FIN_WAIT2,
 TCP_TIME_WAIT,
 TCP_CLOSE,
 TCP_CLOSE_WAIT,
 TCP_LAST_ACK,
 TCP_LISTEN,
 TCP_CLOSING,
 TCP_NEW_SYN_RECV,
 TCP_BOUND_INACTIVE,

 TCP_MAX_STATES
};





enum {
 TCPF_ESTABLISHED = (1 << TCP_ESTABLISHED),
 TCPF_SYN_SENT = (1 << TCP_SYN_SENT),
 TCPF_SYN_RECV = (1 << TCP_SYN_RECV),
 TCPF_FIN_WAIT1 = (1 << TCP_FIN_WAIT1),
 TCPF_FIN_WAIT2 = (1 << TCP_FIN_WAIT2),
 TCPF_TIME_WAIT = (1 << TCP_TIME_WAIT),
 TCPF_CLOSE = (1 << TCP_CLOSE),
 TCPF_CLOSE_WAIT = (1 << TCP_CLOSE_WAIT),
 TCPF_LAST_ACK = (1 << TCP_LAST_ACK),
 TCPF_LISTEN = (1 << TCP_LISTEN),
 TCPF_CLOSING = (1 << TCP_CLOSING),
 TCPF_NEW_SYN_RECV = (1 << TCP_NEW_SYN_RECV),
 TCPF_BOUND_INACTIVE = (1 << TCP_BOUND_INACTIVE),
};
# 69 "../include/net/sock.h" 2
# 1 "../include/linux/net_tstamp.h" 1
# 16 "../include/linux/net_tstamp.h"
enum hwtstamp_source {
 HWTSTAMP_SOURCE_UNSPEC,
 HWTSTAMP_SOURCE_NETDEV,
 HWTSTAMP_SOURCE_PHYLIB,
};
# 39 "../include/linux/net_tstamp.h"
struct kernel_hwtstamp_config {
 int flags;
 int tx_type;
 int rx_filter;
 struct ifreq *ifr;
 bool copied_to_user;
 enum hwtstamp_source source;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hwtstamp_config_to_kernel(struct kernel_hwtstamp_config *kernel_cfg,
          const struct hwtstamp_config *cfg)
{
 kernel_cfg->flags = cfg->flags;
 kernel_cfg->tx_type = cfg->tx_type;
 kernel_cfg->rx_filter = cfg->rx_filter;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void hwtstamp_config_from_kernel(struct hwtstamp_config *cfg,
            const struct kernel_hwtstamp_config *kernel_cfg)
{
 cfg->flags = kernel_cfg->flags;
 cfg->tx_type = kernel_cfg->tx_type;
 cfg->rx_filter = kernel_cfg->rx_filter;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool kernel_hwtstamp_config_changed(const struct kernel_hwtstamp_config *a,
        const struct kernel_hwtstamp_config *b)
{
 return a->flags != b->flags ||
        a->tx_type != b->tx_type ||
        a->rx_filter != b->rx_filter;
}
# 70 "../include/net/sock.h" 2
# 1 "../include/net/l3mdev.h" 1
# 11 "../include/net/l3mdev.h"
# 1 "../include/net/fib_rules.h" 1







# 1 "../include/uapi/linux/fib_rules.h" 1
# 19 "../include/uapi/linux/fib_rules.h"
struct fib_rule_hdr {
 __u8 family;
 __u8 dst_len;
 __u8 src_len;
 __u8 tos;

 __u8 table;
 __u8 res1;
 __u8 res2;
 __u8 action;

 __u32 flags;
};

struct fib_rule_uid_range {
 __u32 start;
 __u32 end;
};

struct fib_rule_port_range {
 __u16 start;
 __u16 end;
};

enum {
 FRA_UNSPEC,
 FRA_DST,
 FRA_SRC,
 FRA_IIFNAME,

 FRA_GOTO,
 FRA_UNUSED2,
 FRA_PRIORITY,
 FRA_UNUSED3,
 FRA_UNUSED4,
 FRA_UNUSED5,
 FRA_FWMARK,
 FRA_FLOW,
 FRA_TUN_ID,
 FRA_SUPPRESS_IFGROUP,
 FRA_SUPPRESS_PREFIXLEN,
 FRA_TABLE,
 FRA_FWMASK,
 FRA_OIFNAME,
 FRA_PAD,
 FRA_L3MDEV,
 FRA_UID_RANGE,
 FRA_PROTOCOL,
 FRA_IP_PROTO,
 FRA_SPORT_RANGE,
 FRA_DPORT_RANGE,
 __FRA_MAX
};



enum {
 FR_ACT_UNSPEC,
 FR_ACT_TO_TBL,
 FR_ACT_GOTO,
 FR_ACT_NOP,
 FR_ACT_RES3,
 FR_ACT_RES4,
 FR_ACT_BLACKHOLE,
 FR_ACT_UNREACHABLE,
 FR_ACT_PROHIBIT,
 __FR_ACT_MAX,
};
# 9 "../include/net/fib_rules.h" 2



# 1 "../include/net/fib_notifier.h" 1







struct module;

struct fib_notifier_info {
 int family;
 struct netlink_ext_ack *extack;
};

enum fib_event_type {
 FIB_EVENT_ENTRY_REPLACE,
 FIB_EVENT_ENTRY_APPEND,
 FIB_EVENT_ENTRY_ADD,
 FIB_EVENT_ENTRY_DEL,
 FIB_EVENT_RULE_ADD,
 FIB_EVENT_RULE_DEL,
 FIB_EVENT_NH_ADD,
 FIB_EVENT_NH_DEL,
 FIB_EVENT_VIF_ADD,
 FIB_EVENT_VIF_DEL,
};

struct fib_notifier_ops {
 int family;
 struct list_head list;
 unsigned int (*fib_seq_read)(struct net *net);
 int (*fib_dump)(struct net *net, struct notifier_block *nb,
   struct netlink_ext_ack *extack);
 struct module *owner;
 struct callback_head rcu;
};

int call_fib_notifier(struct notifier_block *nb,
        enum fib_event_type event_type,
        struct fib_notifier_info *info);
int call_fib_notifiers(struct net *net, enum fib_event_type event_type,
         struct fib_notifier_info *info);
int register_fib_notifier(struct net *net, struct notifier_block *nb,
     void (*cb)(struct notifier_block *nb),
     struct netlink_ext_ack *extack);
int unregister_fib_notifier(struct net *net, struct notifier_block *nb);
struct fib_notifier_ops *
fib_notifier_ops_register(const struct fib_notifier_ops *tmpl, struct net *net);
void fib_notifier_ops_unregister(struct fib_notifier_ops *ops);
# 13 "../include/net/fib_rules.h" 2


struct fib_kuid_range {
 kuid_t start;
 kuid_t end;
};

struct fib_rule {
 struct list_head list;
 int iifindex;
 int oifindex;
 u32 mark;
 u32 mark_mask;
 u32 flags;
 u32 table;
 u8 action;
 u8 l3mdev;
 u8 proto;
 u8 ip_proto;
 u32 target;
 __be64 tun_id;
 struct fib_rule *ctarget;
 struct net *fr_net;

 refcount_t refcnt;
 u32 pref;
 int suppress_ifgroup;
 int suppress_prefixlen;
 char iifname[16];
 char oifname[16];
 struct fib_kuid_range uid_range;
 struct fib_rule_port_range sport_range;
 struct fib_rule_port_range dport_range;
 struct callback_head rcu;
};

struct fib_lookup_arg {
 void *lookup_ptr;
 const void *lookup_data;
 void *result;
 struct fib_rule *rule;
 u32 table;
 int flags;


};

struct fib_rules_ops {
 int family;
 struct list_head list;
 int rule_size;
 int addr_size;
 int unresolved_rules;
 int nr_goto_rules;
 unsigned int fib_rules_seq;

 int (*action)(struct fib_rule *,
       struct flowi *, int,
       struct fib_lookup_arg *);
 bool (*suppress)(struct fib_rule *, int,
         struct fib_lookup_arg *);
 int (*match)(struct fib_rule *,
      struct flowi *, int);
 int (*configure)(struct fib_rule *,
          struct sk_buff *,
          struct fib_rule_hdr *,
          struct nlattr **,
          struct netlink_ext_ack *);
 int (*delete)(struct fib_rule *);
 int (*compare)(struct fib_rule *,
        struct fib_rule_hdr *,
        struct nlattr **);
 int (*fill)(struct fib_rule *, struct sk_buff *,
     struct fib_rule_hdr *);
 size_t (*nlmsg_payload)(struct fib_rule *);



 void (*flush_cache)(struct fib_rules_ops *ops);

 int nlgroup;
 struct list_head rules_list;
 struct module *owner;
 struct net *fro_net;
 struct callback_head rcu;
};

struct fib_rule_notifier_info {
 struct fib_notifier_info info;
 struct fib_rule *rule;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fib_rule_get(struct fib_rule *rule)
{
 refcount_inc(&rule->refcnt);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fib_rule_put(struct fib_rule *rule)
{
 if (refcount_dec_and_test(&rule->refcnt))
  do { typeof (rule) ___p = (rule); if (___p) { do { __attribute__((__noreturn__)) extern void __compiletime_assert_406(void) __attribute__((__error__("BUILD_BUG_ON failed: " "!__is_kvfree_rcu_offset(offsetof(typeof(*(rule)), rcu))"))); if (!(!(!((__builtin_offsetof(typeof(*(rule)), rcu)) < 4096)))) __compiletime_assert_406(); } while (0); kvfree_call_rcu(&((___p)->rcu), (void *) (___p)); } } while (0);
}
# 123 "../include/net/fib_rules.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 fib_rule_get_table(struct fib_rule *rule,
         struct fib_lookup_arg *arg)
{
 return rule->table;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 frh_get_table(struct fib_rule_hdr *frh, struct nlattr **nla)
{
 if (nla[FRA_TABLE])
  return nla_get_u32(nla[FRA_TABLE]);
 return frh->table;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib_rule_port_range_set(const struct fib_rule_port_range *range)
{
 return range->start != 0 && range->end != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib_rule_port_inrange(const struct fib_rule_port_range *a,
      __be16 port)
{
 return (__u16)(__builtin_constant_p(( __u16)(__be16)(port)) ? ((__u16)( (((__u16)(( __u16)(__be16)(port)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(port)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(port))) >= a->start &&
  (__u16)(__builtin_constant_p(( __u16)(__be16)(port)) ? ((__u16)( (((__u16)(( __u16)(__be16)(port)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(port)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(port))) <= a->end;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib_rule_port_range_valid(const struct fib_rule_port_range *a)
{
 return a->start != 0 && a->end != 0 && a->end < 0xffff &&
  a->start <= a->end;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib_rule_port_range_compare(struct fib_rule_port_range *a,
            struct fib_rule_port_range *b)
{
 return a->start == b->start &&
  a->end == b->end;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib_rule_requires_fldissect(struct fib_rule *rule)
{
 return rule->iifindex != 1 && (rule->ip_proto ||
  fib_rule_port_range_set(&rule->sport_range) ||
  fib_rule_port_range_set(&rule->dport_range));
}

struct fib_rules_ops *fib_rules_register(const struct fib_rules_ops *,
      struct net *);
void fib_rules_unregister(struct fib_rules_ops *);

int fib_rules_lookup(struct fib_rules_ops *, struct flowi *, int flags,
       struct fib_lookup_arg *);
int fib_default_rule_add(struct fib_rules_ops *, u32 pref, u32 table);
bool fib_rule_matchall(const struct fib_rule *rule);
int fib_rules_dump(struct net *net, struct notifier_block *nb, int family,
     struct netlink_ext_ack *extack);
unsigned int fib_rules_seq_read(struct net *net, int family);

int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh,
     struct netlink_ext_ack *extack);
int fib_nl_delrule(struct sk_buff *skb, struct nlmsghdr *nlh,
     struct netlink_ext_ack *extack);


# 12 "../include/net/l3mdev.h" 2

enum l3mdev_type {
 L3MDEV_TYPE_UNSPEC,
 L3MDEV_TYPE_VRF,
 __L3MDEV_TYPE_MAX
};



typedef int (*lookup_by_table_id_t)(struct net *net, u32 table_id);
# 35 "../include/net/l3mdev.h"
struct l3mdev_ops {
 u32 (*l3mdev_fib_table)(const struct net_device *dev);
 struct sk_buff * (*l3mdev_l3_rcv)(struct net_device *dev,
       struct sk_buff *skb, u16 proto);
 struct sk_buff * (*l3mdev_l3_out)(struct net_device *dev,
       struct sock *sk, struct sk_buff *skb,
       u16 proto);


 struct dst_entry * (*l3mdev_link_scope_lookup)(const struct net_device *dev,
       struct flowi6 *fl6);
};
# 223 "../include/net/l3mdev.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int l3mdev_master_ifindex_rcu(const struct net_device *dev)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int l3mdev_master_ifindex(struct net_device *dev)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int l3mdev_master_ifindex_by_index(struct net *net, int ifindex)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int l3mdev_master_upper_ifindex_by_index_rcu(struct net *net, int ifindex)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int l3mdev_master_upper_ifindex_by_index(struct net *net, int ifindex)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct net_device *l3mdev_master_dev_rcu(const struct net_device *dev)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 l3mdev_fib_table_rcu(const struct net_device *dev)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 l3mdev_fib_table(const struct net_device *dev)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 l3mdev_fib_table_by_index(struct net *net, int ifindex)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool netif_index_is_l3_master(struct net *net, int ifindex)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct dst_entry *l3mdev_link_scope_lookup(struct net *net, struct flowi6 *fl6)
{
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct sk_buff *l3mdev_ip_rcv(struct sk_buff *skb)
{
 return skb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct sk_buff *l3mdev_ip6_rcv(struct sk_buff *skb)
{
 return skb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct sk_buff *l3mdev_ip_out(struct sock *sk, struct sk_buff *skb)
{
 return skb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct sk_buff *l3mdev_ip6_out(struct sock *sk, struct sk_buff *skb)
{
 return skb;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int l3mdev_table_lookup_register(enum l3mdev_type l3type,
     lookup_by_table_id_t fn)
{
 return -95;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void l3mdev_table_lookup_unregister(enum l3mdev_type l3type,
        lookup_by_table_id_t fn)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int l3mdev_ifindex_lookup_by_table_id(enum l3mdev_type l3type, struct net *net,
          u32 table_id)
{
 return -19;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int l3mdev_fib_rule_match(struct net *net, struct flowi *fl,
     struct fib_lookup_arg *arg)
{
 return 1;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void l3mdev_update_flow(struct net *net, struct flowi *fl)
{
}
# 71 "../include/net/sock.h" 2
# 83 "../include/net/sock.h"
typedef struct {
 spinlock_t slock;
 int owned;
 wait_queue_head_t wq;







 struct lockdep_map dep_map;

} socket_lock_t;

struct sock;
struct proto;
struct net;

typedef __u32 __portpair;
typedef __u64 __addrpair;
# 150 "../include/net/sock.h"
struct sock_common {
 union {
  __addrpair skc_addrpair;
  struct {
   __be32 skc_daddr;
   __be32 skc_rcv_saddr;
  };
 };
 union {
  unsigned int skc_hash;
  __u16 skc_u16hashes[2];
 };

 union {
  __portpair skc_portpair;
  struct {
   __be16 skc_dport;
   __u16 skc_num;
  };
 };

 unsigned short skc_family;
 volatile unsigned char skc_state;
 unsigned char skc_reuse:4;
 unsigned char skc_reuseport:1;
 unsigned char skc_ipv6only:1;
 unsigned char skc_net_refcnt:1;
 int skc_bound_dev_if;
 union {
  struct hlist_node skc_bind_node;
  struct hlist_node skc_portaddr_node;
 };
 struct proto *skc_prot;
 possible_net_t skc_net;


 struct in6_addr skc_v6_daddr;
 struct in6_addr skc_v6_rcv_saddr;


 atomic64_t skc_cookie;






 union {
  unsigned long skc_flags;
  struct sock *skc_listener;
  struct inet_timewait_death_row *skc_tw_dr;
 };





 int skc_dontcopy_begin[0];

 union {
  struct hlist_node skc_node;
  struct hlist_nulls_node skc_nulls_node;
 };
 unsigned short skc_tx_queue_mapping;

 unsigned short skc_rx_queue_mapping;

 union {
  int skc_incoming_cpu;
  u32 skc_rcv_wnd;
  u32 skc_tw_rcv_nxt;
 };

 refcount_t skc_refcnt;

 int skc_dontcopy_end[0];
 union {
  u32 skc_rxhash;
  u32 skc_window_clamp;
  u32 skc_tw_snd_nxt;
 };

};

struct bpf_local_storage;
struct sk_filter;
# 341 "../include/net/sock.h"
struct sock {




 struct sock_common __sk_common;
# 381 "../include/net/sock.h"
 __u8 __cacheline_group_begin__sock_write_rx[0];

 atomic_t sk_drops;
 __s32 sk_peek_off;
 struct sk_buff_head sk_error_queue;
 struct sk_buff_head sk_receive_queue;
# 395 "../include/net/sock.h"
 struct {
  atomic_t rmem_alloc;
  int len;
  struct sk_buff *head;
  struct sk_buff *tail;
 } sk_backlog;


 __u8 __cacheline_group_end__sock_write_rx[0];

 __u8 __cacheline_group_begin__sock_read_rx[0];

 struct dst_entry *sk_rx_dst;
 int sk_rx_dst_ifindex;
 u32 sk_rx_dst_cookie;


 unsigned int sk_ll_usec;
 unsigned int sk_napi_id;
 u16 sk_busy_poll_budget;
 u8 sk_prefer_busy_poll;

 u8 sk_userlocks;
 int sk_rcvbuf;

 struct sk_filter *sk_filter;
 union {
  struct socket_wq *sk_wq;

  struct socket_wq *sk_wq_raw;

 };

 void (*sk_data_ready)(struct sock *sk);
 long sk_rcvtimeo;
 int sk_rcvlowat;
 __u8 __cacheline_group_end__sock_read_rx[0];

 __u8 __cacheline_group_begin__sock_read_rxtx[0];
 int sk_err;
 struct socket *sk_socket;
 struct mem_cgroup *sk_memcg;

 struct xfrm_policy *sk_policy[2];

 __u8 __cacheline_group_end__sock_read_rxtx[0];

 __u8 __cacheline_group_begin__sock_write_rxtx[0];
 socket_lock_t sk_lock;
 u32 sk_reserved_mem;
 int sk_forward_alloc;
 u32 sk_tsflags;
 __u8 __cacheline_group_end__sock_write_rxtx[0];

 __u8 __cacheline_group_begin__sock_write_tx[0];
 int sk_write_pending;
 atomic_t sk_omem_alloc;
 int sk_sndbuf;

 int sk_wmem_queued;
 refcount_t sk_wmem_alloc;
 unsigned long sk_tsq_flags;
 union {
  struct sk_buff *sk_send_head;
  struct rb_root tcp_rtx_queue;
 };
 struct sk_buff_head sk_write_queue;
 u32 sk_dst_pending_confirm;
 u32 sk_pacing_status;
 struct page_frag sk_frag;
 struct timer_list sk_timer;

 unsigned long sk_pacing_rate;
 atomic_t sk_zckey;
 atomic_t sk_tskey;
 __u8 __cacheline_group_end__sock_write_tx[0];

 __u8 __cacheline_group_begin__sock_read_tx[0];
 unsigned long sk_max_pacing_rate;
 long sk_sndtimeo;
 u32 sk_priority;
 u32 sk_mark;
 struct dst_entry *sk_dst_cache;
 netdev_features_t sk_route_caps;

 struct sk_buff* (*sk_validate_xmit_skb)(struct sock *sk,
       struct net_device *dev,
       struct sk_buff *skb);

 u16 sk_gso_type;
 u16 sk_gso_max_segs;
 unsigned int sk_gso_max_size;
 gfp_t sk_allocation;
 u32 sk_txhash;
 u8 sk_pacing_shift;
 bool sk_use_task_frag;
 __u8 __cacheline_group_end__sock_read_tx[0];





 u8 sk_gso_disabled : 1,
    sk_kern_sock : 1,
    sk_no_check_tx : 1,
    sk_no_check_rx : 1;
 u8 sk_shutdown;
 u16 sk_type;
 u16 sk_protocol;
 unsigned long sk_lingertime;
 struct proto *sk_prot_creator;
 rwlock_t sk_callback_lock;
 int sk_err_soft;
 u32 sk_ack_backlog;
 u32 sk_max_ack_backlog;
 kuid_t sk_uid;
 spinlock_t sk_peer_lock;
 int sk_bind_phc;
 struct pid *sk_peer_pid;
 const struct cred *sk_peer_cred;

 ktime_t sk_stamp;

 seqlock_t sk_stamp_seq;

 int sk_disconnects;

 u8 sk_txrehash;
 u8 sk_clockid;
 u8 sk_txtime_deadline_mode : 1,
    sk_txtime_report_errors : 1,
    sk_txtime_unused : 6;

 void *sk_user_data;



 struct sock_cgroup_data sk_cgrp_data;
 void (*sk_state_change)(struct sock *sk);
 void (*sk_write_space)(struct sock *sk);
 void (*sk_error_report)(struct sock *sk);
 int (*sk_backlog_rcv)(struct sock *sk,
        struct sk_buff *skb);
 void (*sk_destruct)(struct sock *sk);
 struct sock_reuseport *sk_reuseport_cb;

 struct bpf_local_storage *sk_bpf_storage;

 struct callback_head sk_rcu;
 netns_tracker ns_tracker;
};

struct sock_bh_locked {
 struct sock *sock;
 local_lock_t bh_lock;
};

enum sk_pacing {
 SK_PACING_NONE = 0,
 SK_PACING_NEEDED = 1,
 SK_PACING_FQ = 2,
};
# 583 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_user_data_is_nocopy(const struct sock *sk)
{
 return ((uintptr_t)sk->sk_user_data & 1UL);
}
# 600 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *
__locked_read_sk_user_data_with_flags(const struct sock *sk,
          uintptr_t flags)
{
 uintptr_t sk_user_data =
  (uintptr_t)({ typeof(*(((*((void **)&(sk)->sk_user_data))))) *__UNIQUE_ID_rcu407 = (typeof(*(((*((void **)&(sk)->sk_user_data))))) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_408(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(char) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(short) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(int) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(long)) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(long long))) __compiletime_assert_408(); } while (0); (*(const volatile typeof( _Generic(((((*((void **)&(sk)->sk_user_data))))), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((((*((void **)&(sk)->sk_user_data))))))) *)&((((*((void **)&(sk)->sk_user_data)))))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lock_is_held(&(&sk->sk_callback_lock)->dep_map)) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/sock.h", 606, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(((*((void **)&(sk)->sk_user_data))))) *)(__UNIQUE_ID_rcu407)); });


 ({ bool __ret_do_once = !!(flags & ~(1UL | 2UL | 4UL)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 608, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 if ((sk_user_data & flags) == flags)
  return (void *)(sk_user_data & ~(1UL | 2UL | 4UL));
 return ((void *)0);
}
# 623 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *
__rcu_dereference_sk_user_data_with_flags(const struct sock *sk,
       uintptr_t flags)
{
 uintptr_t sk_user_data = (uintptr_t)({ typeof(*(((*((void **)&(sk)->sk_user_data))))) *__UNIQUE_ID_rcu409 = (typeof(*(((*((void **)&(sk)->sk_user_data))))) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_410(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(char) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(short) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(int) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(long)) || sizeof((((*((void **)&(sk)->sk_user_data))))) == sizeof(long long))) __compiletime_assert_410(); } while (0); (*(const volatile typeof( _Generic(((((*((void **)&(sk)->sk_user_data))))), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((((*((void **)&(sk)->sk_user_data))))))) *)&((((*((void **)&(sk)->sk_user_data)))))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/sock.h", 627, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(((*((void **)&(sk)->sk_user_data))))) *)(__UNIQUE_ID_rcu409)); });

 ({ bool __ret_do_once = !!(flags & ~(1UL | 2UL | 4UL)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 629, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

 if ((sk_user_data & flags) == flags)
  return (void *)(sk_user_data & ~(1UL | 2UL | 4UL));
 return ((void *)0);
}
# 650 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct net *sock_net(const struct sock *sk)
{
 return read_pnet(&sk->__sk_common.skc_net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void sock_net_set(struct sock *sk, struct net *net)
{
 write_pnet(&sk->__sk_common.skc_net, net);
}
# 673 "../include/net/sock.h"
int sk_set_peek_off(struct sock *sk, int val);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_peek_offset(const struct sock *sk, int flags)
{
 if (__builtin_expect(!!(flags & 2), 0)) {
  return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_411(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_peek_off) == sizeof(char) || sizeof(sk->sk_peek_off) == sizeof(short) || sizeof(sk->sk_peek_off) == sizeof(int) || sizeof(sk->sk_peek_off) == sizeof(long)) || sizeof(sk->sk_peek_off) == sizeof(long long))) __compiletime_assert_411(); } while (0); (*(const volatile typeof( _Generic((sk->sk_peek_off), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_peek_off))) *)&(sk->sk_peek_off)); });
 }

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_peek_offset_bwd(struct sock *sk, int val)
{
 s32 off = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_412(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_peek_off) == sizeof(char) || sizeof(sk->sk_peek_off) == sizeof(short) || sizeof(sk->sk_peek_off) == sizeof(int) || sizeof(sk->sk_peek_off) == sizeof(long)) || sizeof(sk->sk_peek_off) == sizeof(long long))) __compiletime_assert_412(); } while (0); (*(const volatile typeof( _Generic((sk->sk_peek_off), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_peek_off))) *)&(sk->sk_peek_off)); });

 if (__builtin_expect(!!(off >= 0), 0)) {
  off = ({ s32 __UNIQUE_ID_x_413 = (off - val); s32 __UNIQUE_ID_y_414 = (0); ((__UNIQUE_ID_x_413) > (__UNIQUE_ID_y_414) ? (__UNIQUE_ID_x_413) : (__UNIQUE_ID_y_414)); });
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_415(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_peek_off) == sizeof(char) || sizeof(sk->sk_peek_off) == sizeof(short) || sizeof(sk->sk_peek_off) == sizeof(int) || sizeof(sk->sk_peek_off) == sizeof(long)) || sizeof(sk->sk_peek_off) == sizeof(long long))) __compiletime_assert_415(); } while (0); do { *(volatile typeof(sk->sk_peek_off) *)&(sk->sk_peek_off) = (off); } while (0); } while (0);
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_peek_offset_fwd(struct sock *sk, int val)
{
 sk_peek_offset_bwd(sk, -val);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_entry(const struct hlist_node *node)
{
 return ({ void *__mptr = (void *)(node); _Static_assert(__builtin_types_compatible_p(typeof(*(node)), typeof(((struct sock *)0)->__sk_common.skc_node)) || __builtin_types_compatible_p(typeof(*(node)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sock *)(__mptr - __builtin_offsetof(struct sock, __sk_common.skc_node))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *__sk_head(const struct hlist_head *head)
{
 return ({ void *__mptr = (void *)(head->first); _Static_assert(__builtin_types_compatible_p(typeof(*(head->first)), typeof(((struct sock *)0)->__sk_common.skc_node)) || __builtin_types_compatible_p(typeof(*(head->first)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sock *)(__mptr - __builtin_offsetof(struct sock, __sk_common.skc_node))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_head(const struct hlist_head *head)
{
 return hlist_empty(head) ? ((void *)0) : __sk_head(head);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *__sk_nulls_head(const struct hlist_nulls_head *head)
{
 return ({ void *__mptr = (void *)(head->first); _Static_assert(__builtin_types_compatible_p(typeof(*(head->first)), typeof(((struct sock *)0)->__sk_common.skc_nulls_node)) || __builtin_types_compatible_p(typeof(*(head->first)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sock *)(__mptr - __builtin_offsetof(struct sock, __sk_common.skc_nulls_node))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_nulls_head(const struct hlist_nulls_head *head)
{
 return hlist_nulls_empty(head) ? ((void *)0) : __sk_nulls_head(head);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_next(const struct sock *sk)
{
 return ({ typeof(sk->__sk_common.skc_node.next) ____ptr = (sk->__sk_common.skc_node.next); ____ptr ? ({ void *__mptr = (void *)(____ptr); _Static_assert(__builtin_types_compatible_p(typeof(*(____ptr)), typeof(((struct sock *)0)->__sk_common.skc_node)) || __builtin_types_compatible_p(typeof(*(____ptr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sock *)(__mptr - __builtin_offsetof(struct sock, __sk_common.skc_node))); }) : ((void *)0); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_nulls_next(const struct sock *sk)
{
 return (!is_a_nulls(sk->__sk_common.skc_nulls_node.next)) ?
  ({ void *__mptr = (void *)(sk->__sk_common.skc_nulls_node.next); _Static_assert(__builtin_types_compatible_p(typeof(*(sk->__sk_common.skc_nulls_node.next)), typeof(((struct sock *)0)->__sk_common.skc_nulls_node)) || __builtin_types_compatible_p(typeof(*(sk->__sk_common.skc_nulls_node.next)), typeof(void)), "pointer type mismatch in container_of()"); ((struct sock *)(__mptr - __builtin_offsetof(struct sock, __sk_common.skc_nulls_node))); }) :

  ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_unhashed(const struct sock *sk)
{
 return hlist_unhashed(&sk->__sk_common.skc_node);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_hashed(const struct sock *sk)
{
 return !sk_unhashed(sk);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_node_init(struct hlist_node *node)
{
 node->pprev = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_del_node(struct sock *sk)
{
 __hlist_del(&sk->__sk_common.skc_node);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __sk_del_node_init(struct sock *sk)
{
 if (sk_hashed(sk)) {
  __sk_del_node(sk);
  sk_node_init(&sk->__sk_common.skc_node);
  return true;
 }
 return false;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void sock_hold(struct sock *sk)
{
 refcount_inc(&sk->__sk_common.skc_refcnt);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__always_inline__)) void __sock_put(struct sock *sk)
{
 refcount_dec(&sk->__sk_common.skc_refcnt);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_del_node_init(struct sock *sk)
{
 bool rc = __sk_del_node_init(sk);

 if (rc) {

  ({ int __ret_warn_on = !!(refcount_read(&sk->__sk_common.skc_refcnt) == 1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 796, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  __sock_put(sk);
 }
 return rc;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __sk_nulls_del_node_init_rcu(struct sock *sk)
{
 if (sk_hashed(sk)) {
  hlist_nulls_del_init_rcu(&sk->__sk_common.skc_nulls_node);
  return true;
 }
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_nulls_del_node_init_rcu(struct sock *sk)
{
 bool rc = __sk_nulls_del_node_init_rcu(sk);

 if (rc) {

  ({ int __ret_warn_on = !!(refcount_read(&sk->__sk_common.skc_refcnt) == 1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 818, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
  __sock_put(sk);
 }
 return rc;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_add_node(struct sock *sk, struct hlist_head *list)
{
 hlist_add_head(&sk->__sk_common.skc_node, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_add_node(struct sock *sk, struct hlist_head *list)
{
 sock_hold(sk);
 __sk_add_node(sk, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
{
 sock_hold(sk);
 if (1 && sk->__sk_common.skc_reuseport &&
     sk->__sk_common.skc_family == 10)
  hlist_add_tail_rcu(&sk->__sk_common.skc_node, list);
 else
  hlist_add_head_rcu(&sk->__sk_common.skc_node, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_add_node_tail_rcu(struct sock *sk, struct hlist_head *list)
{
 sock_hold(sk);
 hlist_add_tail_rcu(&sk->__sk_common.skc_node, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
{
 hlist_nulls_add_head_rcu(&sk->__sk_common.skc_nulls_node, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_nulls_add_node_tail_rcu(struct sock *sk, struct hlist_nulls_head *list)
{
 hlist_nulls_add_tail_rcu(&sk->__sk_common.skc_nulls_node, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
{
 sock_hold(sk);
 __sk_nulls_add_node_rcu(sk, list);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_del_bind_node(struct sock *sk)
{
 __hlist_del(&sk->__sk_common.skc_bind_node);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_add_bind_node(struct sock *sk,
     struct hlist_head *list)
{
 hlist_add_head(&sk->__sk_common.skc_bind_node, list);
}
# 910 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct user_namespace *sk_user_ns(const struct sock *sk)
{




 return sk->sk_socket->file->f_cred->user_ns;
}


enum sock_flags {
 SOCK_DEAD,
 SOCK_DONE,
 SOCK_URGINLINE,
 SOCK_KEEPOPEN,
 SOCK_LINGER,
 SOCK_DESTROY,
 SOCK_BROADCAST,
 SOCK_TIMESTAMP,
 SOCK_ZAPPED,
 SOCK_USE_WRITE_QUEUE,
 SOCK_DBG,
 SOCK_RCVTSTAMP,
 SOCK_RCVTSTAMPNS,
 SOCK_LOCALROUTE,
 SOCK_MEMALLOC,
 SOCK_TIMESTAMPING_RX_SOFTWARE,
 SOCK_FASYNC,
 SOCK_RXQ_OVFL,
 SOCK_ZEROCOPY,
 SOCK_WIFI_STATUS,
 SOCK_NOFCS,



 SOCK_FILTER_LOCKED,
 SOCK_SELECT_ERR_QUEUE,
 SOCK_RCU_FREE,
 SOCK_TXTIME,
 SOCK_XDP,
 SOCK_TSTAMP_NEW,
 SOCK_RCVMARK,
};



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_copy_flags(struct sock *nsk, const struct sock *osk)
{
 nsk->__sk_common.skc_flags = osk->__sk_common.skc_flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_set_flag(struct sock *sk, enum sock_flags flag)
{
 ((__builtin_constant_p(flag) && __builtin_constant_p((uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sk->__sk_common.skc_flags))) ? generic___set_bit(flag, &sk->__sk_common.skc_flags) : arch___set_bit(flag, &sk->__sk_common.skc_flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_reset_flag(struct sock *sk, enum sock_flags flag)
{
 ((__builtin_constant_p(flag) && __builtin_constant_p((uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sk->__sk_common.skc_flags))) ? generic___clear_bit(flag, &sk->__sk_common.skc_flags) : arch___clear_bit(flag, &sk->__sk_common.skc_flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_valbool_flag(struct sock *sk, enum sock_flags bit,
         int valbool)
{
 if (valbool)
  sock_set_flag(sk, bit);
 else
  sock_reset_flag(sk, bit);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sock_flag(const struct sock *sk, enum sock_flags flag)
{
 return ((__builtin_constant_p(flag) && __builtin_constant_p((uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&sk->__sk_common.skc_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&sk->__sk_common.skc_flags))) ? const_test_bit(flag, &sk->__sk_common.skc_flags) : arch_test_bit(flag, &sk->__sk_common.skc_flags));
}


extern struct static_key_false memalloc_socks_key;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_memalloc_socks(void)
{
 return __builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&memalloc_socks_key)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&memalloc_socks_key)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&memalloc_socks_key)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&memalloc_socks_key)->key) > 0; })), 0);
}

void __receive_sock(struct file *file);
# 1004 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t sk_gfp_mask(const struct sock *sk, gfp_t gfp_mask)
{
 return gfp_mask | (sk->sk_allocation & (( gfp_t)((((1UL))) << (___GFP_MEMALLOC_BIT))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_acceptq_removed(struct sock *sk)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_416(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_ack_backlog) == sizeof(char) || sizeof(sk->sk_ack_backlog) == sizeof(short) || sizeof(sk->sk_ack_backlog) == sizeof(int) || sizeof(sk->sk_ack_backlog) == sizeof(long)) || sizeof(sk->sk_ack_backlog) == sizeof(long long))) __compiletime_assert_416(); } while (0); do { *(volatile typeof(sk->sk_ack_backlog) *)&(sk->sk_ack_backlog) = (sk->sk_ack_backlog - 1); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_acceptq_added(struct sock *sk)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_417(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_ack_backlog) == sizeof(char) || sizeof(sk->sk_ack_backlog) == sizeof(short) || sizeof(sk->sk_ack_backlog) == sizeof(int) || sizeof(sk->sk_ack_backlog) == sizeof(long)) || sizeof(sk->sk_ack_backlog) == sizeof(long long))) __compiletime_assert_417(); } while (0); do { *(volatile typeof(sk->sk_ack_backlog) *)&(sk->sk_ack_backlog) = (sk->sk_ack_backlog + 1); } while (0); } while (0);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_acceptq_is_full(const struct sock *sk)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_418(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_ack_backlog) == sizeof(char) || sizeof(sk->sk_ack_backlog) == sizeof(short) || sizeof(sk->sk_ack_backlog) == sizeof(int) || sizeof(sk->sk_ack_backlog) == sizeof(long)) || sizeof(sk->sk_ack_backlog) == sizeof(long long))) __compiletime_assert_418(); } while (0); (*(const volatile typeof( _Generic((sk->sk_ack_backlog), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_ack_backlog))) *)&(sk->sk_ack_backlog)); }) > ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_419(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_max_ack_backlog) == sizeof(char) || sizeof(sk->sk_max_ack_backlog) == sizeof(short) || sizeof(sk->sk_max_ack_backlog) == sizeof(int) || sizeof(sk->sk_max_ack_backlog) == sizeof(long)) || sizeof(sk->sk_max_ack_backlog) == sizeof(long long))) __compiletime_assert_419(); } while (0); (*(const volatile typeof( _Generic((sk->sk_max_ack_backlog), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_max_ack_backlog))) *)&(sk->sk_max_ack_backlog)); });
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_stream_min_wspace(const struct sock *sk)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_420(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_wmem_queued) == sizeof(char) || sizeof(sk->sk_wmem_queued) == sizeof(short) || sizeof(sk->sk_wmem_queued) == sizeof(int) || sizeof(sk->sk_wmem_queued) == sizeof(long)) || sizeof(sk->sk_wmem_queued) == sizeof(long long))) __compiletime_assert_420(); } while (0); (*(const volatile typeof( _Generic((sk->sk_wmem_queued), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_wmem_queued))) *)&(sk->sk_wmem_queued)); }) >> 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_stream_wspace(const struct sock *sk)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_421(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_sndbuf) == sizeof(char) || sizeof(sk->sk_sndbuf) == sizeof(short) || sizeof(sk->sk_sndbuf) == sizeof(int) || sizeof(sk->sk_sndbuf) == sizeof(long)) || sizeof(sk->sk_sndbuf) == sizeof(long long))) __compiletime_assert_421(); } while (0); (*(const volatile typeof( _Generic((sk->sk_sndbuf), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_sndbuf))) *)&(sk->sk_sndbuf)); }) - ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_422(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_wmem_queued) == sizeof(char) || sizeof(sk->sk_wmem_queued) == sizeof(short) || sizeof(sk->sk_wmem_queued) == sizeof(int) || sizeof(sk->sk_wmem_queued) == sizeof(long)) || sizeof(sk->sk_wmem_queued) == sizeof(long long))) __compiletime_assert_422(); } while (0); (*(const volatile typeof( _Generic((sk->sk_wmem_queued), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_wmem_queued))) *)&(sk->sk_wmem_queued)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_wmem_queued_add(struct sock *sk, int val)
{
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_423(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_wmem_queued) == sizeof(char) || sizeof(sk->sk_wmem_queued) == sizeof(short) || sizeof(sk->sk_wmem_queued) == sizeof(int) || sizeof(sk->sk_wmem_queued) == sizeof(long)) || sizeof(sk->sk_wmem_queued) == sizeof(long long))) __compiletime_assert_423(); } while (0); do { *(volatile typeof(sk->sk_wmem_queued) *)&(sk->sk_wmem_queued) = (sk->sk_wmem_queued + val); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_forward_alloc_add(struct sock *sk, int val)
{

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_424(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_forward_alloc) == sizeof(char) || sizeof(sk->sk_forward_alloc) == sizeof(short) || sizeof(sk->sk_forward_alloc) == sizeof(int) || sizeof(sk->sk_forward_alloc) == sizeof(long)) || sizeof(sk->sk_forward_alloc) == sizeof(long long))) __compiletime_assert_424(); } while (0); do { *(volatile typeof(sk->sk_forward_alloc) *)&(sk->sk_forward_alloc) = (sk->sk_forward_alloc + val); } while (0); } while (0);
}

void sk_stream_write_space(struct sock *sk);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
{

 skb_dst_force(skb);

 if (!sk->sk_backlog.tail)
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_425(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_backlog.head) == sizeof(char) || sizeof(sk->sk_backlog.head) == sizeof(short) || sizeof(sk->sk_backlog.head) == sizeof(int) || sizeof(sk->sk_backlog.head) == sizeof(long)) || sizeof(sk->sk_backlog.head) == sizeof(long long))) __compiletime_assert_425(); } while (0); do { *(volatile typeof(sk->sk_backlog.head) *)&(sk->sk_backlog.head) = (skb); } while (0); } while (0);
 else
  sk->sk_backlog.tail->next = skb;

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_426(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_backlog.tail) == sizeof(char) || sizeof(sk->sk_backlog.tail) == sizeof(short) || sizeof(sk->sk_backlog.tail) == sizeof(int) || sizeof(sk->sk_backlog.tail) == sizeof(long)) || sizeof(sk->sk_backlog.tail) == sizeof(long long))) __compiletime_assert_426(); } while (0); do { *(volatile typeof(sk->sk_backlog.tail) *)&(sk->sk_backlog.tail) = (skb); } while (0); } while (0);
 skb->next = ((void *)0);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_rcvqueues_full(const struct sock *sk, unsigned int limit)
{
 unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_backlog.rmem_alloc);

 return qsize > limit;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) int sk_add_backlog(struct sock *sk, struct sk_buff *skb,
           unsigned int limit)
{
 if (sk_rcvqueues_full(sk, limit))
  return -105;






 if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
  return -12;

 __sk_add_backlog(sk, skb);
 sk->sk_backlog.len += skb->truesize;
 return 0;
}

int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
{
 if (sk_memalloc_socks() && skb_pfmemalloc(skb))
  return __sk_backlog_rcv(sk, skb);

 return sk->sk_backlog_rcv(sk, skb);



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_incoming_cpu_update(struct sock *sk)
{
 int cpu = 0;

 if (__builtin_expect(!!(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_427(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(char) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(short) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(int) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(long)) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(long long))) __compiletime_assert_427(); } while (0); (*(const volatile typeof( _Generic((sk->__sk_common.skc_incoming_cpu), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->__sk_common.skc_incoming_cpu))) *)&(sk->__sk_common.skc_incoming_cpu)); }) != cpu), 0))
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_428(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(char) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(short) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(int) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(long)) || sizeof(sk->__sk_common.skc_incoming_cpu) == sizeof(long long))) __compiletime_assert_428(); } while (0); do { *(volatile typeof(sk->__sk_common.skc_incoming_cpu) *)&(sk->__sk_common.skc_incoming_cpu) = (cpu); } while (0); } while (0);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_rps_save_rxhash(struct sock *sk,
     const struct sk_buff *skb)
{







}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_rps_reset_rxhash(struct sock *sk)
{




}
# 1161 "../include/net/sock.h"
int sk_stream_wait_connect(struct sock *sk, long *timeo_p);
int sk_stream_wait_memory(struct sock *sk, long *timeo_p);
void sk_stream_wait_close(struct sock *sk, long timeo_p);
int sk_stream_error(struct sock *sk, int flags, int err);
void sk_stream_kill_queues(struct sock *sk);
void sk_set_memalloc(struct sock *sk);
void sk_clear_memalloc(struct sock *sk);

void __sk_flush_backlog(struct sock *sk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_flush_backlog(struct sock *sk)
{
 if (__builtin_expect(!!(({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_429(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_backlog.tail) == sizeof(char) || sizeof(sk->sk_backlog.tail) == sizeof(short) || sizeof(sk->sk_backlog.tail) == sizeof(int) || sizeof(sk->sk_backlog.tail) == sizeof(long)) || sizeof(sk->sk_backlog.tail) == sizeof(long long))) __compiletime_assert_429(); } while (0); (*(const volatile typeof( _Generic((sk->sk_backlog.tail), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_backlog.tail))) *)&(sk->sk_backlog.tail)); })), 0)) {
  __sk_flush_backlog(sk);
  return true;
 }
 return false;
}

int sk_wait_data(struct sock *sk, long *timeo, const struct sk_buff *skb);

struct request_sock_ops;
struct timewait_sock_ops;
struct inet_hashinfo;
struct raw_hashinfo;
struct smc_hashinfo;
struct module;
struct sk_psock;





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_prot_clear_nulls(struct sock *sk, int size)
{
 if (__builtin_offsetof(struct sock, __sk_common.skc_node.next) != 0)
  memset(sk, 0, __builtin_offsetof(struct sock, __sk_common.skc_node.next));
 memset(&sk->__sk_common.skc_node.pprev, 0,
        size - __builtin_offsetof(struct sock, __sk_common.skc_node.pprev));
}

struct proto_accept_arg {
 int flags;
 int err;
 int is_empty;
 bool kern;
};




struct proto {
 void (*close)(struct sock *sk,
     long timeout);
 int (*pre_connect)(struct sock *sk,
     struct sockaddr *uaddr,
     int addr_len);
 int (*connect)(struct sock *sk,
     struct sockaddr *uaddr,
     int addr_len);
 int (*disconnect)(struct sock *sk, int flags);

 struct sock * (*accept)(struct sock *sk,
       struct proto_accept_arg *arg);

 int (*ioctl)(struct sock *sk, int cmd,
      int *karg);
 int (*init)(struct sock *sk);
 void (*destroy)(struct sock *sk);
 void (*shutdown)(struct sock *sk, int how);
 int (*setsockopt)(struct sock *sk, int level,
     int optname, sockptr_t optval,
     unsigned int optlen);
 int (*getsockopt)(struct sock *sk, int level,
     int optname, char *optval,
     int *option);
 void (*keepalive)(struct sock *sk, int valbool);




 int (*sendmsg)(struct sock *sk, struct msghdr *msg,
        size_t len);
 int (*recvmsg)(struct sock *sk, struct msghdr *msg,
        size_t len, int flags, int *addr_len);
 void (*splice_eof)(struct socket *sock);
 int (*bind)(struct sock *sk,
     struct sockaddr *addr, int addr_len);
 int (*bind_add)(struct sock *sk,
     struct sockaddr *addr, int addr_len);

 int (*backlog_rcv) (struct sock *sk,
      struct sk_buff *skb);
 bool (*bpf_bypass_getsockopt)(int level,
        int optname);

 void (*release_cb)(struct sock *sk);


 int (*hash)(struct sock *sk);
 void (*unhash)(struct sock *sk);
 void (*rehash)(struct sock *sk);
 int (*get_port)(struct sock *sk, unsigned short snum);
 void (*put_port)(struct sock *sk);

 int (*psock_update_sk_prot)(struct sock *sk,
       struct sk_psock *psock,
       bool restore);




 unsigned int inuse_idx;






 bool (*stream_memory_free)(const struct sock *sk, int wake);
 bool (*sock_is_readable)(struct sock *sk);

 void (*enter_memory_pressure)(struct sock *sk);
 void (*leave_memory_pressure)(struct sock *sk);
 atomic_long_t *memory_allocated;
 int *per_cpu_fw_alloc;
 struct percpu_counter *sockets_allocated;
# 1296 "../include/net/sock.h"
 unsigned long *memory_pressure;
 long *sysctl_mem;

 int *sysctl_wmem;
 int *sysctl_rmem;
 u32 sysctl_wmem_offset;
 u32 sysctl_rmem_offset;

 int max_header;
 bool no_autobind;

 struct kmem_cache *slab;
 unsigned int obj_size;
 unsigned int ipv6_pinfo_offset;
 slab_flags_t slab_flags;
 unsigned int useroffset;
 unsigned int usersize;

 unsigned int *orphan_count;

 struct request_sock_ops *rsk_prot;
 struct timewait_sock_ops *twsk_prot;

 union {
  struct inet_hashinfo *hashinfo;
  struct udp_table *udp_table;
  struct raw_hashinfo *raw_hash;
  struct smc_hashinfo *smc_hash;
 } h;

 struct module *owner;

 char name[32];

 struct list_head node;
 int (*diag_destroy)(struct sock *sk, int err);
} ;

int proto_register(struct proto *prot, int alloc_slab);
void proto_unregister(struct proto *prot);
int sock_load_diag_module(int family, int protocol);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_forward_alloc_get(const struct sock *sk)
{




 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_430(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_forward_alloc) == sizeof(char) || sizeof(sk->sk_forward_alloc) == sizeof(short) || sizeof(sk->sk_forward_alloc) == sizeof(int) || sizeof(sk->sk_forward_alloc) == sizeof(long)) || sizeof(sk->sk_forward_alloc) == sizeof(long long))) __compiletime_assert_430(); } while (0); (*(const volatile typeof( _Generic((sk->sk_forward_alloc), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_forward_alloc))) *)&(sk->sk_forward_alloc)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __sk_stream_memory_free(const struct sock *sk, int wake)
{
 if (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_431(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_wmem_queued) == sizeof(char) || sizeof(sk->sk_wmem_queued) == sizeof(short) || sizeof(sk->sk_wmem_queued) == sizeof(int) || sizeof(sk->sk_wmem_queued) == sizeof(long)) || sizeof(sk->sk_wmem_queued) == sizeof(long long))) __compiletime_assert_431(); } while (0); (*(const volatile typeof( _Generic((sk->sk_wmem_queued), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_wmem_queued))) *)&(sk->sk_wmem_queued)); }) >= ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_432(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_sndbuf) == sizeof(char) || sizeof(sk->sk_sndbuf) == sizeof(short) || sizeof(sk->sk_sndbuf) == sizeof(int) || sizeof(sk->sk_sndbuf) == sizeof(long)) || sizeof(sk->sk_sndbuf) == sizeof(long long))) __compiletime_assert_432(); } while (0); (*(const volatile typeof( _Generic((sk->sk_sndbuf), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_sndbuf))) *)&(sk->sk_sndbuf)); }))
  return false;

 return sk->__sk_common.skc_prot->stream_memory_free ?
  sk->__sk_common.skc_prot->stream_memory_free(sk, wake) : true;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_stream_memory_free(const struct sock *sk)
{
 return __sk_stream_memory_free(sk, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __sk_stream_is_writeable(const struct sock *sk, int wake)
{
 return sk_stream_wspace(sk) >= sk_stream_min_wspace(sk) &&
        __sk_stream_memory_free(sk, wake);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_stream_is_writeable(const struct sock *sk)
{
 return __sk_stream_is_writeable(sk, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_under_cgroup_hierarchy(struct sock *sk,
         struct cgroup *ancestor)
{




 return -524;

}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_sockets_allocated_dec(struct sock *sk)
{
 percpu_counter_add_batch(sk->__sk_common.skc_prot->sockets_allocated, -1,
     16);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_sockets_allocated_inc(struct sock *sk)
{
 percpu_counter_add_batch(sk->__sk_common.skc_prot->sockets_allocated, 1,
     16);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64
sk_sockets_allocated_read_positive(struct sock *sk)
{
 return percpu_counter_read_positive(sk->__sk_common.skc_prot->sockets_allocated);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
proto_sockets_allocated_sum_positive(struct proto *prot)
{
 return percpu_counter_sum_positive(prot->sockets_allocated);
}



struct prot_inuse {
 int all;
 int val[64];
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_prot_inuse_add(const struct net *net,
           const struct proto *prot, int val)
{
 do { do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->val[prot->inuse_idx])) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(net->core.prot_inuse->val[prot->inuse_idx])) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->val[prot->inuse_idx])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->val[prot->inuse_idx]))) *)(&(net->core.prot_inuse->val[prot->inuse_idx])); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->val[prot->inuse_idx])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->val[prot->inuse_idx]))) *)(&(net->core.prot_inuse->val[prot->inuse_idx])); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->val[prot->inuse_idx])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->val[prot->inuse_idx]))) *)(&(net->core.prot_inuse->val[prot->inuse_idx])); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->val[prot->inuse_idx])) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->val[prot->inuse_idx]))) *)(&(net->core.prot_inuse->val[prot->inuse_idx])); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_inuse_add(const struct net *net, int val)
{
 do { do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->all)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(net->core.prot_inuse->all)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->all)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->all))) *)(&(net->core.prot_inuse->all)); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->all)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->all))) *)(&(net->core.prot_inuse->all)); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->all)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->all))) *)(&(net->core.prot_inuse->all)); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(net->core.prot_inuse->all)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(net->core.prot_inuse->all))) *)(&(net->core.prot_inuse->all)); }); }) += val; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

int sock_prot_inuse_get(struct net *net, struct proto *proto);
int sock_inuse_get(struct net *net);
# 1447 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __sk_prot_rehash(struct sock *sk)
{
 sk->__sk_common.skc_prot->unhash(sk);
 return sk->__sk_common.skc_prot->hash(sk);
}
# 1466 "../include/net/sock.h"
struct socket_alloc {
 struct socket socket;
 struct inode vfs_inode;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct socket *SOCKET_I(struct inode *inode)
{
 return &({ void *__mptr = (void *)(inode); _Static_assert(__builtin_types_compatible_p(typeof(*(inode)), typeof(((struct socket_alloc *)0)->vfs_inode)) || __builtin_types_compatible_p(typeof(*(inode)), typeof(void)), "pointer type mismatch in container_of()"); ((struct socket_alloc *)(__mptr - __builtin_offsetof(struct socket_alloc, vfs_inode))); })->socket;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inode *SOCK_INODE(struct socket *socket)
{
 return &({ void *__mptr = (void *)(socket); _Static_assert(__builtin_types_compatible_p(typeof(*(socket)), typeof(((struct socket_alloc *)0)->socket)) || __builtin_types_compatible_p(typeof(*(socket)), typeof(void)), "pointer type mismatch in container_of()"); ((struct socket_alloc *)(__mptr - __builtin_offsetof(struct socket_alloc, socket))); })->vfs_inode;
}




int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind);
int __sk_mem_schedule(struct sock *sk, int size, int kind);
void __sk_mem_reduce_allocated(struct sock *sk, int amount);
void __sk_mem_reclaim(struct sock *sk, int amount);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long sk_prot_mem_limits(const struct sock *sk, int index)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_433(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_prot->sysctl_mem[index]) == sizeof(char) || sizeof(sk->__sk_common.skc_prot->sysctl_mem[index]) == sizeof(short) || sizeof(sk->__sk_common.skc_prot->sysctl_mem[index]) == sizeof(int) || sizeof(sk->__sk_common.skc_prot->sysctl_mem[index]) == sizeof(long)) || sizeof(sk->__sk_common.skc_prot->sysctl_mem[index]) == sizeof(long long))) __compiletime_assert_433(); } while (0); (*(const volatile typeof( _Generic((sk->__sk_common.skc_prot->sysctl_mem[index]), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->__sk_common.skc_prot->sysctl_mem[index]))) *)&(sk->__sk_common.skc_prot->sysctl_mem[index])); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_mem_pages(int amt)
{
 return (amt + (1UL << 14) - 1) >> 14;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_has_account(struct sock *sk)
{

 return !!sk->__sk_common.skc_prot->memory_allocated;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_wmem_schedule(struct sock *sk, int size)
{
 int delta;

 if (!sk_has_account(sk))
  return true;
 delta = size - sk->sk_forward_alloc;
 return delta <= 0 || __sk_mem_schedule(sk, delta, 0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
{
 int delta;

 if (!sk_has_account(sk))
  return true;
 delta = size - sk->sk_forward_alloc;
 return delta <= 0 || __sk_mem_schedule(sk, delta, 1) ||
  skb_pfmemalloc(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_unused_reserved_mem(const struct sock *sk)
{
 int unused_mem;

 if (__builtin_expect(!!(!sk->sk_reserved_mem), 1))
  return 0;

 unused_mem = sk->sk_reserved_mem - sk->sk_wmem_queued -
   atomic_read(&sk->sk_backlog.rmem_alloc);

 return unused_mem > 0 ? unused_mem : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_mem_reclaim(struct sock *sk)
{
 int reclaimable;

 if (!sk_has_account(sk))
  return;

 reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);

 if (reclaimable >= (int)(1UL << 14))
  __sk_mem_reclaim(sk, reclaimable);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_mem_reclaim_final(struct sock *sk)
{
 sk->sk_reserved_mem = 0;
 sk_mem_reclaim(sk);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_mem_charge(struct sock *sk, int size)
{
 if (!sk_has_account(sk))
  return;
 sk_forward_alloc_add(sk, -size);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_mem_uncharge(struct sock *sk, int size)
{
 if (!sk_has_account(sk))
  return;
 sk_forward_alloc_add(sk, size);
 sk_mem_reclaim(sk);
}
# 1597 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool lockdep_sock_is_held(const struct sock *sk)
{
 return lock_is_held(&(&sk->sk_lock)->dep_map) ||
        lock_is_held(&(&sk->sk_lock.slock)->dep_map);
}

void lock_sock_nested(struct sock *sk, int subclass);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void lock_sock(struct sock *sk)
{
 lock_sock_nested(sk, 0);
}

void __lock_sock(struct sock *sk);
void __release_sock(struct sock *sk);
void release_sock(struct sock *sk);
# 1621 "../include/net/sock.h"
bool __lock_sock_fast(struct sock *sk) ;
# 1636 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool lock_sock_fast(struct sock *sk)
{

 lock_acquire(&sk->sk_lock.dep_map, 0, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));

 return __lock_sock_fast(sk);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool lock_sock_fast_nested(struct sock *sk)
{
 lock_acquire(&sk->sk_lock.dep_map, 1, 0, 0, 1, ((void *)0), (unsigned long)__builtin_return_address(0));

 return __lock_sock_fast(sk);
}
# 1660 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void unlock_sock_fast(struct sock *sk, bool slow)

{
 if (slow) {
  release_sock(sk);
  (void)0;
 } else {
  lock_release(&sk->sk_lock.dep_map, (unsigned long)__builtin_return_address(0));
  spin_unlock_bh(&sk->sk_lock.slock);
 }
}

void sockopt_lock_sock(struct sock *sk);
void sockopt_release_sock(struct sock *sk);
bool sockopt_ns_capable(struct user_namespace *ns, int cap);
bool sockopt_capable(int cap);
# 1691 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_owned_by_me(const struct sock *sk)
{

 ({ bool __ret_do_once = !!(!lockdep_sock_is_held(sk) && debug_locks); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 1694, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_not_owned_by_me(const struct sock *sk)
{

 ({ bool __ret_do_once = !!(lockdep_sock_is_held(sk) && debug_locks); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 1701, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sock_owned_by_user(const struct sock *sk)
{
 sock_owned_by_me(sk);
 return sk->sk_lock.owned;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sock_owned_by_user_nocheck(const struct sock *sk)
{
 return sk->sk_lock.owned;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_release_ownership(struct sock *sk)
{
 (void)({ bool __ret_do_once = !!(!sock_owned_by_user_nocheck(sk)); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/sock.h", 1718, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 sk->sk_lock.owned = 0;


 lock_release(&sk->sk_lock.dep_map, (unsigned long)__builtin_return_address(0));
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sock_allow_reclassification(const struct sock *csk)
{
 struct sock *sk = (struct sock *)csk;

 return !sock_owned_by_user_nocheck(sk) &&
  !spin_is_locked(&sk->sk_lock.slock);
}

struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
        struct proto *prot, int kern);
void sk_free(struct sock *sk);
void sk_destruct(struct sock *sk);
struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority);
void sk_free_unlock_clone(struct sock *sk);

struct sk_buff *sock_wmalloc(struct sock *sk, unsigned long size, int force,
        gfp_t priority);
void __sock_wfree(struct sk_buff *skb);
void sock_wfree(struct sk_buff *skb);
struct sk_buff *sock_omalloc(struct sock *sk, unsigned long size,
        gfp_t priority);
void skb_orphan_partial(struct sk_buff *skb);
void sock_rfree(struct sk_buff *skb);
void sock_efree(struct sk_buff *skb);

void sock_edemux(struct sk_buff *skb);
void sock_pfree(struct sk_buff *skb);




int sk_setsockopt(struct sock *sk, int level, int optname,
    sockptr_t optval, unsigned int optlen);
int sock_setsockopt(struct socket *sock, int level, int op,
      sockptr_t optval, unsigned int optlen);
int do_sock_setsockopt(struct socket *sock, bool compat, int level,
         int optname, sockptr_t optval, int optlen);
int do_sock_getsockopt(struct socket *sock, bool compat, int level,
         int optname, sockptr_t optval, sockptr_t optlen);

int sk_getsockopt(struct sock *sk, int level, int optname,
    sockptr_t optval, sockptr_t optlen);
int sock_gettstamp(struct socket *sock, void *userstamp,
     bool timeval, bool time32);
struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
         unsigned long data_len, int noblock,
         int *errcode, int max_page_order);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *sock_alloc_send_skb(struct sock *sk,
        unsigned long size,
        int noblock, int *errcode)
{
 return sock_alloc_send_pskb(sk, size, 0, noblock, errcode, 0);
}

void *sock_kmalloc(struct sock *sk, int size, gfp_t priority);
void sock_kfree_s(struct sock *sk, void *mem, int size);
void sock_kzfree_s(struct sock *sk, void *mem, int size);
void sk_send_sigurg(struct sock *sk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_replace_proto(struct sock *sk, struct proto *proto)
{
 if (sk->sk_socket)
  clear_bit(5, &sk->sk_socket->flags);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_434(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_prot) == sizeof(char) || sizeof(sk->__sk_common.skc_prot) == sizeof(short) || sizeof(sk->__sk_common.skc_prot) == sizeof(int) || sizeof(sk->__sk_common.skc_prot) == sizeof(long)) || sizeof(sk->__sk_common.skc_prot) == sizeof(long long))) __compiletime_assert_434(); } while (0); do { *(volatile typeof(sk->__sk_common.skc_prot) *)&(sk->__sk_common.skc_prot) = (proto); } while (0); } while (0);
}

struct sockcm_cookie {
 u64 transmit_time;
 u32 mark;
 u32 tsflags;
};

static inline void sockcm_init(struct sockcm_cookie *sockc,
			       const struct sock *sk)
{
	*sockc = (struct sockcm_cookie) {
		.tsflags = READ_ONCE(sk->sk_tsflags)
	};
}

int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
       struct sockcm_cookie *sockc);
int sock_cmsg_send(struct sock *sk, struct msghdr *msg,
     struct sockcm_cookie *sockc);





int sock_no_bind(struct socket *, struct sockaddr *, int);
int sock_no_connect(struct socket *, struct sockaddr *, int, int);
int sock_no_socketpair(struct socket *, struct socket *);
int sock_no_accept(struct socket *, struct socket *, struct proto_accept_arg *);
int sock_no_getname(struct socket *, struct sockaddr *, int);
int sock_no_ioctl(struct socket *, unsigned int, unsigned long);
int sock_no_listen(struct socket *, int);
int sock_no_shutdown(struct socket *, int);
int sock_no_sendmsg(struct socket *, struct msghdr *, size_t);
int sock_no_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t len);
int sock_no_recvmsg(struct socket *, struct msghdr *, size_t, int);
int sock_no_mmap(struct file *file, struct socket *sock,
   struct vm_area_struct *vma);





int sock_common_getsockopt(struct socket *sock, int level, int optname,
      char *optval, int *optlen);
int sock_common_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
   int flags);
int sock_common_setsockopt(struct socket *sock, int level, int optname,
      sockptr_t optval, unsigned int optlen);

void sk_common_release(struct sock *sk);






void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid);




void sock_init_data(struct socket *sock, struct sock *sk);
# 1881 "../include/net/sock.h"
static inline void sock_put(struct sock *sk)
{
	if (refcount_dec_and_test(&sk->sk_refcnt))
		sk_free(sk);
}



void sock_gen_put(struct sock *sk);

int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested,
       unsigned int trim_cap, bool refcounted);
static inline int sk_receive_skb(struct sock *sk, struct sk_buff *skb,
				 const int nested)
{
	return __sk_receive_skb(sk, skb, nested, 1, true);
}

static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
{
	/* sk_tx_queue_mapping accepts only up to a 16-bit value */
	if (WARN_ON_ONCE((unsigned short)tx_queue >= USHRT_MAX))
		return;
	/* Paired with READ_ONCE() in sk_tx_queue_get() and other
	 * WRITE_ONCE() because the socket lock might not be held.
	 */
	WRITE_ONCE(sk->sk_tx_queue_mapping, tx_queue);
}



static inline void sk_tx_queue_clear(struct sock *sk)
{
	/* Paired with READ_ONCE() in sk_tx_queue_get() */
	WRITE_ONCE(sk->sk_tx_queue_mapping, NO_QUEUE_MAPPING);
}

static inline int sk_tx_queue_get(const struct sock *sk)
{
	if (sk) {
		/* Paired with WRITE_ONCE() in sk_tx_queue_clear()
		 * and sk_tx_queue_set().
		 */
		int val = READ_ONCE(sk->sk_tx_queue_mapping);

		if (val != NO_QUEUE_MAPPING)
			return val;
	}
	return -1;
}

static inline void __sk_rx_queue_set(struct sock *sk,
				     const struct sk_buff *skb,
				     bool force_set)
{
	if (skb_rx_queue_recorded(skb)) {
		u16 rx_queue = skb_get_rx_queue(skb);

		if (force_set ||
		    unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))
			WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue);
	}
}

static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)
{
	__sk_rx_queue_set(sk, skb, true);
}

static inline void sk_rx_queue_update(struct sock *sk, const struct sk_buff *skb)
{
	__sk_rx_queue_set(sk, skb, false);
}

static inline void sk_rx_queue_clear(struct sock *sk)
{
	WRITE_ONCE(sk->sk_rx_queue_mapping, NO_QUEUE_MAPPING);
}

static inline int sk_rx_queue_get(const struct sock *sk)
{
	if (sk) {
		int res = READ_ONCE(sk->sk_rx_queue_mapping);

		if (res != NO_QUEUE_MAPPING)
			return res;
	}
	return -1;
}

static inline void sk_set_socket(struct sock *sk, struct socket *sock)
{
	sk->sk_socket = sock;
}

static inline wait_queue_head_t *sk_sleep(struct sock *sk)
{
	BUILD_BUG_ON(offsetof(struct socket_wq, wait) != 0);
	return &rcu_dereference_raw(sk->sk_wq)->wait;
}







static inline void sock_orphan(struct sock *sk)
{
	write_lock_bh(&sk->sk_callback_lock);
	sock_set_flag(sk, SOCK_DEAD);
	sk_set_socket(sk, NULL);
	sk->sk_wq = NULL;
	write_unlock_bh(&sk->sk_callback_lock);
}

static inline void sock_graft(struct sock *sk, struct socket *parent)
{
	WARN_ON(parent->sk);
	write_lock_bh(&sk->sk_callback_lock);
	rcu_assign_pointer(sk->sk_wq, &parent->wq);
	parent->sk = sk;
	sk_set_socket(sk, parent);
	sk->sk_uid = SOCK_INODE(parent)->i_uid;
	security_sock_graft(sk, parent);
	write_unlock_bh(&sk->sk_callback_lock);
}

kuid_t sock_i_uid(struct sock *sk);
unsigned long __sock_i_ino(struct sock *sk);
unsigned long sock_i_ino(struct sock *sk);

static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
{
	return sk ? sk->sk_uid : make_kuid(net->user_ns, 0);
}

static inline u32 net_tx_rndhash(void)
{
	u32 v = get_random_u32();

	return v ?: 1;
}

static inline void sk_set_txhash(struct sock *sk)
{
	/* This pairs with READ_ONCE() in skb_set_hash_from_sk() */
	WRITE_ONCE(sk->sk_txhash, net_tx_rndhash());
}

static inline bool sk_rethink_txhash(struct sock *sk)
{
	if (sk->sk_txhash && sk->sk_txrehash == SOCK_TXREHASH_ENABLED) {
		sk_set_txhash(sk);
		return true;
	}
	return false;
}

static inline struct dst_entry *
__sk_dst_get(const struct sock *sk)
{
	return rcu_dereference_check(sk->sk_dst_cache,
				     lockdep_sock_is_held(sk));
}

static inline struct dst_entry *
sk_dst_get(const struct sock *sk)
{
	struct dst_entry *dst;

	rcu_read_lock();
	dst = rcu_dereference(sk->sk_dst_cache);
	if (dst && !rcuref_get(&dst->__rcuref))
		dst = NULL;
	rcu_read_unlock();
	return dst;
}

static inline void __dst_negative_advice(struct sock *sk)
{
	struct dst_entry *dst = __sk_dst_get(sk);

	if (dst && dst->ops->negative_advice)
		dst->ops->negative_advice(sk, dst);
}

static inline void dst_negative_advice(struct sock *sk)
{
	sk_rethink_txhash(sk);
	__dst_negative_advice(sk);
}

static inline void
__sk_dst_set(struct sock *sk, struct dst_entry *dst)
{
	struct dst_entry *old_dst;

	sk_tx_queue_clear(sk);
	WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
	old_dst = rcu_dereference_protected(sk->sk_dst_cache,
					    lockdep_sock_is_held(sk));
	rcu_assign_pointer(sk->sk_dst_cache, dst);
	dst_release(old_dst);
}

static inline void
sk_dst_set(struct sock *sk, struct dst_entry *dst)
{
	struct dst_entry *old_dst;

	sk_tx_queue_clear(sk);
	WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
	old_dst = unrcu_pointer(xchg(&sk->sk_dst_cache, RCU_INITIALIZER(dst)));
	dst_release(old_dst);
}

static inline void
__sk_dst_reset(struct sock *sk)
{
	__sk_dst_set(sk, NULL);
}

static inline void
sk_dst_reset(struct sock *sk)
{
	sk_dst_set(sk, NULL);
}

struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie);

struct dst_entry *sk_dst_check(struct sock *sk, u32 cookie);

static inline void sk_dst_confirm(struct sock *sk)
{
	if (!READ_ONCE(sk->sk_dst_pending_confirm))
		WRITE_ONCE(sk->sk_dst_pending_confirm, 1);
}

static inline void sock_confirm_neigh(struct sk_buff *skb, struct neighbour *n)
{
	if (skb_get_dst_pending_confirm(skb)) {
		struct sock *sk = skb->sk;

		if (sk && READ_ONCE(sk->sk_dst_pending_confirm))
			WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
		neigh_confirm(n);
	}
}

bool sk_mc_loop(const struct sock *sk);

static inline bool sk_can_gso(const struct sock *sk)
{
	return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type);
}

void sk_setup_caps(struct sock *sk, struct dst_entry *dst);

static inline void sk_gso_disable(struct sock *sk)
{
	sk->sk_gso_disabled = 1;
	sk->sk_route_caps &= ~NETIF_F_GSO_MASK;
}

static inline int skb_do_copy_data_nocache(struct sock *sk, struct sk_buff *skb,
					   struct iov_iter *from, char *to,
					   int copy, int offset)
{
	if (skb->ip_summed == CHECKSUM_NONE) {
		__wsum csum = 0;

		if (!csum_and_copy_from_iter_full(to, copy, &csum, from))
			return -EFAULT;
		skb->csum = csum_block_add(skb->csum, csum, offset);
	} else if (sk->sk_route_caps & NETIF_F_NOCACHE_COPY) {
		if (!copy_from_iter_full_nocache(to, copy, from))
			return -EFAULT;
	} else if (!copy_from_iter_full(to, copy, from))
		return -EFAULT;

	return 0;
}

static inline int skb_add_data_nocache(struct sock *sk, struct sk_buff *skb,
				       struct iov_iter *from, int copy)
{
	int err, offset = skb->len;

	err = skb_do_copy_data_nocache(sk, skb, from, skb_put(skb, copy),
				       copy, offset);
	if (err)
		__skb_trim(skb, offset);

	return err;
}

static inline int skb_copy_to_page_nocache(struct sock *sk, struct iov_iter *from,
					   struct sk_buff *skb,
					   struct page *page,
					   int off, int copy)
{
	int err;

	err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + off,
				       copy, skb->len);
	if (err)
		return err;

	skb_len_add(skb, copy);
	sk_wmem_queued_add(sk, copy);
	sk_mem_charge(sk, copy);
	return 0;
}







static inline int sk_wmem_alloc_get(const struct sock *sk)
{
	return refcount_read(&sk->sk_wmem_alloc) - 1;
}







static inline int sk_rmem_alloc_get(const struct sock *sk)
{
	return atomic_read(&sk->sk_rmem_alloc);
}







static inline bool sk_has_allocations(const struct sock *sk)
{
	return sk_wmem_alloc_get(sk) || sk_rmem_alloc_get(sk);
}
# 2268 "../include/net/sock.h"
static inline bool skwq_has_sleeper(struct socket_wq *wq)
{
	return wq && wq_has_sleeper(&wq->wait);
}
# 2281 "../include/net/sock.h"
static inline void sock_poll_wait(struct file *filp, struct socket *sock,
				  poll_table *p)
{
	if (!poll_does_not_wait(p)) {
		poll_wait(filp, &sock->wq.wait, p);
		/* We need to be sure we are in sync with the
		 * socket flags modification.
		 *
		 * This memory barrier is paired in the wq_has_sleeper.
		 */
		smp_mb();
	}
}

static inline void skb_set_hash_from_sk(struct sk_buff *skb, struct sock *sk)
{
	/* This pairs with WRITE_ONCE() in sk_set_txhash() */
	u32 txhash = READ_ONCE(sk->sk_txhash);

	if (txhash) {
		skb->l4_hash = 1;
		skb->hash = txhash;
	}
}

void skb_set_owner_w(struct sk_buff *skb, struct sock *sk);
# 2316 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
{
 skb_orphan(skb);
 skb->sk = sk;
 skb->destructor = sock_rfree;
 atomic_add(skb->truesize, &sk->sk_backlog.rmem_alloc);
 sk_mem_charge(sk, skb->truesize);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__warn_unused_result__)) bool skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
{
 if (sk && refcount_inc_not_zero(&sk->__sk_common.skc_refcnt)) {
  skb_orphan(skb);
  skb->destructor = sock_efree;
  skb->sk = sk;
  return true;
 }
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *skb_clone_and_charge_r(struct sk_buff *skb, struct sock *sk)
{
 skb = skb_clone(skb, sk_gfp_mask(sk, ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT))))));
 if (skb) {
  if (sk_rmem_schedule(sk, skb, skb->truesize)) {
   skb_set_owner_r(skb, sk);
   return skb;
  }
  __kfree_skb(skb);
 }
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_prepare_for_gro(struct sk_buff *skb)
{
 if (skb->destructor != sock_wfree) {
  skb_orphan(skb);
  return;
 }
 skb->slow_gro = 1;
}

void sk_reset_timer(struct sock *sk, struct timer_list *timer,
      unsigned long expires);

void sk_stop_timer(struct sock *sk, struct timer_list *timer);

void sk_stop_timer_sync(struct sock *sk, struct timer_list *timer);

int __sk_queue_drop_skb(struct sock *sk, struct sk_buff_head *sk_queue,
   struct sk_buff *skb, unsigned int flags,
   void (*destructor)(struct sock *sk,
        struct sk_buff *skb));
int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);

int sock_queue_rcv_skb_reason(struct sock *sk, struct sk_buff *skb,
         enum skb_drop_reason *reason);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
 return sock_queue_rcv_skb_reason(sk, skb, ((void *)0));
}

int sock_queue_err_skb(struct sock *sk, struct sk_buff *skb);
struct sk_buff *sock_dequeue_err_skb(struct sock *sk);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sock_error(struct sock *sk)
{
 int err;




 if (__builtin_expect(!!(({ __kcsan_disable_current(); __auto_type __v = (!sk->sk_err); __kcsan_enable_current(); __v; })), 1))
  return 0;

 err = ({ typeof(&sk->sk_err) __ai_ptr = (&sk->sk_err); do { } while (0); instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); ((__typeof__(*(__ai_ptr)))__arch_xchg((unsigned long)(0), (__ai_ptr), sizeof(*(__ai_ptr)))); });
 return -err;
}

void sk_error_report(struct sock *sk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long sock_wspace(struct sock *sk)
{
 int amt = 0;

 if (!(sk->sk_shutdown & 2)) {
  amt = sk->sk_sndbuf - refcount_read(&sk->sk_wmem_alloc);
  if (amt < 0)
   amt = 0;
 }
 return amt;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_set_bit(int nr, struct sock *sk)
{
 if ((nr == 0 || nr == 1) &&
     !sock_flag(sk, SOCK_FASYNC))
  return;

 set_bit(nr, &sk->sk_wq_raw->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_clear_bit(int nr, struct sock *sk)
{
 if ((nr == 0 || nr == 1) &&
     !sock_flag(sk, SOCK_FASYNC))
  return;

 clear_bit(nr, &sk->sk_wq_raw->flags);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_wake_async(const struct sock *sk, int how, int band)
{
 if (sock_flag(sk, SOCK_FASYNC)) {
  rcu_read_lock();
  sock_wake_async(({ typeof(*(sk->sk_wq)) *__UNIQUE_ID_rcu463 = (typeof(*(sk->sk_wq)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_464(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((sk->sk_wq)) == sizeof(char) || sizeof((sk->sk_wq)) == sizeof(short) || sizeof((sk->sk_wq)) == sizeof(int) || sizeof((sk->sk_wq)) == sizeof(long)) || sizeof((sk->sk_wq)) == sizeof(long long))) __compiletime_assert_464(); } while (0); (*(const volatile typeof( _Generic(((sk->sk_wq)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((sk->sk_wq)))) *)&((sk->sk_wq))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/sock.h", 2440, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(sk->sk_wq)) *)(__UNIQUE_ID_rcu463)); }), how, band);
  rcu_read_unlock();
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_wake_async_rcu(const struct sock *sk, int how, int band)
{
 if (__builtin_expect(!!(sock_flag(sk, SOCK_FASYNC)), 0))
  sock_wake_async(({ typeof(*(sk->sk_wq)) *__UNIQUE_ID_rcu465 = (typeof(*(sk->sk_wq)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_466(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((sk->sk_wq)) == sizeof(char) || sizeof((sk->sk_wq)) == sizeof(short) || sizeof((sk->sk_wq)) == sizeof(int) || sizeof((sk->sk_wq)) == sizeof(long)) || sizeof((sk->sk_wq)) == sizeof(long long))) __compiletime_assert_466(); } while (0); (*(const volatile typeof( _Generic(((sk->sk_wq)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((sk->sk_wq)))) *)&((sk->sk_wq))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/sock.h", 2448, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(sk->sk_wq)) *)(__UNIQUE_ID_rcu465)); }), how, band);
}
# 2461 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_stream_moderate_sndbuf(struct sock *sk)
{
 u32 val;

 if (sk->sk_userlocks & 1)
  return;

 val = __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((sk->sk_sndbuf) - (sk->sk_wmem_queued >> 1)) * 0l)) : (int *)8))), ((sk->sk_sndbuf) < (sk->sk_wmem_queued >> 1) ? (sk->sk_sndbuf) : (sk->sk_wmem_queued >> 1)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(sk->sk_sndbuf))(-1)) < ( typeof(sk->sk_sndbuf))1)) * 0l)) : (int *)8))), (((typeof(sk->sk_sndbuf))(-1)) < ( typeof(sk->sk_sndbuf))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(sk->sk_wmem_queued >> 1))(-1)) < ( typeof(sk->sk_wmem_queued >> 1))1)) * 0l)) : (int *)8))), (((typeof(sk->sk_wmem_queued >> 1))(-1)) < ( typeof(sk->sk_wmem_queued >> 1))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((sk->sk_sndbuf) + 0))(-1)) < ( typeof((sk->sk_sndbuf) + 0))1)) * 0l)) : (int *)8))), (((typeof((sk->sk_sndbuf) + 0))(-1)) < ( typeof((sk->sk_sndbuf) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((sk->sk_wmem_queued >> 1) + 0))(-1)) < ( typeof((sk->sk_wmem_queued >> 1) + 0))1)) * 0l)) : (int *)8))), (((typeof((sk->sk_wmem_queued >> 1) + 0))(-1)) < ( typeof((sk->sk_wmem_queued >> 1) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(sk->sk_sndbuf) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(sk->sk_sndbuf))(-1)) < ( typeof(sk->sk_sndbuf))1)) * 0l)) : (int *)8))), (((typeof(sk->sk_sndbuf))(-1)) < ( typeof(sk->sk_sndbuf))1), 0), sk->sk_sndbuf, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(sk->sk_wmem_queued >> 1) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? 
((void *)((long)((((typeof(sk->sk_wmem_queued >> 1))(-1)) < ( typeof(sk->sk_wmem_queued >> 1))1)) * 0l)) : (int *)8))), (((typeof(sk->sk_wmem_queued >> 1))(-1)) < ( typeof(sk->sk_wmem_queued >> 1))1), 0), sk->sk_wmem_queued >> 1, -1) >= 0)), "min" "(" "sk->sk_sndbuf" ", " "sk->sk_wmem_queued >> 1" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_467 = (sk->sk_sndbuf); __auto_type __UNIQUE_ID_y_468 = (sk->sk_wmem_queued >> 1); ((__UNIQUE_ID_x_467) < (__UNIQUE_ID_y_468) ? (__UNIQUE_ID_x_467) : (__UNIQUE_ID_y_468)); }); }));
 val = ({ u32 __UNIQUE_ID_x_469 = (val); u32 __UNIQUE_ID_y_470 = (sk_unused_reserved_mem(sk)); ((__UNIQUE_ID_x_469) > (__UNIQUE_ID_y_470) ? (__UNIQUE_ID_x_469) : (__UNIQUE_ID_y_470)); });

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_473(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_sndbuf) == sizeof(char) || sizeof(sk->sk_sndbuf) == sizeof(short) || sizeof(sk->sk_sndbuf) == sizeof(int) || sizeof(sk->sk_sndbuf) == sizeof(long)) || sizeof(sk->sk_sndbuf) == sizeof(long long))) __compiletime_assert_473(); } while (0); do { *(volatile typeof(sk->sk_sndbuf) *)&(sk->sk_sndbuf) = (({ u32 __UNIQUE_ID_x_471 = (val); u32 __UNIQUE_ID_y_472 = (((2048 + ((((sizeof(struct sk_buff))) + ((__typeof__((sizeof(struct sk_buff))))(((1 << (5)))) - 1)) & ~((__typeof__((sizeof(struct sk_buff))))(((1 << (5)))) - 1))) * 2)); ((__UNIQUE_ID_x_471) > (__UNIQUE_ID_y_472) ? (__UNIQUE_ID_x_471) : (__UNIQUE_ID_y_472)); })); } while (0); } while (0);
}
# 2490 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page_frag *sk_page_frag(struct sock *sk)
{
 if (sk->sk_use_task_frag)
  return &(__current_thread_info->task)->task_frag;

 return &sk->sk_frag;
}

bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sock_writeable(const struct sock *sk)
{
 return refcount_read(&sk->sk_wmem_alloc) < (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_474(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_sndbuf) == sizeof(char) || sizeof(sk->sk_sndbuf) == sizeof(short) || sizeof(sk->sk_sndbuf) == sizeof(int) || sizeof(sk->sk_sndbuf) == sizeof(long)) || sizeof(sk->sk_sndbuf) == sizeof(long long))) __compiletime_assert_474(); } while (0); (*(const volatile typeof( _Generic((sk->sk_sndbuf), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_sndbuf))) *)&(sk->sk_sndbuf)); }) >> 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t gfp_any(void)
{
 return ((preempt_count() & (((1UL << (8))-1) << (0 + 8)))) ? ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) : ((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) gfp_t gfp_memcg_charge(void)
{
 return ((preempt_count() & (((1UL << (8))-1) << (0 + 8)))) ? ((( gfp_t)((((1UL))) << (___GFP_HIGH_BIT)))|(( gfp_t)((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) : ((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long sock_rcvtimeo(const struct sock *sk, bool noblock)
{
 return noblock ? 0 : sk->sk_rcvtimeo;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) long sock_sndtimeo(const struct sock *sk, bool noblock)
{
 return noblock ? 0 : sk->sk_sndtimeo;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sock_rcvlowat(const struct sock *sk, int waitall, int len)
{
 int v = waitall ? len : ({ int __UNIQUE_ID_x_476 = (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_475(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_rcvlowat) == sizeof(char) || sizeof(sk->sk_rcvlowat) == sizeof(short) || sizeof(sk->sk_rcvlowat) == sizeof(int) || sizeof(sk->sk_rcvlowat) == sizeof(long)) || sizeof(sk->sk_rcvlowat) == sizeof(long long))) __compiletime_assert_475(); } while (0); (*(const volatile typeof( _Generic((sk->sk_rcvlowat), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_rcvlowat))) *)&(sk->sk_rcvlowat)); })); int __UNIQUE_ID_y_477 = (len); ((__UNIQUE_ID_x_476) < (__UNIQUE_ID_y_477) ? (__UNIQUE_ID_x_476) : (__UNIQUE_ID_y_477)); });

 return v ?: 1;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sock_intr_errno(long timeo)
{
 return timeo == ((long)(~0UL >> 1)) ? -512 : -4;
}

struct sock_skb_cb {
 u32 dropcount;
};
# 2560 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
sock_skb_set_dropcount(const struct sock *sk, struct sk_buff *skb)
{
 ((struct sock_skb_cb *)((skb)->cb + ((sizeof((((struct sk_buff *)0)->cb)) - sizeof(struct sock_skb_cb)))))->dropcount = sock_flag(sk, SOCK_RXQ_OVFL) ?
      atomic_read(&sk->sk_drops) : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_drops_add(struct sock *sk, const struct sk_buff *skb)
{
 int segs = ({ u16 __UNIQUE_ID_x_478 = (1); u16 __UNIQUE_ID_y_479 = (((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_segs); ((__UNIQUE_ID_x_478) > (__UNIQUE_ID_y_479) ? (__UNIQUE_ID_x_478) : (__UNIQUE_ID_y_479)); });

 atomic_add(segs, &sk->sk_drops);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) ktime_t sock_read_timestamp(struct sock *sk)
{

 unsigned int seq;
 ktime_t kt;

 do {
  seq = read_seqbegin(&sk->sk_stamp_seq);
  kt = sk->sk_stamp;
 } while (read_seqretry(&sk->sk_stamp_seq, seq));

 return kt;



}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_write_timestamp(struct sock *sk, ktime_t kt)
{

 write_seqlock(&sk->sk_stamp_seq);
 sk->sk_stamp = kt;
 write_sequnlock(&sk->sk_stamp_seq);



}

void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk,
      struct sk_buff *skb);
void __sock_recv_wifi_status(struct msghdr *msg, struct sock *sk,
        struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
sock_recv_timestamp(struct msghdr *msg, struct sock *sk, struct sk_buff *skb)
{
 struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
 u32 tsflags = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_480(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_tsflags) == sizeof(char) || sizeof(sk->sk_tsflags) == sizeof(short) || sizeof(sk->sk_tsflags) == sizeof(int) || sizeof(sk->sk_tsflags) == sizeof(long)) || sizeof(sk->sk_tsflags) == sizeof(long long))) __compiletime_assert_480(); } while (0); (*(const volatile typeof( _Generic((sk->sk_tsflags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_tsflags))) *)&(sk->sk_tsflags)); });
 ktime_t kt = skb->tstamp;






 if (sock_flag(sk, SOCK_RCVTSTAMP) ||
     (tsflags & SOF_TIMESTAMPING_RX_SOFTWARE) ||
     (kt && tsflags & SOF_TIMESTAMPING_SOFTWARE) ||
     (hwtstamps->hwtstamp &&
      (tsflags & SOF_TIMESTAMPING_RAW_HARDWARE)))
  __sock_recv_timestamp(msg, sk, skb);
 else
  sock_write_timestamp(sk, kt);

 if (sock_flag(sk, SOCK_WIFI_STATUS) && skb_wifi_acked_valid(skb))
  __sock_recv_wifi_status(msg, sk, skb);
}

void __sock_recv_cmsgs(struct msghdr *msg, struct sock *sk,
         struct sk_buff *skb);


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_recv_cmsgs(struct msghdr *msg, struct sock *sk,
       struct sk_buff *skb)
{






 if (sk->__sk_common.skc_flags & ((1UL << SOCK_RXQ_OVFL) | (1UL << SOCK_RCVTSTAMP) | (1UL << SOCK_RCVMARK)) ||
     ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_481(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_tsflags) == sizeof(char) || sizeof(sk->sk_tsflags) == sizeof(short) || sizeof(sk->sk_tsflags) == sizeof(int) || sizeof(sk->sk_tsflags) == sizeof(long)) || sizeof(sk->sk_tsflags) == sizeof(long long))) __compiletime_assert_481(); } while (0); (*(const volatile typeof( _Generic((sk->sk_tsflags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_tsflags))) *)&(sk->sk_tsflags)); }) & (SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_RAW_HARDWARE))
  __sock_recv_cmsgs(msg, sk, skb);
 else if (__builtin_expect(!!(sock_flag(sk, SOCK_TIMESTAMP)), 0))
  sock_write_timestamp(sk, skb->tstamp);
 else if (__builtin_expect(!!(sock_read_timestamp(sk) == (-1L * 1000000000L)), 0))
  sock_write_timestamp(sk, 0);
}

void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags);
# 2665 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void _sock_tx_timestamp(struct sock *sk, __u16 tsflags,
          __u8 *tx_flags, __u32 *tskey)
{
 if (__builtin_expect(!!(tsflags), 0)) {
  __sock_tx_timestamp(tsflags, tx_flags);
  if (tsflags & SOF_TIMESTAMPING_OPT_ID && tskey &&
      tsflags & (SOF_TIMESTAMPING_TX_HARDWARE | SOF_TIMESTAMPING_TX_SOFTWARE | SOF_TIMESTAMPING_TX_SCHED | SOF_TIMESTAMPING_TX_ACK))
   *tskey = atomic_inc_return(&sk->sk_tskey) - 1;
 }
 if (__builtin_expect(!!(sock_flag(sk, SOCK_WIFI_STATUS)), 0))
  *tx_flags |= SKBTX_WIFI_STATUS;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sock_tx_timestamp(struct sock *sk, __u16 tsflags,
         __u8 *tx_flags)
{
 _sock_tx_timestamp(sk, tsflags, tx_flags, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void skb_setup_tx_timestamp(struct sk_buff *skb, __u16 tsflags)
{
 _sock_tx_timestamp(skb->sk, tsflags, &((struct skb_shared_info *)(skb_end_pointer(skb)))->tx_flags,
      &((struct skb_shared_info *)(skb_end_pointer(skb)))->tskey);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_is_inet(const struct sock *sk)
{
 int family = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_482(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_family) == sizeof(char) || sizeof(sk->__sk_common.skc_family) == sizeof(short) || sizeof(sk->__sk_common.skc_family) == sizeof(int) || sizeof(sk->__sk_common.skc_family) == sizeof(long)) || sizeof(sk->__sk_common.skc_family) == sizeof(long long))) __compiletime_assert_482(); } while (0); (*(const volatile typeof( _Generic((sk->__sk_common.skc_family), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->__sk_common.skc_family))) *)&(sk->__sk_common.skc_family)); });

 return family == 2 || family == 10;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_is_tcp(const struct sock *sk)
{
 return sk_is_inet(sk) &&
        sk->sk_type == SOCK_STREAM &&
        sk->sk_protocol == IPPROTO_TCP;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_is_udp(const struct sock *sk)
{
 return sk_is_inet(sk) &&
        sk->sk_type == SOCK_DGRAM &&
        sk->sk_protocol == IPPROTO_UDP;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_is_stream_unix(const struct sock *sk)
{
 return sk->__sk_common.skc_family == 1 && sk->sk_type == SOCK_STREAM;
}
# 2724 "../include/net/sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_eat_skb(struct sock *sk, struct sk_buff *skb)
{
 __skb_unlink(skb, &sk->sk_receive_queue);
 __kfree_skb(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
skb_sk_is_prefetched(struct sk_buff *skb)
{

 return skb->destructor == sock_pfree;



}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_fullsock(const struct sock *sk)
{
 return (1 << sk->__sk_common.skc_state) & ~(TCPF_TIME_WAIT | TCPF_NEW_SYN_RECV);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
sk_is_refcounted(struct sock *sk)
{

 return !sk_fullsock(sk) || !sock_flag(sk, SOCK_RCU_FREE);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *sk_validate_xmit_skb(struct sk_buff *skb,
         struct net_device *dev)
{

 struct sock *sk = skb->sk;

 if (sk && sk_fullsock(sk) && sk->sk_validate_xmit_skb) {
  skb = sk->sk_validate_xmit_skb(sk, dev, skb);
 } else if (__builtin_expect(!!(skb_is_decrypted(skb)), 0)) {
  ({ if (0) _printk("\001" "4" "unencrypted skb with no associated socket - dropping\n"); 0; });
  kfree_skb(skb);
  skb = ((void *)0);
 }


 return skb;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_listener(const struct sock *sk)
{
 return (1 << sk->__sk_common.skc_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV);
}

void sock_enable_timestamp(struct sock *sk, enum sock_flags flag);
int sock_recv_errqueue(struct sock *sk, struct msghdr *msg, int len, int level,
         int type);

bool sk_ns_capable(const struct sock *sk,
     struct user_namespace *user_ns, int cap);
bool sk_capable(const struct sock *sk, int cap);
bool sk_net_capable(const struct sock *sk, int cap);

void sk_get_meminfo(const struct sock *sk, u32 *meminfo);
# 2806 "../include/net/sock.h"
extern __u32 sysctl_wmem_max;
extern __u32 sysctl_rmem_max;

extern int sysctl_tstamp_allow_data;

extern __u32 sysctl_wmem_default;
extern __u32 sysctl_rmem_default;


extern struct static_key_false net_high_order_alloc_disable_key;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_get_wmem0(const struct sock *sk, const struct proto *proto)
{

 if (proto->sysctl_wmem_offset)
  return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_483(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)) == sizeof(char) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)) == sizeof(short) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)) == sizeof(int) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)) == sizeof(long)) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)) == sizeof(long long))) __compiletime_assert_483(); } while (0); (*(const volatile typeof( _Generic((*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset)))) *)&(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset))); });

 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_484(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*proto->sysctl_wmem) == sizeof(char) || sizeof(*proto->sysctl_wmem) == sizeof(short) || sizeof(*proto->sysctl_wmem) == sizeof(int) || sizeof(*proto->sysctl_wmem) == sizeof(long)) || sizeof(*proto->sysctl_wmem) == sizeof(long long))) __compiletime_assert_484(); } while (0); (*(const volatile typeof( _Generic((*proto->sysctl_wmem), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*proto->sysctl_wmem))) *)&(*proto->sysctl_wmem)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int sk_get_rmem0(const struct sock *sk, const struct proto *proto)
{

 if (proto->sysctl_rmem_offset)
  return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_485(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)) == sizeof(char) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)) == sizeof(short) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)) == sizeof(int) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)) == sizeof(long)) || sizeof(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)) == sizeof(long long))) __compiletime_assert_485(); } while (0); (*(const volatile typeof( _Generic((*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset)))) *)&(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset))); });

 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_486(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*proto->sysctl_rmem) == sizeof(char) || sizeof(*proto->sysctl_rmem) == sizeof(short) || sizeof(*proto->sysctl_rmem) == sizeof(int) || sizeof(*proto->sysctl_rmem) == sizeof(long)) || sizeof(*proto->sysctl_rmem) == sizeof(long long))) __compiletime_assert_486(); } while (0); (*(const volatile typeof( _Generic((*proto->sysctl_rmem), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*proto->sysctl_rmem))) *)&(*proto->sysctl_rmem)); });
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void sk_pacing_shift_update(struct sock *sk, int val)
{
 if (!sk || !sk_fullsock(sk) || ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_487(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_pacing_shift) == sizeof(char) || sizeof(sk->sk_pacing_shift) == sizeof(short) || sizeof(sk->sk_pacing_shift) == sizeof(int) || sizeof(sk->sk_pacing_shift) == sizeof(long)) || sizeof(sk->sk_pacing_shift) == sizeof(long long))) __compiletime_assert_487(); } while (0); (*(const volatile typeof( _Generic((sk->sk_pacing_shift), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_pacing_shift))) *)&(sk->sk_pacing_shift)); }) == val)
  return;
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_488(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_pacing_shift) == sizeof(char) || sizeof(sk->sk_pacing_shift) == sizeof(short) || sizeof(sk->sk_pacing_shift) == sizeof(int) || sizeof(sk->sk_pacing_shift) == sizeof(long)) || sizeof(sk->sk_pacing_shift) == sizeof(long long))) __compiletime_assert_488(); } while (0); do { *(volatile typeof(sk->sk_pacing_shift) *)&(sk->sk_pacing_shift) = (val); } while (0); } while (0);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_dev_equal_l3scope(struct sock *sk, int dif)
{
 int bound_dev_if = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_489(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(char) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(short) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(int) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(long)) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(long long))) __compiletime_assert_489(); } while (0); (*(const volatile typeof( _Generic((sk->__sk_common.skc_bound_dev_if), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->__sk_common.skc_bound_dev_if))) *)&(sk->__sk_common.skc_bound_dev_if)); });
 int mdif;

 if (!bound_dev_if || bound_dev_if == dif)
  return true;

 mdif = l3mdev_master_ifindex_by_index(sock_net(sk), dif);
 if (mdif && mdif == bound_dev_if)
  return true;

 return false;
}

void sock_def_readable(struct sock *sk);

int sock_bindtoindex(struct sock *sk, int ifindex, bool lock_sk);
void sock_set_timestamp(struct sock *sk, int optname, bool valbool);
int sock_set_timestamping(struct sock *sk, int optname,
     struct so_timestamping timestamping);

void sock_enable_timestamps(struct sock *sk);
void sock_no_linger(struct sock *sk);
void sock_set_keepalive(struct sock *sk);
void sock_set_priority(struct sock *sk, u32 priority);
void sock_set_rcvbuf(struct sock *sk, int val);
void sock_set_mark(struct sock *sk, u32 val);
void sock_set_reuseaddr(struct sock *sk);
void sock_set_reuseport(struct sock *sk);
void sock_set_sndtimeo(struct sock *sk, s64 secs);

int sock_bind_add(struct sock *sk, struct sockaddr *addr, int addr_len);

int sock_get_timeout(long timeo, void *optval, bool old_timeval);
int sock_copy_user_timeval(struct __kernel_sock_timeval *tv,
      sockptr_t optval, int optlen, bool old_timeval);

int sock_ioctl_inout(struct sock *sk, unsigned int cmd,
       void *arg, void *karg, size_t size);
int sk_ioctl(struct sock *sk, unsigned int cmd, void *arg);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool sk_is_readable(struct sock *sk)
{
 if (sk->__sk_common.skc_prot->sock_is_readable)
  return sk->__sk_common.skc_prot->sock_is_readable(sk);
 return false;
}
# 20 "../include/linux/tcp.h" 2
# 1 "../include/net/inet_connection_sock.h" 1
# 21 "../include/net/inet_connection_sock.h"
# 1 "../include/net/inet_sock.h" 1
# 18 "../include/net/inet_sock.h"
# 1 "../include/linux/jhash.h" 1
# 27 "../include/linux/jhash.h"
# 1 "../include/linux/unaligned/packed_struct.h" 1





struct __una_u16 { u16 x; } __attribute__((__packed__));
struct __una_u32 { u32 x; } __attribute__((__packed__));
struct __una_u64 { u64 x; } __attribute__((__packed__));

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 __get_unaligned_cpu16(const void *p)
{
 const struct __una_u16 *ptr = (const struct __una_u16 *)p;
 return ptr->x;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __get_unaligned_cpu32(const void *p)
{
 const struct __una_u32 *ptr = (const struct __una_u32 *)p;
 return ptr->x;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 __get_unaligned_cpu64(const void *p)
{
 const struct __una_u64 *ptr = (const struct __una_u64 *)p;
 return ptr->x;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __put_unaligned_cpu16(u16 val, void *p)
{
 struct __una_u16 *ptr = (struct __una_u16 *)p;
 ptr->x = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __put_unaligned_cpu32(u32 val, void *p)
{
 struct __una_u32 *ptr = (struct __una_u32 *)p;
 ptr->x = val;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __put_unaligned_cpu64(u64 val, void *p)
{
 struct __una_u64 *ptr = (struct __una_u64 *)p;
 ptr->x = val;
}
# 28 "../include/linux/jhash.h" 2
# 70 "../include/linux/jhash.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 jhash(const void *key, u32 length, u32 initval)
{
 u32 a, b, c;
 const u8 *k = key;


 a = b = c = 0xdeadbeef + length + initval;


 while (length > 12) {
  a += __get_unaligned_cpu32(k);
  b += __get_unaligned_cpu32(k + 4);
  c += __get_unaligned_cpu32(k + 8);
  { a -= c; a ^= rol32(c, 4); c += b; b -= a; b ^= rol32(a, 6); a += c; c -= b; c ^= rol32(b, 8); b += a; a -= c; a ^= rol32(c, 16); c += b; b -= a; b ^= rol32(a, 19); a += c; c -= b; c ^= rol32(b, 4); b += a; };
  length -= 12;
  k += 12;
 }

 switch (length) {
 case 12: c += (u32)k[11]<<24; __attribute__((__fallthrough__));
 case 11: c += (u32)k[10]<<16; __attribute__((__fallthrough__));
 case 10: c += (u32)k[9]<<8; __attribute__((__fallthrough__));
 case 9: c += k[8]; __attribute__((__fallthrough__));
 case 8: b += (u32)k[7]<<24; __attribute__((__fallthrough__));
 case 7: b += (u32)k[6]<<16; __attribute__((__fallthrough__));
 case 6: b += (u32)k[5]<<8; __attribute__((__fallthrough__));
 case 5: b += k[4]; __attribute__((__fallthrough__));
 case 4: a += (u32)k[3]<<24; __attribute__((__fallthrough__));
 case 3: a += (u32)k[2]<<16; __attribute__((__fallthrough__));
 case 2: a += (u32)k[1]<<8; __attribute__((__fallthrough__));
 case 1: a += k[0];
   { c ^= b; c -= rol32(b, 14); a ^= c; a -= rol32(c, 11); b ^= a; b -= rol32(a, 25); c ^= b; c -= rol32(b, 16); a ^= c; a -= rol32(c, 4); b ^= a; b -= rol32(a, 14); c ^= b; c -= rol32(b, 24); };
   break;
 case 0:
  break;
 }

 return c;
}
# 117 "../include/linux/jhash.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 jhash2(const u32 *k, u32 length, u32 initval)
{
 u32 a, b, c;


 a = b = c = 0xdeadbeef + (length<<2) + initval;


 while (length > 3) {
  a += k[0];
  b += k[1];
  c += k[2];
  { a -= c; a ^= rol32(c, 4); c += b; b -= a; b ^= rol32(a, 6); a += c; c -= b; c ^= rol32(b, 8); b += a; a -= c; a ^= rol32(c, 16); c += b; b -= a; b ^= rol32(a, 19); a += c; c -= b; c ^= rol32(b, 4); b += a; };
  length -= 3;
  k += 3;
 }


 switch (length) {
 case 3: c += k[2]; __attribute__((__fallthrough__));
 case 2: b += k[1]; __attribute__((__fallthrough__));
 case 1: a += k[0];
  { c ^= b; c -= rol32(b, 14); a ^= c; a -= rol32(c, 11); b ^= a; b -= rol32(a, 25); c ^= b; c -= rol32(b, 16); a ^= c; a -= rol32(c, 4); b ^= a; b -= rol32(a, 14); c ^= b; c -= rol32(b, 24); };
  break;
 case 0:
  break;
 }

 return c;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __jhash_nwords(u32 a, u32 b, u32 c, u32 initval)
{
 a += initval;
 b += initval;
 c += initval;

 { c ^= b; c -= rol32(b, 14); a ^= c; a -= rol32(c, 11); b ^= a; b -= rol32(a, 25); c ^= b; c -= rol32(b, 16); a ^= c; a -= rol32(c, 4); b ^= a; b -= rol32(a, 14); c ^= b; c -= rol32(b, 24); };

 return c;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 jhash_3words(u32 a, u32 b, u32 c, u32 initval)
{
 return __jhash_nwords(a, b, c, initval + 0xdeadbeef + (3 << 2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 jhash_2words(u32 a, u32 b, u32 initval)
{
 return __jhash_nwords(a, b, 0, initval + 0xdeadbeef + (2 << 2));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 jhash_1word(u32 a, u32 initval)
{
 return __jhash_nwords(a, 0, 0, initval + 0xdeadbeef + (1 << 2));
}
# 19 "../include/net/inet_sock.h" 2




# 1 "../include/net/request_sock.h" 1
# 21 "../include/net/request_sock.h"
# 1 "../include/net/rstreason.h" 1





# 1 "../include/uapi/linux/mptcp.h" 1
# 29 "../include/uapi/linux/mptcp.h"
# 1 "../include/uapi/linux/mptcp_pm.h" 1
# 41 "../include/uapi/linux/mptcp_pm.h"
enum mptcp_event_type {
 MPTCP_EVENT_UNSPEC,
 MPTCP_EVENT_CREATED,
 MPTCP_EVENT_ESTABLISHED,
 MPTCP_EVENT_CLOSED,
 MPTCP_EVENT_ANNOUNCED = 6,
 MPTCP_EVENT_REMOVED,
 MPTCP_EVENT_SUB_ESTABLISHED = 10,
 MPTCP_EVENT_SUB_CLOSED,
 MPTCP_EVENT_SUB_PRIORITY = 13,
 MPTCP_EVENT_LISTENER_CREATED = 15,
 MPTCP_EVENT_LISTENER_CLOSED,
};

enum {
 MPTCP_PM_ADDR_ATTR_UNSPEC,
 MPTCP_PM_ADDR_ATTR_FAMILY,
 MPTCP_PM_ADDR_ATTR_ID,
 MPTCP_PM_ADDR_ATTR_ADDR4,
 MPTCP_PM_ADDR_ATTR_ADDR6,
 MPTCP_PM_ADDR_ATTR_PORT,
 MPTCP_PM_ADDR_ATTR_FLAGS,
 MPTCP_PM_ADDR_ATTR_IF_IDX,

 __MPTCP_PM_ADDR_ATTR_MAX
};


enum {
 MPTCP_SUBFLOW_ATTR_UNSPEC,
 MPTCP_SUBFLOW_ATTR_TOKEN_REM,
 MPTCP_SUBFLOW_ATTR_TOKEN_LOC,
 MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ,
 MPTCP_SUBFLOW_ATTR_MAP_SEQ,
 MPTCP_SUBFLOW_ATTR_MAP_SFSEQ,
 MPTCP_SUBFLOW_ATTR_SSN_OFFSET,
 MPTCP_SUBFLOW_ATTR_MAP_DATALEN,
 MPTCP_SUBFLOW_ATTR_FLAGS,
 MPTCP_SUBFLOW_ATTR_ID_REM,
 MPTCP_SUBFLOW_ATTR_ID_LOC,
 MPTCP_SUBFLOW_ATTR_PAD,

 __MPTCP_SUBFLOW_ATTR_MAX
};


enum {
 MPTCP_PM_ENDPOINT_ADDR = 1,

 __MPTCP_PM_ENDPOINT_MAX
};


enum {
 MPTCP_PM_ATTR_UNSPEC,
 MPTCP_PM_ATTR_ADDR,
 MPTCP_PM_ATTR_RCV_ADD_ADDRS,
 MPTCP_PM_ATTR_SUBFLOWS,
 MPTCP_PM_ATTR_TOKEN,
 MPTCP_PM_ATTR_LOC_ID,
 MPTCP_PM_ATTR_ADDR_REMOTE,

 __MPTCP_ATTR_AFTER_LAST
};


enum mptcp_event_attr {
 MPTCP_ATTR_UNSPEC,
 MPTCP_ATTR_TOKEN,
 MPTCP_ATTR_FAMILY,
 MPTCP_ATTR_LOC_ID,
 MPTCP_ATTR_REM_ID,
 MPTCP_ATTR_SADDR4,
 MPTCP_ATTR_SADDR6,
 MPTCP_ATTR_DADDR4,
 MPTCP_ATTR_DADDR6,
 MPTCP_ATTR_SPORT,
 MPTCP_ATTR_DPORT,
 MPTCP_ATTR_BACKUP,
 MPTCP_ATTR_ERROR,
 MPTCP_ATTR_FLAGS,
 MPTCP_ATTR_TIMEOUT,
 MPTCP_ATTR_IF_IDX,
 MPTCP_ATTR_RESET_REASON,
 MPTCP_ATTR_RESET_FLAGS,
 MPTCP_ATTR_SERVER_SIDE,

 __MPTCP_ATTR_MAX
};


enum {
 MPTCP_PM_CMD_UNSPEC,
 MPTCP_PM_CMD_ADD_ADDR,
 MPTCP_PM_CMD_DEL_ADDR,
 MPTCP_PM_CMD_GET_ADDR,
 MPTCP_PM_CMD_FLUSH_ADDRS,
 MPTCP_PM_CMD_SET_LIMITS,
 MPTCP_PM_CMD_GET_LIMITS,
 MPTCP_PM_CMD_SET_FLAGS,
 MPTCP_PM_CMD_ANNOUNCE,
 MPTCP_PM_CMD_REMOVE,
 MPTCP_PM_CMD_SUBFLOW_CREATE,
 MPTCP_PM_CMD_SUBFLOW_DESTROY,

 __MPTCP_PM_CMD_AFTER_LAST
};
# 30 "../include/uapi/linux/mptcp.h" 2
# 40 "../include/uapi/linux/mptcp.h"
struct mptcp_info {
 __u8 mptcpi_subflows;
 __u8 mptcpi_add_addr_signal;
 __u8 mptcpi_add_addr_accepted;
 __u8 mptcpi_subflows_max;
 __u8 mptcpi_add_addr_signal_max;
 __u8 mptcpi_add_addr_accepted_max;
 __u32 mptcpi_flags;
 __u32 mptcpi_token;
 __u64 mptcpi_write_seq;
 __u64 mptcpi_snd_una;
 __u64 mptcpi_rcv_nxt;
 __u8 mptcpi_local_addr_used;
 __u8 mptcpi_local_addr_max;
 __u8 mptcpi_csum_enabled;
 __u32 mptcpi_retransmits;
 __u64 mptcpi_bytes_retrans;
 __u64 mptcpi_bytes_sent;
 __u64 mptcpi_bytes_received;
 __u64 mptcpi_bytes_acked;
 __u8 mptcpi_subflows_total;
 __u8 reserved[3];
 __u32 mptcpi_last_data_sent;
 __u32 mptcpi_last_data_recv;
 __u32 mptcpi_last_ack_recv;
};
# 76 "../include/uapi/linux/mptcp.h"
struct mptcp_subflow_data {
 __u32 size_subflow_data;
 __u32 num_subflows;
 __u32 size_kernel;
 __u32 size_user;
} __attribute__((aligned(8)));

struct mptcp_subflow_addrs {
 union {
  __kernel_sa_family_t sa_family;
  struct sockaddr sa_local;
  struct sockaddr_in sin_local;
  struct sockaddr_in6 sin6_local;
  struct __kernel_sockaddr_storage ss_local;
 };
 union {
  struct sockaddr sa_remote;
  struct sockaddr_in sin_remote;
  struct sockaddr_in6 sin6_remote;
  struct __kernel_sockaddr_storage ss_remote;
 };
};

struct mptcp_subflow_info {
 __u32 id;
 struct mptcp_subflow_addrs addrs;
};

struct mptcp_full_info {
 __u32 size_tcpinfo_kernel;
 __u32 size_tcpinfo_user;
 __u32 size_sfinfo_kernel;
 __u32 size_sfinfo_user;
 __u32 num_subflows;
 __u32 size_arrays_user;






 __u64 __attribute__((aligned(8))) subflow_info;
 __u64 __attribute__((aligned(8))) tcp_info;
 struct mptcp_info mptcp_info;
};
# 7 "../include/net/rstreason.h" 2
# 40 "../include/net/rstreason.h"
enum sk_rst_reason {





 SK_RST_REASON_NOT_SPECIFIED,

 SK_RST_REASON_NO_SOCKET,





 SK_RST_REASON_TCP_INVALID_ACK_SEQUENCE,




 SK_RST_REASON_TCP_RFC7323_PAWS,

 SK_RST_REASON_TCP_TOO_OLD_ACK,




 SK_RST_REASON_TCP_ACK_UNSENT_DATA,

 SK_RST_REASON_TCP_FLAGS,

 SK_RST_REASON_TCP_OLD_ACK,




 SK_RST_REASON_TCP_ABORT_ON_DATA,



 SK_RST_REASON_TCP_TIMEWAIT_SOCKET,






 SK_RST_REASON_INVALID_SYN,
# 99 "../include/net/rstreason.h"
 SK_RST_REASON_MPTCP_RST_EUNSPEC,






 SK_RST_REASON_MPTCP_RST_EMPTCP,





 SK_RST_REASON_MPTCP_RST_ERESOURCE,





 SK_RST_REASON_MPTCP_RST_EPROHIBIT,
# 128 "../include/net/rstreason.h"
 SK_RST_REASON_MPTCP_RST_EWQ2BIG,






 SK_RST_REASON_MPTCP_RST_EBADPERF,






 SK_RST_REASON_MPTCP_RST_EMIDDLEBOX,


 SK_RST_REASON_ERROR,





 SK_RST_REASON_MAX,
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum sk_rst_reason
sk_rst_convert_drop_reason(enum skb_drop_reason reason)
{
 switch (reason) {
 case SKB_DROP_REASON_NOT_SPECIFIED:
  return SK_RST_REASON_NOT_SPECIFIED;
 case SKB_DROP_REASON_NO_SOCKET:
  return SK_RST_REASON_NO_SOCKET;
 case SKB_DROP_REASON_TCP_INVALID_ACK_SEQUENCE:
  return SK_RST_REASON_TCP_INVALID_ACK_SEQUENCE;
 case SKB_DROP_REASON_TCP_RFC7323_PAWS:
  return SK_RST_REASON_TCP_RFC7323_PAWS;
 case SKB_DROP_REASON_TCP_TOO_OLD_ACK:
  return SK_RST_REASON_TCP_TOO_OLD_ACK;
 case SKB_DROP_REASON_TCP_ACK_UNSENT_DATA:
  return SK_RST_REASON_TCP_ACK_UNSENT_DATA;
 case SKB_DROP_REASON_TCP_FLAGS:
  return SK_RST_REASON_TCP_FLAGS;
 case SKB_DROP_REASON_TCP_OLD_ACK:
  return SK_RST_REASON_TCP_OLD_ACK;
 case SKB_DROP_REASON_TCP_ABORT_ON_DATA:
  return SK_RST_REASON_TCP_ABORT_ON_DATA;
 default:

  return SK_RST_REASON_NOT_SPECIFIED;
 }
}
# 22 "../include/net/request_sock.h" 2

struct request_sock;
struct sk_buff;
struct dst_entry;
struct proto;

struct request_sock_ops {
 int family;
 unsigned int obj_size;
 struct kmem_cache *slab;
 char *slab_name;
 int (*rtx_syn_ack)(const struct sock *sk,
           struct request_sock *req);
 void (*send_ack)(const struct sock *sk, struct sk_buff *skb,
        struct request_sock *req);
 void (*send_reset)(const struct sock *sk,
          struct sk_buff *skb,
          enum sk_rst_reason reason);
 void (*destructor)(struct request_sock *req);
 void (*syn_ack_timeout)(const struct request_sock *req);
};

int inet_rtx_syn_ack(const struct sock *parent, struct request_sock *req);

struct saved_syn {
 u32 mac_hdrlen;
 u32 network_hdrlen;
 u32 tcp_hdrlen;
 u8 data[];
};



struct request_sock {
 struct sock_common __req_common;






 struct request_sock *dl_next;
 u16 mss;
 u8 num_retrans;
 u8 syncookie:1;




 u8 num_timeout:7;
 u32 ts_recent;
 struct timer_list rsk_timer;
 const struct request_sock_ops *rsk_ops;
 struct sock *sk;
 struct saved_syn *saved_syn;
 u32 secid;
 u32 peer_secid;
 u32 timeout;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct request_sock *inet_reqsk(const struct sock *sk)
{
 return (struct request_sock *)sk;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *req_to_sk(struct request_sock *req)
{
 return (struct sock *)req;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *skb_steal_sock(struct sk_buff *skb,
       bool *refcounted, bool *prefetched)
{
 struct sock *sk = skb->sk;

 if (!sk) {
  *prefetched = false;
  *refcounted = false;
  return ((void *)0);
 }

 *prefetched = skb_sk_is_prefetched(skb);
 if (*prefetched) {

  if (sk->__sk_common.skc_state == TCP_NEW_SYN_RECV && inet_reqsk(sk)->syncookie) {
   struct request_sock *req = inet_reqsk(sk);

   *refcounted = false;
   sk = req->__req_common.skc_listener;
   req->__req_common.skc_listener = ((void *)0);
   return sk;
  }

  *refcounted = sk_is_refcounted(sk);
 } else {
  *refcounted = true;
 }

 skb->destructor = ((void *)0);
 skb->sk = ((void *)0);
 return sk;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __reqsk_free(struct request_sock *req)
{
 req->rsk_ops->destructor(req);
 if (req->__req_common.skc_listener)
  sock_put(req->__req_common.skc_listener);
 kfree(req->saved_syn);
 kmem_cache_free(req->rsk_ops->slab, req);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reqsk_free(struct request_sock *req)
{
 (void)({ bool __ret_do_once = !!(refcount_read(&req->__req_common.skc_refcnt) != 0); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/net/request_sock.h", 142, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
 __reqsk_free(req);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reqsk_put(struct request_sock *req)
{
 if (refcount_dec_and_test(&req->__req_common.skc_refcnt))
  __reqsk_free(req);
}
# 169 "../include/net/request_sock.h"
struct fastopen_queue {
 struct request_sock *rskq_rst_head;
 struct request_sock *rskq_rst_tail;



 spinlock_t lock;
 int qlen;
 int max_qlen;

 struct tcp_fastopen_context *ctx;
};
# 189 "../include/net/request_sock.h"
struct request_sock_queue {
 spinlock_t rskq_lock;
 u8 rskq_defer_accept;

 u32 synflood_warned;
 atomic_t qlen;
 atomic_t young;

 struct request_sock *rskq_accept_head;
 struct request_sock *rskq_accept_tail;
 struct fastopen_queue fastopenq;


};

void reqsk_queue_alloc(struct request_sock_queue *queue);

void reqsk_fastopen_remove(struct sock *sk, struct request_sock *req,
      bool reset);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool reqsk_queue_empty(const struct request_sock_queue *queue)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_490(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(queue->rskq_accept_head) == sizeof(char) || sizeof(queue->rskq_accept_head) == sizeof(short) || sizeof(queue->rskq_accept_head) == sizeof(int) || sizeof(queue->rskq_accept_head) == sizeof(long)) || sizeof(queue->rskq_accept_head) == sizeof(long long))) __compiletime_assert_490(); } while (0); (*(const volatile typeof( _Generic((queue->rskq_accept_head), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (queue->rskq_accept_head))) *)&(queue->rskq_accept_head)); }) == ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct request_sock *reqsk_queue_remove(struct request_sock_queue *queue,
            struct sock *parent)
{
 struct request_sock *req;

 spin_lock_bh(&queue->rskq_lock);
 req = queue->rskq_accept_head;
 if (req) {
  sk_acceptq_removed(parent);
  do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_491(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(queue->rskq_accept_head) == sizeof(char) || sizeof(queue->rskq_accept_head) == sizeof(short) || sizeof(queue->rskq_accept_head) == sizeof(int) || sizeof(queue->rskq_accept_head) == sizeof(long)) || sizeof(queue->rskq_accept_head) == sizeof(long long))) __compiletime_assert_491(); } while (0); do { *(volatile typeof(queue->rskq_accept_head) *)&(queue->rskq_accept_head) = (req->dl_next); } while (0); } while (0);
  if (queue->rskq_accept_head == ((void *)0))
   queue->rskq_accept_tail = ((void *)0);
 }
 spin_unlock_bh(&queue->rskq_lock);
 return req;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reqsk_queue_removed(struct request_sock_queue *queue,
           const struct request_sock *req)
{
 if (req->num_timeout == 0)
  atomic_dec(&queue->young);
 atomic_dec(&queue->qlen);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void reqsk_queue_added(struct request_sock_queue *queue)
{
 atomic_inc(&queue->young);
 atomic_inc(&queue->qlen);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int reqsk_queue_len(const struct request_sock_queue *queue)
{
 return atomic_read(&queue->qlen);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int reqsk_queue_len_young(const struct request_sock_queue *queue)
{
 return atomic_read(&queue->young);
}
# 263 "../include/net/request_sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 tcp_synack_window(const struct request_sock *req)
{
 return __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((req->__req_common.skc_rcv_wnd) - (65535U)) * 0l)) : (int *)8))), ((req->__req_common.skc_rcv_wnd) < (65535U) ? (req->__req_common.skc_rcv_wnd) : (65535U)), ({ _Static_assert((__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(req->__req_common.skc_rcv_wnd))(-1)) < ( typeof(req->__req_common.skc_rcv_wnd))1)) * 0l)) : (int *)8))), (((typeof(req->__req_common.skc_rcv_wnd))(-1)) < ( typeof(req->__req_common.skc_rcv_wnd))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(65535U))(-1)) < ( typeof(65535U))1)) * 0l)) : (int *)8))), (((typeof(65535U))(-1)) < ( typeof(65535U))1), 0) || __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((req->__req_common.skc_rcv_wnd) + 0))(-1)) < ( typeof((req->__req_common.skc_rcv_wnd) + 0))1)) * 0l)) : (int *)8))), (((typeof((req->__req_common.skc_rcv_wnd) + 0))(-1)) < ( typeof((req->__req_common.skc_rcv_wnd) + 0))1), 0) == __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof((65535U) + 0))(-1)) < ( typeof((65535U) + 0))1)) * 0l)) : (int *)8))), (((typeof((65535U) + 0))(-1)) < ( typeof((65535U) + 0))1), 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(req->__req_common.skc_rcv_wnd) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)((((typeof(req->__req_common.skc_rcv_wnd))(-1)) < ( typeof(req->__req_common.skc_rcv_wnd))1)) * 0l)) : (int *)8))), (((typeof(req->__req_common.skc_rcv_wnd))(-1)) < ( typeof(req->__req_common.skc_rcv_wnd))1), 0), req->__req_common.skc_rcv_wnd, -1) >= 0) || (__builtin_choose_expr((sizeof(int) == sizeof(*(8 ? ((void *)((long)(65535U) * 0l)) : (int *)8))) && __builtin_choose_expr((sizeof(int) == sizeof(*(8 ? 
((void *)((long)((((typeof(65535U))(-1)) < ( typeof(65535U))1)) * 0l)) : (int *)8))), (((typeof(65535U))(-1)) < ( typeof(65535U))1), 0), 65535U, -1) >= 0)), "min" "(" "req->__req_common.skc_rcv_wnd" ", " "65535U" ") signedness error, fix types or consider u" "min" "() before " "min" "_t()"); ({ __auto_type __UNIQUE_ID_x_492 = (req->__req_common.skc_rcv_wnd); __auto_type __UNIQUE_ID_y_493 = (65535U); ((__UNIQUE_ID_x_492) < (__UNIQUE_ID_y_493) ? (__UNIQUE_ID_x_492) : (__UNIQUE_ID_y_493)); }); }));
}
# 24 "../include/net/inet_sock.h" 2
# 1 "../include/net/netns/hash.h" 1






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 net_hash_mix(const struct net *net)
{
 return net->hash_mix;
}
# 25 "../include/net/inet_sock.h" 2
# 39 "../include/net/inet_sock.h"
struct ip_options {
 __be32 faddr;
 __be32 nexthop;
 unsigned char optlen;
 unsigned char srr;
 unsigned char rr;
 unsigned char ts;
 unsigned char is_strictroute:1,
   srr_is_hit:1,
   is_changed:1,
   rr_needaddr:1,
   ts_needtime:1,
   ts_needaddr:1;
 unsigned char router_alert;
 unsigned char cipso;
 unsigned char __pad2;
 unsigned char __data[];
};

struct ip_options_rcu {
 struct callback_head rcu;
 struct ip_options opt;
};

struct ip_options_data {
 struct ip_options_rcu opt;
 char data[40];
};

struct inet_request_sock {
 struct request_sock req;
# 82 "../include/net/inet_sock.h"
 u16 snd_wscale : 4,
    rcv_wscale : 4,
    tstamp_ok : 1,
    sack_ok : 1,
    wscale_ok : 1,
    ecn_ok : 1,
    acked : 1,
    no_srccheck: 1,
    smc_ok : 1;
 u32 ir_mark;
 union {
  struct ip_options_rcu *ireq_opt;

  struct {
   struct ipv6_txoptions *ipv6_opt;
   struct sk_buff *pktopts;
  };

 };
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inet_request_sock *inet_rsk(const struct request_sock *sk)
{
 return (struct inet_request_sock *)sk;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb)
{
 u32 mark = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_494(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_mark) == sizeof(char) || sizeof(sk->sk_mark) == sizeof(short) || sizeof(sk->sk_mark) == sizeof(int) || sizeof(sk->sk_mark) == sizeof(long)) || sizeof(sk->sk_mark) == sizeof(long long))) __compiletime_assert_494(); } while (0); (*(const volatile typeof( _Generic((sk->sk_mark), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_mark))) *)&(sk->sk_mark)); });

 if (!mark && ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_495(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) == sizeof(char) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) == sizeof(short) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) == sizeof(int) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) == sizeof(long)) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) == sizeof(long long))) __compiletime_assert_495(); } while (0); (*(const volatile typeof( _Generic((sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept))) *)&(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)); }))
  return skb->mark;

 return mark;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_request_bound_dev_if(const struct sock *sk,
         struct sk_buff *skb)
{
 int bound_dev_if = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_496(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(char) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(short) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(int) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(long)) || sizeof(sk->__sk_common.skc_bound_dev_if) == sizeof(long long))) __compiletime_assert_496(); } while (0); (*(const volatile typeof( _Generic((sk->__sk_common.skc_bound_dev_if), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->__sk_common.skc_bound_dev_if))) *)&(sk->__sk_common.skc_bound_dev_if)); });







 return bound_dev_if;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_sk_bound_l3mdev(const struct sock *sk)
{
# 142 "../include/net/inet_sock.h"
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_bound_dev_eq(bool l3mdev_accept, int bound_dev_if,
         int dif, int sdif)
{
 if (!bound_dev_if)
  return !sdif || l3mdev_accept;
 return bound_dev_if == dif || bound_dev_if == sdif;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_sk_bound_dev_eq(struct net *net, int bound_dev_if,
     int dif, int sdif)
{




 return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);

}

struct inet_cork {
 unsigned int flags;
 __be32 addr;
 struct ip_options *opt;
 unsigned int fragsize;
 int length;
 struct dst_entry *dst;
 u8 tx_flags;
 __u8 ttl;
 __s16 tos;
 char priority;
 __u16 gso_size;
 u64 transmit_time;
 u32 mark;
};

struct inet_cork_full {
 struct inet_cork base;
 struct flowi fl;
};

struct ip_mc_socklist;
struct ipv6_pinfo;
struct rtable;
# 209 "../include/net/inet_sock.h"
struct inet_sock {

 struct sock sk;

 struct ipv6_pinfo *pinet6;







 unsigned long inet_flags;
 __be32 inet_saddr;
 __s16 uc_ttl;
 __be16 inet_sport;
 struct ip_options_rcu *inet_opt;
 atomic_t inet_id;

 __u8 tos;
 __u8 min_ttl;
 __u8 mc_ttl;
 __u8 pmtudisc;
 __u8 rcv_tos;
 __u8 convert_csum;
 int uc_index;
 int mc_index;
 __be32 mc_addr;
 u32 local_port_range;

 struct ip_mc_socklist *mc_list;
 struct inet_cork_full cork;
};



enum {
 INET_FLAGS_PKTINFO = 0,
 INET_FLAGS_TTL = 1,
 INET_FLAGS_TOS = 2,
 INET_FLAGS_RECVOPTS = 3,
 INET_FLAGS_RETOPTS = 4,
 INET_FLAGS_PASSSEC = 5,
 INET_FLAGS_ORIGDSTADDR = 6,
 INET_FLAGS_CHECKSUM = 7,
 INET_FLAGS_RECVFRAGSIZE = 8,

 INET_FLAGS_RECVERR = 9,
 INET_FLAGS_RECVERR_RFC4884 = 10,
 INET_FLAGS_FREEBIND = 11,
 INET_FLAGS_HDRINCL = 12,
 INET_FLAGS_MC_LOOP = 13,
 INET_FLAGS_MC_ALL = 14,
 INET_FLAGS_TRANSPARENT = 15,
 INET_FLAGS_IS_ICSK = 16,
 INET_FLAGS_NODEFRAG = 17,
 INET_FLAGS_BIND_ADDRESS_NO_PORT = 18,
 INET_FLAGS_DEFER_CONNECT = 19,
 INET_FLAGS_MC6_LOOP = 20,
 INET_FLAGS_RECVERR6_RFC4884 = 21,
 INET_FLAGS_MC6_ALL = 22,
 INET_FLAGS_AUTOFLOWLABEL_SET = 23,
 INET_FLAGS_AUTOFLOWLABEL = 24,
 INET_FLAGS_DONTFRAG = 25,
 INET_FLAGS_RECVERR6 = 26,
 INET_FLAGS_REPFLOW = 27,
 INET_FLAGS_RTALERT_ISOLATE = 28,
 INET_FLAGS_SNDFLOW = 29,
 INET_FLAGS_RTALERT = 30,
};
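
The enum above defines bit positions that the expanded `const_test_bit()`/`arch_test_bit()` blobs later in this dump test against `inet->inet_flags`. A minimal sketch of that bit-flag pattern, using plain shifts instead of the kernel's `test_bit()`/`set_bit()` helpers (names `flag_test`/`flag_set` are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for test_bit()/set_bit() on an unsigned long
 * flags word, as used with the INET_FLAGS_* bit positions above. */
enum { FLAG_PKTINFO = 0, FLAG_TTL = 1, FLAG_TOS = 2 };

static bool flag_test(unsigned long flags, int bit)
{
	return (flags >> bit) & 1UL;
}

static unsigned long flag_set(unsigned long flags, int bit)
{
	return flags | (1UL << bit);
}
```

Note the real kernel accessors additionally use `READ_ONCE()`/atomic bitops; that aspect is deliberately omitted here.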
# 297 "../include/net/inet_sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long inet_cmsg_flags(const struct inet_sock *inet)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_497(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(inet->inet_flags) == sizeof(char) || sizeof(inet->inet_flags) == sizeof(short) || sizeof(inet->inet_flags) == sizeof(int) || sizeof(inet->inet_flags) == sizeof(long)) || sizeof(inet->inet_flags) == sizeof(long long))) __compiletime_assert_497(); } while (0); (*(const volatile typeof( _Generic((inet->inet_flags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (inet->inet_flags))) *)&(inet->inet_flags)); }) & (((((1UL))) << (INET_FLAGS_PKTINFO)) | ((((1UL))) << (INET_FLAGS_TTL)) | ((((1UL))) << (INET_FLAGS_TOS)) | ((((1UL))) << (INET_FLAGS_RECVOPTS)) | ((((1UL))) << (INET_FLAGS_RETOPTS)) | ((((1UL))) << (INET_FLAGS_PASSSEC)) | ((((1UL))) << (INET_FLAGS_ORIGDSTADDR)) | ((((1UL))) << (INET_FLAGS_CHECKSUM)) | ((((1UL))) << (INET_FLAGS_RECVFRAGSIZE)));
}
# 318 "../include/net/inet_sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *sk_to_full_sk(struct sock *sk)
{

 if (sk && sk->__sk_common.skc_state == TCP_NEW_SYN_RECV)
  sk = inet_reqsk(sk)->__req_common.skc_listener;

 return sk;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct sock *sk_const_to_full_sk(const struct sock *sk)
{

 if (sk && sk->__sk_common.skc_state == TCP_NEW_SYN_RECV)
  sk = ((const struct request_sock *)sk)->__req_common.skc_listener;

 return sk;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sock *skb_to_full_sk(const struct sk_buff *skb)
{
 return sk_to_full_sk(skb->sk);
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __inet_sk_copy_descendant(struct sock *sk_to,
          const struct sock *sk_from,
          const int ancestor_size)
{
 memcpy(_Generic(sk_to, const typeof(*(sk_to)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk_to); _Static_assert(__builtin_types_compatible_p(typeof(*(sk_to)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk_to)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk_to); _Static_assert(__builtin_types_compatible_p(typeof(*(sk_to)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk_to)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) ) + 1, _Generic(sk_from, const typeof(*(sk_from)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk_from); _Static_assert(__builtin_types_compatible_p(typeof(*(sk_from)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk_from)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk_from); _Static_assert(__builtin_types_compatible_p(typeof(*(sk_from)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk_from)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) ) + 1,
        sk_from->__sk_common.skc_prot->obj_size - ancestor_size);
}

int inet_sk_rebuild_header(struct sock *sk);
# 361 "../include/net/inet_sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_sk_state_load(const struct sock *sk)
{

 return ({ typeof( _Generic((*&sk->__sk_common.skc_state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&sk->__sk_common.skc_state))) ___p1 = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_498(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(*&sk->__sk_common.skc_state) == sizeof(char) || sizeof(*&sk->__sk_common.skc_state) == sizeof(short) || sizeof(*&sk->__sk_common.skc_state) == sizeof(int) || sizeof(*&sk->__sk_common.skc_state) == sizeof(long)) || sizeof(*&sk->__sk_common.skc_state) == sizeof(long long))) __compiletime_assert_498(); } while (0); (*(const volatile typeof( _Generic((*&sk->__sk_common.skc_state), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (*&sk->__sk_common.skc_state))) *)&(*&sk->__sk_common.skc_state)); }); __asm__ __volatile__("": : :"memory"); (typeof(*&sk->__sk_common.skc_state))___p1; });
}
# 375 "../include/net/inet_sock.h"
void inet_sk_state_store(struct sock *sk, int newstate);

void inet_sk_set_state(struct sock *sk, int state);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __inet_ehashfn(const __be32 laddr,
       const __u16 lport,
       const __be32 faddr,
       const __be16 fport,
       u32 initval)
{
 return jhash_3words(( __u32) laddr,
       ( __u32) faddr,
       ((__u32) lport) << 16 | ( __u32)fport,
       initval);
}
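
One detail of `__inet_ehashfn()` above worth isolating: the two 16-bit ports are packed into a single 32-bit word (local port in the high half, foreign port in the low half) before the three words go to `jhash_3words()`. A sketch of just that packing step (the jhash itself is omitted; `pack_ports` is a hypothetical name):

```c
#include <assert.h>
#include <stdint.h>

/* Combine local and foreign ports the way __inet_ehashfn() does:
 * lport occupies bits 31..16, fport bits 15..0. */
static uint32_t pack_ports(uint16_t lport, uint16_t fport)
{
	return ((uint32_t)lport << 16) | (uint32_t)fport;
}
```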

struct request_sock *inet_reqsk_alloc(const struct request_sock_ops *ops,
          struct sock *sk_listener,
          bool attach_listener);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 inet_sk_flowi_flags(const struct sock *sk)
{
 __u8 flags = 0;

 if (((__builtin_constant_p(INET_FLAGS_TRANSPARENT) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock 
*)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags))) ? const_test_bit(INET_FLAGS_TRANSPARENT, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) : arch_test_bit(INET_FLAGS_TRANSPARENT, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags)) || 
((__builtin_constant_p(INET_FLAGS_HDRINCL) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - 
__builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags))) ? const_test_bit(INET_FLAGS_HDRINCL, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) : arch_test_bit(INET_FLAGS_HDRINCL, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags)))
  flags |= 0x01;
 return flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_inc_convert_csum(struct sock *sk)
{
 _Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->convert_csum++;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_dec_convert_csum(struct sock *sk)
{
 if (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->convert_csum > 0)
  _Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->convert_csum--;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_get_convert_csum(struct sock *sk)
{
 return !!_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->convert_csum;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_can_nonlocal_bind(struct net *net,
       struct inet_sock *inet)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_499(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(net->ipv4.sysctl_ip_nonlocal_bind) == sizeof(char) || sizeof(net->ipv4.sysctl_ip_nonlocal_bind) == sizeof(short) || sizeof(net->ipv4.sysctl_ip_nonlocal_bind) == sizeof(int) || sizeof(net->ipv4.sysctl_ip_nonlocal_bind) == sizeof(long)) || sizeof(net->ipv4.sysctl_ip_nonlocal_bind) == sizeof(long long))) __compiletime_assert_499(); } while (0); (*(const volatile typeof( _Generic((net->ipv4.sysctl_ip_nonlocal_bind), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (net->ipv4.sysctl_ip_nonlocal_bind))) *)&(net->ipv4.sysctl_ip_nonlocal_bind)); }) ||
  ((__builtin_constant_p(INET_FLAGS_FREEBIND) && __builtin_constant_p((uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&inet->inet_flags))) ? const_test_bit(INET_FLAGS_FREEBIND, &inet->inet_flags) : arch_test_bit(INET_FLAGS_FREEBIND, &inet->inet_flags)) ||
  ((__builtin_constant_p(INET_FLAGS_TRANSPARENT) && __builtin_constant_p((uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&inet->inet_flags))) ? const_test_bit(INET_FLAGS_TRANSPARENT, &inet->inet_flags) : arch_test_bit(INET_FLAGS_TRANSPARENT, &inet->inet_flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_addr_valid_or_nonlocal(struct net *net,
            struct inet_sock *inet,
            __be32 addr,
            int addr_type)
{
 return inet_can_nonlocal_bind(net, inet) ||
  addr == (( __be32)(__u32)(__builtin_constant_p((((unsigned long int) 0x00000000))) ? ((__u32)( (((__u32)((((unsigned long int) 0x00000000))) & (__u32)0x000000ffUL) << 24) | (((__u32)((((unsigned long int) 0x00000000))) & (__u32)0x0000ff00UL) << 8) | (((__u32)((((unsigned long int) 0x00000000))) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((((unsigned long int) 0x00000000))) & (__u32)0xff000000UL) >> 24))) : __fswab32((((unsigned long int) 0x00000000))))) ||
  addr_type == RTN_LOCAL ||
  addr_type == RTN_MULTICAST ||
  addr_type == RTN_BROADCAST;
}
# 22 "../include/net/inet_connection_sock.h" 2





struct inet_bind_bucket;
struct inet_bind2_bucket;
struct tcp_congestion_ops;





struct inet_connection_sock_af_ops {
 int (*queue_xmit)(struct sock *sk, struct sk_buff *skb, struct flowi *fl);
 void (*send_check)(struct sock *sk, struct sk_buff *skb);
 int (*rebuild_header)(struct sock *sk);
 void (*sk_rx_dst_set)(struct sock *sk, const struct sk_buff *skb);
 int (*conn_request)(struct sock *sk, struct sk_buff *skb);
 struct sock *(*syn_recv_sock)(const struct sock *sk, struct sk_buff *skb,
          struct request_sock *req,
          struct dst_entry *dst,
          struct request_sock *req_unhash,
          bool *own_req);
 u16 net_header_len;
 u16 sockaddr_len;
 int (*setsockopt)(struct sock *sk, int level, int optname,
      sockptr_t optval, unsigned int optlen);
 int (*getsockopt)(struct sock *sk, int level, int optname,
      char *optval, int *optlen);
 void (*addr2sockaddr)(struct sock *sk, struct sockaddr *);
 void (*mtu_reduced)(struct sock *sk);
};
# 82 "../include/net/inet_connection_sock.h"
struct inet_connection_sock {

 struct inet_sock icsk_inet;
 struct request_sock_queue icsk_accept_queue;
 struct inet_bind_bucket *icsk_bind_hash;
 struct inet_bind2_bucket *icsk_bind2_hash;
 unsigned long icsk_timeout;
  struct timer_list icsk_retransmit_timer;
  struct timer_list icsk_delack_timer;
 __u32 icsk_rto;
 __u32 icsk_rto_min;
 __u32 icsk_delack_max;
 __u32 icsk_pmtu_cookie;
 const struct tcp_congestion_ops *icsk_ca_ops;
 const struct inet_connection_sock_af_ops *icsk_af_ops;
 const struct tcp_ulp_ops *icsk_ulp_ops;
 void *icsk_ulp_data;
 void (*icsk_clean_acked)(struct sock *sk, u32 acked_seq);
 unsigned int (*icsk_sync_mss)(struct sock *sk, u32 pmtu);
 __u8 icsk_ca_state:5,
      icsk_ca_initialized:1,
      icsk_ca_setsockopt:1,
      icsk_ca_dst_locked:1;
 __u8 icsk_retransmits;
 __u8 icsk_pending;
 __u8 icsk_backoff;
 __u8 icsk_syn_retries;
 __u8 icsk_probes_out;
 __u16 icsk_ext_hdr_len;
 struct {
  __u8 pending;
  __u8 quick;
  __u8 pingpong;
  __u8 retry;

  __u32 ato:8,
      lrcv_flowlabel:20,
      unused:4;
  unsigned long timeout;
  __u32 lrcvtime;
  __u16 last_seg_size;
  __u16 rcv_mss;
 } icsk_ack;
 struct {

  int search_high;
  int search_low;


  u32 probe_size:31,

      enabled:1;

  u32 probe_timestamp;
 } icsk_mtup;
 u32 icsk_probes_tstamp;
 u32 icsk_user_timeout;

 u64 icsk_ca_priv[104 / sizeof(u64)];

};
# 152 "../include/net/inet_connection_sock.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *inet_csk_ca(const struct sock *sk)
{
 return (void *)_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ca_priv;
}

struct sock *inet_csk_clone_lock(const struct sock *sk,
     const struct request_sock *req,
     const gfp_t priority);

enum inet_csk_ack_state_t {
 ICSK_ACK_SCHED = 1,
 ICSK_ACK_TIMER = 2,
 ICSK_ACK_PUSHED = 4,
 ICSK_ACK_PUSHED2 = 8,
 ICSK_ACK_NOW = 16,
 ICSK_ACK_NOMEM = 32,
};

void inet_csk_init_xmit_timers(struct sock *sk,
          void (*retransmit_handler)(struct timer_list *),
          void (*delack_handler)(struct timer_list *),
          void (*keepalive_handler)(struct timer_list *));
void inet_csk_clear_xmit_timers(struct sock *sk);
void inet_csk_clear_xmit_timers_sync(struct sock *sk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_schedule_ack(struct sock *sk)
{
 _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack.pending |= ICSK_ACK_SCHED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_csk_ack_scheduled(const struct sock *sk)
{
 return _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack.pending & ICSK_ACK_SCHED;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_delack_init(struct sock *sk)
{
 memset(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack, 0, sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack));
}

void inet_csk_delete_keepalive_timer(struct sock *sk);
void inet_csk_reset_keepalive_timer(struct sock *sk, unsigned long timeout);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_clear_xmit_timer(struct sock *sk, const int what)
{
 struct inet_connection_sock *icsk = _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) );

 if (what == 1 || what == 3) {
  icsk->icsk_pending = 0;



 } else if (what == 2) {
  icsk->icsk_ack.pending = 0;
  icsk->icsk_ack.retry = 0;



 } else {
  ({ if (0) _printk("\001" "7" "inet_csk BUG: unknown timer value\n"); 0; });
 }
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
          unsigned long when,
          const unsigned long max_when)
{
 struct inet_connection_sock *icsk = _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) );

 if (when > max_when) {
  ({ if (0) _printk("\001" "7" "reset_xmit_timer: sk=%p %d when=0x%lx, caller=%p\n", sk, what, when, (void *)({ __label__ __here; __here: (unsigned long)&&__here; })); 0; });

  when = max_when;
 }

 if (what == 1 || what == 3 ||
     what == 5 || what == 6) {
  icsk->icsk_pending = what;
  icsk->icsk_timeout = jiffies + when;
  sk_reset_timer(sk, &icsk->icsk_retransmit_timer, icsk->icsk_timeout);
 } else if (what == 2) {
  icsk->icsk_ack.pending |= ICSK_ACK_TIMER;
  icsk->icsk_ack.timeout = jiffies + when;
  sk_reset_timer(sk, &icsk->icsk_delack_timer, icsk->icsk_ack.timeout);
 } else {
  ({ if (0) _printk("\001" "7" "inet_csk BUG: unknown timer value\n"); 0; });
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
       unsigned long max_when)
{
        u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;

        return (unsigned long)({ u64 __UNIQUE_ID_x_500 = (when); u64 __UNIQUE_ID_y_501 = (max_when); ((__UNIQUE_ID_x_500) < (__UNIQUE_ID_y_501) ? (__UNIQUE_ID_x_500) : (__UNIQUE_ID_y_501)); });
}

struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg);

int inet_csk_get_port(struct sock *sk, unsigned short snum);

struct dst_entry *inet_csk_route_req(const struct sock *sk, struct flowi4 *fl4,
         const struct request_sock *req);
struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
         struct sock *newsk,
         const struct request_sock *req);

struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
          struct request_sock *req,
          struct sock *child);
bool inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
       unsigned long timeout);
struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
      struct request_sock *req,
      bool own_req);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_reqsk_queue_added(struct sock *sk)
{
 reqsk_queue_added(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_accept_queue);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_csk_reqsk_queue_len(const struct sock *sk)
{
 return reqsk_queue_len(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_accept_queue);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_csk_reqsk_queue_is_full(const struct sock *sk)
{
 return inet_csk_reqsk_queue_len(sk) >= ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_502(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sk->sk_max_ack_backlog) == sizeof(char) || sizeof(sk->sk_max_ack_backlog) == sizeof(short) || sizeof(sk->sk_max_ack_backlog) == sizeof(int) || sizeof(sk->sk_max_ack_backlog) == sizeof(long)) || sizeof(sk->sk_max_ack_backlog) == sizeof(long long))) __compiletime_assert_502(); } while (0); (*(const volatile typeof( _Generic((sk->sk_max_ack_backlog), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sk->sk_max_ack_backlog))) *)&(sk->sk_max_ack_backlog)); });
}

bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long
reqsk_timeout(struct request_sock *req, unsigned long max_timeout)
{
 u64 timeout = (u64)req->timeout << req->num_timeout;

 return (unsigned long)({ u64 __UNIQUE_ID_x_503 = (timeout); u64 __UNIQUE_ID_y_504 = (max_timeout); ((__UNIQUE_ID_x_503) < (__UNIQUE_ID_y_504) ? (__UNIQUE_ID_x_503) : (__UNIQUE_ID_y_504)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_prepare_for_destroy_sock(struct sock *sk)
{

 sock_set_flag(sk, SOCK_DEAD);
 do { do { const void *__vpp_verify = (typeof((&(*sk->__sk_common.skc_prot->orphan_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); switch(sizeof(*sk->__sk_common.skc_prot->orphan_count)) { case 1: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sk->__sk_common.skc_prot->orphan_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sk->__sk_common.skc_prot->orphan_count))) *)(&(*sk->__sk_common.skc_prot->orphan_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 2: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sk->__sk_common.skc_prot->orphan_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sk->__sk_common.skc_prot->orphan_count))) *)(&(*sk->__sk_common.skc_prot->orphan_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 4: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sk->__sk_common.skc_prot->orphan_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); 
(typeof(*(&(*sk->__sk_common.skc_prot->orphan_count))) *)(&(*sk->__sk_common.skc_prot->orphan_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; case 8: do { unsigned long __flags; do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); __flags = arch_local_irq_save(); } while (0); do { *({ (void)(0); ({ do { const void *__vpp_verify = (typeof((&(*sk->__sk_common.skc_prot->orphan_count)) + 0))((void *)0); (void)__vpp_verify; } while (0); (typeof(*(&(*sk->__sk_common.skc_prot->orphan_count))) *)(&(*sk->__sk_common.skc_prot->orphan_count)); }); }) += 1; } while (0); do { ({ unsigned long __dummy; typeof(__flags) __dummy2; (void)(&__dummy == &__dummy2); 1; }); do { if (__builtin_expect(!!(!arch_irqs_disabled()), 0)) warn_bogus_irq_restore(); } while (0); arch_local_irq_restore(__flags); } while (0); } while (0);break; default: __bad_size_call_parameter();break; } } while (0);
}

void inet_csk_destroy_sock(struct sock *sk);
void inet_csk_prepare_forced_close(struct sock *sk);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __poll_t inet_csk_listen_poll(const struct sock *sk)
{
 return !reqsk_queue_empty(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_accept_queue) ?
   (( __poll_t)0x00000001 | ( __poll_t)0x00000040) : 0;
}

int inet_csk_listen_start(struct sock *sk);
void inet_csk_listen_stop(struct sock *sk);

void inet_csk_addr2sockaddr(struct sock *sk, struct sockaddr *uaddr);


void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
          struct sock *sk);

struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_enter_pingpong_mode(struct sock *sk)
{
 _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack.pingpong =
  ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_505(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(char) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(short) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(int) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(long)) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(long long))) __compiletime_assert_505(); } while (0); (*(const volatile typeof( _Generic((sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh))) *)&(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_exit_pingpong_mode(struct sock *sk)
{
 _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack.pingpong = 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_csk_in_pingpong_mode(struct sock *sk)
{
 return _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ack.pingpong >=
        ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_506(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(char) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(short) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(int) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(long)) || sizeof(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh) == sizeof(long long))) __compiletime_assert_506(); } while (0); (*(const volatile typeof( _Generic((sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh))) *)&(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh)); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_csk_inc_pingpong_cnt(struct sock *sk)
{
 struct inet_connection_sock *icsk = _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) );

 if (icsk->icsk_ack.pingpong < ((u8)~0U))
  icsk->icsk_ack.pingpong++;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_csk_has_ulp(const struct sock *sk)
{
 return ((__builtin_constant_p(INET_FLAGS_IS_ICSK) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock 
*)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags))) ? const_test_bit(INET_FLAGS_IS_ICSK, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) : arch_test_bit(INET_FLAGS_IS_ICSK, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags)) && !!_Generic(sk, const 
typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_ulp_ops;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_init_csk_locks(struct sock *sk)
{
 struct inet_connection_sock *icsk = _Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) );

 do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&icsk->icsk_accept_queue.rskq_lock), "&icsk->icsk_accept_queue.rskq_lock", &__key, LD_WAIT_CONFIG); } while (0);
 do { static struct lock_class_key __key; __raw_spin_lock_init(spinlock_check(&icsk->icsk_accept_queue.fastopenq.lock), "&icsk->icsk_accept_queue.fastopenq.lock", &__key, LD_WAIT_CONFIG); } while (0);
}
# 21 "../include/linux/tcp.h" 2
# 1 "../include/net/inet_timewait_sock.h" 1
# 22 "../include/net/inet_timewait_sock.h"
# 1 "../include/net/timewait_sock.h" 1
# 14 "../include/net/timewait_sock.h"
struct timewait_sock_ops {
 struct kmem_cache *twsk_slab;
 char *twsk_slab_name;
 unsigned int twsk_obj_size;
 void (*twsk_destructor)(struct sock *sk);
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void twsk_destructor(struct sock *sk)
{
 if (sk->__sk_common.skc_prot->twsk_prot->twsk_destructor != ((void *)0))
  sk->__sk_common.skc_prot->twsk_prot->twsk_destructor(sk);
}
# 23 "../include/net/inet_timewait_sock.h" 2



struct inet_bind_bucket;






struct inet_timewait_sock {




 struct sock_common __tw_common;
# 60 "../include/net/inet_timewait_sock.h"
 __u32 tw_mark;
 volatile unsigned char tw_substate;
 unsigned char tw_rcv_wscale;



 __be16 tw_sport;

 unsigned int tw_transparent : 1,
    tw_flowlabel : 20,
    tw_usec_ts : 1,
    tw_pad : 2,
    tw_tos : 8;
 u32 tw_txhash;
 u32 tw_priority;
 struct timer_list tw_timer;
 struct inet_bind_bucket *tw_tb;
 struct inet_bind2_bucket *tw_tb2;
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inet_timewait_sock *inet_twsk(const struct sock *sk)
{
 return (struct inet_timewait_sock *)sk;
}

void inet_twsk_free(struct inet_timewait_sock *tw);
void inet_twsk_put(struct inet_timewait_sock *tw);

void inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
      struct inet_hashinfo *hashinfo);

struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk,
        struct inet_timewait_death_row *dr,
        const int state);

void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
      struct sock *sk,
      struct inet_hashinfo *hashinfo,
      int timeo);

void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo,
     bool rearm);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inet_twsk_reschedule(struct inet_timewait_sock *tw, int timeo)
{
 __inet_twsk_schedule(tw, timeo, true);
}

void inet_twsk_deschedule_put(struct inet_timewait_sock *tw);

void inet_twsk_purge(struct inet_hashinfo *hashinfo);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct net *twsk_net(const struct inet_timewait_sock *twsk)
{
 return read_pnet(&twsk->__tw_common.skc_net);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void twsk_net_set(struct inet_timewait_sock *twsk, struct net *net)
{
 write_pnet(&twsk->__tw_common.skc_net, net);
}
# 22 "../include/linux/tcp.h" 2
# 1 "../include/uapi/linux/tcp.h" 1
# 25 "../include/uapi/linux/tcp.h"
struct tcphdr {
 __be16 source;
 __be16 dest;
 __be32 seq;
 __be32 ack_seq;

 __u16 res1:4,
  doff:4,
  fin:1,
  syn:1,
  rst:1,
  psh:1,
  ack:1,
  urg:1,
  ece:1,
  cwr:1;
# 55 "../include/uapi/linux/tcp.h"
 __be16 window;
 __sum16 check;
 __be16 urg_ptr;
};






union tcp_word_hdr {
 struct tcphdr hdr;
 __be32 words[5];
};



enum {
 TCP_FLAG_CWR = (( __be32)((__u32)( (((__u32)((0x00800000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00800000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00800000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00800000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_ECE = (( __be32)((__u32)( (((__u32)((0x00400000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00400000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00400000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00400000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_URG = (( __be32)((__u32)( (((__u32)((0x00200000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00200000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00200000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00200000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_ACK = (( __be32)((__u32)( (((__u32)((0x00100000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00100000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00100000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00100000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_PSH = (( __be32)((__u32)( (((__u32)((0x00080000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00080000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00080000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00080000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_RST = (( __be32)((__u32)( (((__u32)((0x00040000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00040000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00040000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00040000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_SYN = (( __be32)((__u32)( (((__u32)((0x00020000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00020000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00020000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00020000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_FLAG_FIN = (( __be32)((__u32)( (((__u32)((0x00010000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00010000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00010000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00010000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_RESERVED_BITS = (( __be32)((__u32)( (((__u32)((0x0F000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0F000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0F000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0F000000)) & (__u32)0xff000000UL) >> 24)))),
 TCP_DATA_OFFSET = (( __be32)((__u32)( (((__u32)((0xF0000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xF0000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xF0000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xF0000000)) & (__u32)0xff000000UL) >> 24))))
};
# 144 "../include/uapi/linux/tcp.h"
struct tcp_repair_opt {
 __u32 opt_code;
 __u32 opt_val;
};

struct tcp_repair_window {
 __u32 snd_wl1;
 __u32 snd_wnd;
 __u32 max_window;

 __u32 rcv_wnd;
 __u32 rcv_wup;
};

enum {
 TCP_NO_QUEUE,
 TCP_RECV_QUEUE,
 TCP_SEND_QUEUE,
 TCP_QUEUES_NR,
};


enum tcp_fastopen_client_fail {
 TFO_STATUS_UNSPEC,
 TFO_COOKIE_UNAVAILABLE,
 TFO_DATA_NOT_ACKED,
 TFO_SYN_RETRANSMITTED,
};
# 187 "../include/uapi/linux/tcp.h"
enum tcp_ca_state {




 TCP_CA_Open = 0,







 TCP_CA_Disorder = 1,






 TCP_CA_CWR = 2,





 TCP_CA_Recovery = 3,




 TCP_CA_Loss = 4

};

struct tcp_info {
 __u8 tcpi_state;
 __u8 tcpi_ca_state;
 __u8 tcpi_retransmits;
 __u8 tcpi_probes;
 __u8 tcpi_backoff;
 __u8 tcpi_options;
 __u8 tcpi_snd_wscale : 4, tcpi_rcv_wscale : 4;
 __u8 tcpi_delivery_rate_app_limited:1, tcpi_fastopen_client_fail:2;

 __u32 tcpi_rto;
 __u32 tcpi_ato;
 __u32 tcpi_snd_mss;
 __u32 tcpi_rcv_mss;

 __u32 tcpi_unacked;
 __u32 tcpi_sacked;
 __u32 tcpi_lost;
 __u32 tcpi_retrans;
 __u32 tcpi_fackets;


 __u32 tcpi_last_data_sent;
 __u32 tcpi_last_ack_sent;
 __u32 tcpi_last_data_recv;
 __u32 tcpi_last_ack_recv;


 __u32 tcpi_pmtu;
 __u32 tcpi_rcv_ssthresh;
 __u32 tcpi_rtt;
 __u32 tcpi_rttvar;
 __u32 tcpi_snd_ssthresh;
 __u32 tcpi_snd_cwnd;
 __u32 tcpi_advmss;
 __u32 tcpi_reordering;

 __u32 tcpi_rcv_rtt;
 __u32 tcpi_rcv_space;

 __u32 tcpi_total_retrans;

 __u64 tcpi_pacing_rate;
 __u64 tcpi_max_pacing_rate;
 __u64 tcpi_bytes_acked;
 __u64 tcpi_bytes_received;
 __u32 tcpi_segs_out;
 __u32 tcpi_segs_in;

 __u32 tcpi_notsent_bytes;
 __u32 tcpi_min_rtt;
 __u32 tcpi_data_segs_in;
 __u32 tcpi_data_segs_out;

 __u64 tcpi_delivery_rate;

 __u64 tcpi_busy_time;
 __u64 tcpi_rwnd_limited;
 __u64 tcpi_sndbuf_limited;

 __u32 tcpi_delivered;
 __u32 tcpi_delivered_ce;

 __u64 tcpi_bytes_sent;
 __u64 tcpi_bytes_retrans;
 __u32 tcpi_dsack_dups;
 __u32 tcpi_reord_seen;

 __u32 tcpi_rcv_ooopack;

 __u32 tcpi_snd_wnd;


 __u32 tcpi_rcv_wnd;



 __u32 tcpi_rehash;

 __u16 tcpi_total_rto;


 __u16 tcpi_total_rto_recoveries;



 __u32 tcpi_total_rto_time;



};


enum {
 TCP_NLA_PAD,
 TCP_NLA_BUSY,
 TCP_NLA_RWND_LIMITED,
 TCP_NLA_SNDBUF_LIMITED,
 TCP_NLA_DATA_SEGS_OUT,
 TCP_NLA_TOTAL_RETRANS,
 TCP_NLA_PACING_RATE,
 TCP_NLA_DELIVERY_RATE,
 TCP_NLA_SND_CWND,
 TCP_NLA_REORDERING,
 TCP_NLA_MIN_RTT,
 TCP_NLA_RECUR_RETRANS,
 TCP_NLA_DELIVERY_RATE_APP_LMT,
 TCP_NLA_SNDQ_SIZE,
 TCP_NLA_CA_STATE,
 TCP_NLA_SND_SSTHRESH,
 TCP_NLA_DELIVERED,
 TCP_NLA_DELIVERED_CE,
 TCP_NLA_BYTES_SENT,
 TCP_NLA_BYTES_RETRANS,
 TCP_NLA_DSACK_DUPS,
 TCP_NLA_REORD_SEEN,
 TCP_NLA_SRTT,
 TCP_NLA_TIMEOUT_REHASH,
 TCP_NLA_BYTES_NOTSENT,
 TCP_NLA_EDT,
 TCP_NLA_TTL,
 TCP_NLA_REHASH,
};
# 353 "../include/uapi/linux/tcp.h"
struct tcp_md5sig {
 struct __kernel_sockaddr_storage tcpm_addr;
 __u8 tcpm_flags;
 __u8 tcpm_prefixlen;
 __u16 tcpm_keylen;
 int tcpm_ifindex;
 __u8 tcpm_key[80];
};


struct tcp_diag_md5sig {
 __u8 tcpm_family;
 __u8 tcpm_prefixlen;
 __u16 tcpm_keylen;
 __be32 tcpm_addr[4];
 __u8 tcpm_key[80];
};
# 380 "../include/uapi/linux/tcp.h"
struct tcp_ao_add {
 struct __kernel_sockaddr_storage addr;
 char alg_name[64];
 __s32 ifindex;
 __u32 set_current :1,
  set_rnext :1,
  reserved :30;
 __u16 reserved2;
 __u8 prefix;
 __u8 sndid;
 __u8 rcvid;
 __u8 maclen;
 __u8 keyflags;
 __u8 keylen;
 __u8 key[80];
} __attribute__((aligned(8)));

struct tcp_ao_del {
 struct __kernel_sockaddr_storage addr;
 __s32 ifindex;
 __u32 set_current :1,
  set_rnext :1,
  del_async :1,
  reserved :29;
 __u16 reserved2;
 __u8 prefix;
 __u8 sndid;
 __u8 rcvid;
 __u8 current_key;
 __u8 rnext;
 __u8 keyflags;
} __attribute__((aligned(8)));

struct tcp_ao_info_opt {

 __u32 set_current :1,
  set_rnext :1,
  ao_required :1,
  set_counters :1,
  accept_icmps :1,
  reserved :27;
 __u16 reserved2;
 __u8 current_key;
 __u8 rnext;
 __u64 pkt_good;
 __u64 pkt_bad;
 __u64 pkt_key_not_found;
 __u64 pkt_ao_required;
 __u64 pkt_dropped_icmp;
} __attribute__((aligned(8)));

struct tcp_ao_getsockopt {
 struct __kernel_sockaddr_storage addr;


 char alg_name[64];
 __u8 key[80];
 __u32 nkeys;




 __u16 is_current :1,



  is_rnext :1,


  get_all :1,
  reserved :13;
 __u8 sndid;
 __u8 rcvid;
 __u8 prefix;
 __u8 maclen;


 __u8 keyflags;
 __u8 keylen;
 __s32 ifindex;
 __u64 pkt_good;
 __u64 pkt_bad;
} __attribute__((aligned(8)));

struct tcp_ao_repair {
 __be32 snt_isn;
 __be32 rcv_isn;
 __u32 snd_sne;
 __u32 rcv_sne;
} __attribute__((aligned(8)));




struct tcp_zerocopy_receive {
 __u64 address;
 __u32 length;
 __u32 recv_skip_hint;
 __u32 inq;
 __s32 err;
 __u64 copybuf_address;
 __s32 copybuf_len;
 __u32 flags;
 __u64 msg_control;
 __u64 msg_controllen;
 __u32 msg_flags;
 __u32 reserved;
};
# 23 "../include/linux/tcp.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct tcphdr *tcp_hdr(const struct sk_buff *skb)
{
 return (struct tcphdr *)skb_transport_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int __tcp_hdrlen(const struct tcphdr *th)
{
 return th->doff * 4;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int tcp_hdrlen(const struct sk_buff *skb)
{
 return __tcp_hdrlen(tcp_hdr(skb));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct tcphdr *inner_tcp_hdr(const struct sk_buff *skb)
{
 return (struct tcphdr *)skb_inner_transport_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int inner_tcp_hdrlen(const struct sk_buff *skb)
{
 return inner_tcp_hdr(skb)->doff * 4;
}
# 59 "../include/linux/tcp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_tcp_all_headers(const struct sk_buff *skb)
{
 return skb_transport_offset(skb) + tcp_hdrlen(skb);
}
# 74 "../include/linux/tcp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int skb_inner_tcp_all_headers(const struct sk_buff *skb)
{
 return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int tcp_optlen(const struct sk_buff *skb)
{
 return (tcp_hdr(skb)->doff - 5) * 4;
}







struct tcp_fastopen_cookie {
 __le64 val[(((16) + (sizeof(u64)) - 1) / (sizeof(u64)))];
 s8 len;
 bool exp;
};


struct tcp_sack_block_wire {
 __be32 start_seq;
 __be32 end_seq;
};

struct tcp_sack_block {
 u32 start_seq;
 u32 end_seq;
};





struct tcp_options_received {

 int ts_recent_stamp;
 u32 ts_recent;
 u32 rcv_tsval;
 u32 rcv_tsecr;
 u16 saw_tstamp : 1,
  tstamp_ok : 1,
  dsack : 1,
  wscale_ok : 1,
  sack_ok : 3,
  smc_ok : 1,
  snd_wscale : 4,
  rcv_wscale : 4;
 u8 saw_unknown:1,
  unused:7;
 u8 num_sacks;
 u16 user_mss;
 u16 mss_clamp;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tcp_clear_options(struct tcp_options_received *rx_opt)
{
 rx_opt->tstamp_ok = rx_opt->sack_ok = 0;
 rx_opt->wscale_ok = rx_opt->snd_wscale = 0;

 rx_opt->smc_ok = 0;

}







struct tcp_request_sock_ops;

struct tcp_request_sock {
 struct inet_request_sock req;
 const struct tcp_request_sock_ops *af_specific;
 u64 snt_synack;
 bool tfo_listener;
 bool is_mptcp;
 bool req_usec_ts;



 u32 txhash;
 u32 rcv_isn;
 u32 snt_isn;
 u32 ts_off;
 u32 last_oow_ack_time;
 u32 rcv_nxt;



 u8 syn_tos;





};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct tcp_request_sock *tcp_rsk(const struct request_sock *req)
{
 return (struct tcp_request_sock *)req;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool tcp_rsk_used_ao(const struct request_sock *req)
{

 return false;



}



struct tcp_sock {






 struct inet_connection_sock inet_conn;


 __u8 __cacheline_group_begin__tcp_sock_read_tx[0];

 u32 max_window;
 u32 rcv_ssthresh;
 u32 reordering;
 u32 notsent_lowat;
 u16 gso_segs;

 struct sk_buff *lost_skb_hint;
 struct sk_buff *retransmit_skb_hint;
 __u8 __cacheline_group_end__tcp_sock_read_tx[0];


 __u8 __cacheline_group_begin__tcp_sock_read_txrx[0];
 u32 tsoffset;
 u32 snd_wnd;
 u32 mss_cache;
 u32 snd_cwnd;
 u32 prr_out;
 u32 lost_out;
 u32 sacked_out;
 u16 tcp_header_len;
 u8 scaling_ratio;
 u8 chrono_type : 2,
  repair : 1,
  tcp_usec_ts : 1,
  is_sack_reneg:1,
  is_cwnd_limited:1;
 __u8 __cacheline_group_end__tcp_sock_read_txrx[0];


 __u8 __cacheline_group_begin__tcp_sock_read_rx[0];
 u32 copied_seq;
 u32 rcv_tstamp;
 u32 snd_wl1;
 u32 tlp_high_seq;
 u32 rttvar_us;
 u32 retrans_out;
 u16 advmss;
 u16 urg_data;
 u32 lost;
 struct minmax rtt_min;

 struct rb_root out_of_order_queue;
 u32 snd_ssthresh;
 u8 recvmsg_inq : 1;
 __u8 __cacheline_group_end__tcp_sock_read_rx[0];


 __u8 __cacheline_group_begin__tcp_sock_write_tx[0] __attribute__((__aligned__((1 << (5)))));
 u32 segs_out;


 u32 data_segs_out;


 u64 bytes_sent;


 u32 snd_sml;
 u32 chrono_start;
 u32 chrono_stat[3];
 u32 write_seq;
 u32 pushed_seq;
 u32 lsndtime;
 u32 mdev_us;
 u32 rtt_seq;
 u64 tcp_wstamp_ns;
 struct list_head tsorted_sent_queue;
 struct sk_buff *highest_sack;




 u8 ecn_flags;
 __u8 __cacheline_group_end__tcp_sock_write_tx[0];


 __u8 __cacheline_group_begin__tcp_sock_write_txrx[0];




 __be32 pred_flags;
 u64 tcp_clock_cache;
 u64 tcp_mstamp;
 u32 rcv_nxt;
 u32 snd_nxt;
 u32 snd_una;
 u32 window_clamp;
 u32 srtt_us;
 u32 packets_out;
 u32 snd_up;
 u32 delivered;
 u32 delivered_ce;
 u32 app_limited;
 u32 rcv_wnd;



 struct tcp_options_received rx_opt;
 u8 nonagle : 4,
  rate_app_limited:1;
 __u8 __cacheline_group_end__tcp_sock_write_txrx[0];


 __u8 __cacheline_group_begin__tcp_sock_write_rx[0] __attribute__((__aligned__(8)));
 u64 bytes_received;




 u32 segs_in;


 u32 data_segs_in;


 u32 rcv_wup;
 u32 max_packets_out;
 u32 cwnd_usage_seq;
 u32 rate_delivered;
 u32 rate_interval_us;
 u32 rcv_rtt_last_tsecr;
 u64 first_tx_mstamp;
 u64 delivered_mstamp;
 u64 bytes_acked;



 struct {
  u32 rtt_us;
  u32 seq;
  u64 time;
 } rcv_rtt_est;

 struct {
  u32 space;
  u32 seq;
  u64 time;
 } rcvq_space;
 __u8 __cacheline_group_end__tcp_sock_write_rx[0];







 u32 dsack_dups;


 u32 compressed_ack_rcv_nxt;
 struct list_head tsq_node;


 struct tcp_rack {
  u64 mstamp;
  u32 rtt_us;
  u32 end_seq;
  u32 last_delivered;
  u8 reo_wnd_steps;

  u8 reo_wnd_persist:5,
     dsack_seen:1,
     advanced:1;
 } rack;
 u8 compressed_ack;
 u8 dup_ack_counter:2,
  tlp_retrans:1,
  unused:5;
 u8 thin_lto : 1,
  fastopen_connect:1,
  fastopen_no_cookie:1,
  fastopen_client_fail:2,
  frto : 1;
 u8 repair_queue;
 u8 save_syn:2,
  syn_data:1,
  syn_fastopen:1,
  syn_fastopen_exp:1,
  syn_fastopen_ch:1,
  syn_data_acked:1;

 u8 keepalive_probes;
 u32 tcp_tx_delay;


 u32 mdev_max_us;

 u32 reord_seen;




 u32 snd_cwnd_cnt;
 u32 snd_cwnd_clamp;
 u32 snd_cwnd_used;
 u32 snd_cwnd_stamp;
 u32 prior_cwnd;
 u32 prr_delivered;

 u32 last_oow_ack_time;

 struct hrtimer pacing_timer;
 struct hrtimer compressed_ack_timer;

 struct sk_buff *ooo_last_skb;


 struct tcp_sack_block duplicate_sack[1];
 struct tcp_sack_block selective_acks[4];

 struct tcp_sack_block recv_sack_cache[4];

 int lost_cnt_hint;

 u32 prior_ssthresh;
 u32 high_seq;

 u32 retrans_stamp;


 u32 undo_marker;
 int undo_retrans;
 u64 bytes_retrans;


 u32 total_retrans;
 u32 rto_stamp;
 u16 total_rto;


 u16 total_rto_recoveries;


 u32 total_rto_time;

 u32 urg_seq;
 unsigned int keepalive_time;
 unsigned int keepalive_intvl;

 int linger2;




 u8 bpf_sock_ops_cb_flags;


 u8 bpf_chg_cc_inprogress:1;
# 463 "../include/linux/tcp.h"
 u16 timeout_rehash;

 u32 rcv_ooopack;


 struct {
  u32 probe_seq_start;
  u32 probe_seq_end;
 } mtu_probe;
 u32 plb_rehash;
 u32 mtu_info;






 bool syn_smc;
 bool (*smc_hs_congested)(const struct sock *sk);
# 498 "../include/linux/tcp.h"
 struct tcp_fastopen_request *fastopen_req;



 struct request_sock *fastopen_rsk;
 struct saved_syn *saved_syn;
};

enum tsq_enum {
 TSQ_THROTTLED,
 TSQ_QUEUED,
 TCP_TSQ_DEFERRED,
 TCP_WRITE_TIMER_DEFERRED,
 TCP_DELACK_TIMER_DEFERRED,
 TCP_MTU_REDUCED_DEFERRED,


 TCP_ACK_DEFERRED,
};

enum tsq_flags {
 TSQF_THROTTLED = ((((1UL))) << (TSQ_THROTTLED)),
 TSQF_QUEUED = ((((1UL))) << (TSQ_QUEUED)),
 TCPF_TSQ_DEFERRED = ((((1UL))) << (TCP_TSQ_DEFERRED)),
 TCPF_WRITE_TIMER_DEFERRED = ((((1UL))) << (TCP_WRITE_TIMER_DEFERRED)),
 TCPF_DELACK_TIMER_DEFERRED = ((((1UL))) << (TCP_DELACK_TIMER_DEFERRED)),
 TCPF_MTU_REDUCED_DEFERRED = ((((1UL))) << (TCP_MTU_REDUCED_DEFERRED)),
 TCPF_ACK_DEFERRED = ((((1UL))) << (TCP_ACK_DEFERRED)),
};
# 535 "../include/linux/tcp.h"
struct tcp_timewait_sock {
 struct inet_timewait_sock tw_sk;


 u32 tw_rcv_wnd;
 u32 tw_ts_offset;
 u32 tw_ts_recent;


 u32 tw_last_oow_ack_time;

 int tw_ts_recent_stamp;
 u32 tw_tx_delay;






};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct tcp_timewait_sock *tcp_twsk(const struct sock *sk)
{
 return (struct tcp_timewait_sock *)sk;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool tcp_passive_fastopen(const struct sock *sk)
{
 return sk->__sk_common.skc_state == TCP_SYN_RECV &&
        ({ typeof(*(_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) *__UNIQUE_ID_rcu507 = (typeof(*(_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_508(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); 
_Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) == sizeof(char) || sizeof((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) == sizeof(short) || sizeof((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), 
default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) == sizeof(int) || sizeof((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) == sizeof(long)) || sizeof((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - 
__builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) == sizeof(long long))) __compiletime_assert_508(); } while (0); (*(const volatile typeof( _Generic(((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), 
typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)))) *)&((_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk))); }); ; ((typeof(*(_Generic(sk, const typeof(*(sk)) *: ((const struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })), default: ((struct tcp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct tcp_sock *)0)->inet_conn.icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct tcp_sock *)(__mptr - __builtin_offsetof(struct tcp_sock, inet_conn.icsk_inet.sk))); })) )->fastopen_rsk)) *)(__UNIQUE_ID_rcu507)); }) != ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fastopen_queue_tune(struct sock *sk, int backlog)
{
 struct request_sock_queue *queue = &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })), default: ((struct inet_connection_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_connection_sock *)0)->icsk_inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_connection_sock *)(__mptr - __builtin_offsetof(struct inet_connection_sock, icsk_inet.sk))); })) )->icsk_accept_queue;
 int somaxconn = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_509(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sock_net(sk)->core.sysctl_somaxconn) == sizeof(char) || sizeof(sock_net(sk)->core.sysctl_somaxconn) == sizeof(short) || sizeof(sock_net(sk)->core.sysctl_somaxconn) == sizeof(int) || sizeof(sock_net(sk)->core.sysctl_somaxconn) == sizeof(long)) || sizeof(sock_net(sk)->core.sysctl_somaxconn) == sizeof(long long))) __compiletime_assert_509(); } while (0); (*(const volatile typeof( _Generic((sock_net(sk)->core.sysctl_somaxconn), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sock_net(sk)->core.sysctl_somaxconn))) *)&(sock_net(sk)->core.sysctl_somaxconn)); });

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_512(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(queue->fastopenq.max_qlen) == sizeof(char) || sizeof(queue->fastopenq.max_qlen) == sizeof(short) || sizeof(queue->fastopenq.max_qlen) == sizeof(int) || sizeof(queue->fastopenq.max_qlen) == sizeof(long)) || sizeof(queue->fastopenq.max_qlen) == sizeof(long long))) __compiletime_assert_512(); } while (0); do { *(volatile typeof(queue->fastopenq.max_qlen) *)&(queue->fastopenq.max_qlen) = (({ unsigned int __UNIQUE_ID_x_510 = (backlog); unsigned int __UNIQUE_ID_y_511 = (somaxconn); ((__UNIQUE_ID_x_510) < (__UNIQUE_ID_y_511) ? (__UNIQUE_ID_x_510) : (__UNIQUE_ID_y_511)); })); } while (0); } while (0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tcp_move_syn(struct tcp_sock *tp,
    struct request_sock *req)
{
 tp->saved_syn = req->saved_syn;
 req->saved_syn = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void tcp_saved_syn_free(struct tcp_sock *tp)
{
 kfree(tp->saved_syn);
 tp->saved_syn = ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 tcp_saved_syn_len(const struct saved_syn *saved_syn)
{
 return saved_syn->mac_hdrlen + saved_syn->network_hdrlen +
  saved_syn->tcp_hdrlen;
}

struct sk_buff *tcp_get_timestamping_opt_stats(const struct sock *sk,
            const struct sk_buff *orig_skb,
            const struct sk_buff *ack_skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 tcp_mss_clamp(const struct tcp_sock *tp, u16 mss)
{



 u16 user_mss = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_513(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(tp->rx_opt.user_mss) == sizeof(char) || sizeof(tp->rx_opt.user_mss) == sizeof(short) || sizeof(tp->rx_opt.user_mss) == sizeof(int) || sizeof(tp->rx_opt.user_mss) == sizeof(long)) || sizeof(tp->rx_opt.user_mss) == sizeof(long long))) __compiletime_assert_513(); } while (0); (*(const volatile typeof( _Generic((tp->rx_opt.user_mss), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (tp->rx_opt.user_mss))) *)&(tp->rx_opt.user_mss)); });

 return (user_mss && user_mss < mss) ? user_mss : mss;
}

int tcp_skb_shift(struct sk_buff *to, struct sk_buff *from, int pcount,
    int shiftlen);

void __tcp_sock_set_cork(struct sock *sk, bool on);
void tcp_sock_set_cork(struct sock *sk, bool on);
int tcp_sock_set_keepcnt(struct sock *sk, int val);
int tcp_sock_set_keepidle_locked(struct sock *sk, int val);
int tcp_sock_set_keepidle(struct sock *sk, int val);
int tcp_sock_set_keepintvl(struct sock *sk, int val);
void __tcp_sock_set_nodelay(struct sock *sk, bool on);
void tcp_sock_set_nodelay(struct sock *sk);
void tcp_sock_set_quickack(struct sock *sk, int val);
int tcp_sock_set_syncnt(struct sock *sk, int val);
int tcp_sock_set_user_timeout(struct sock *sk, int val);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dst_tcp_usec_ts(const struct dst_entry *dst)
{
 return dst_feature(dst, (1 << 4));
}
# 102 "../include/linux/ipv6.h" 2
# 1 "../include/linux/udp.h" 1
# 19 "../include/linux/udp.h"
# 1 "../include/uapi/linux/udp.h" 1
# 23 "../include/uapi/linux/udp.h"
struct udphdr {
 __be16 source;
 __be16 dest;
 __be16 len;
 __sum16 check;
};
# 20 "../include/linux/udp.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct udphdr *udp_hdr(const struct sk_buff *skb)
{
 return (struct udphdr *)skb_transport_header(skb);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 udp_hashfn(const struct net *net, u32 num, u32 mask)
{
 return (num + net_hash_mix(net)) & mask;
}

enum {
 UDP_FLAGS_CORK,
 UDP_FLAGS_NO_CHECK6_TX,
 UDP_FLAGS_NO_CHECK6_RX,
 UDP_FLAGS_GRO_ENABLED,
 UDP_FLAGS_ACCEPT_FRAGLIST,
 UDP_FLAGS_ACCEPT_L4,
 UDP_FLAGS_ENCAP_ENABLED,
 UDP_FLAGS_UDPLITE_SEND_CC,
 UDP_FLAGS_UDPLITE_RECV_CC,
};

struct udp_sock {

 struct inet_sock inet;




 unsigned long udp_flags;

 int pending;
 __u8 encap_type;





 __u16 len;
 __u16 gso_size;



 __u16 pcslen;
 __u16 pcrlen;



 int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
 void (*encap_err_rcv)(struct sock *sk, struct sk_buff *skb, int err,
         __be16 port, u32 info, u8 *payload);
 int (*encap_err_lookup)(struct sock *sk, struct sk_buff *skb);
 void (*encap_destroy)(struct sock *sk);


 struct sk_buff * (*gro_receive)(struct sock *sk,
            struct list_head *head,
            struct sk_buff *skb);
 int (*gro_complete)(struct sock *sk,
      struct sk_buff *skb,
      int nhoff);


 struct sk_buff_head reader_queue;


 int forward_deficit;


 int forward_threshold;


 bool peeking_with_offset;
};
# 115 "../include/linux/udp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int udp_set_peek_off(struct sock *sk, int val)
{
 sk_set_peek_off(sk, val);
 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_514(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), 
"pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - 
__builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) == sizeof(long long))) __compiletime_assert_514(); } while (0); do { *(volatile typeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) *)&(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->peeking_with_offset) = (val >= 0); } while (0); } while (0);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void udp_set_no_check6_tx(struct sock *sk, bool val)
{
 ((val) ? set_bit((UDP_FLAGS_NO_CHECK6_TX), (&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags)) : clear_bit((UDP_FLAGS_NO_CHECK6_TX), (&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void udp_set_no_check6_rx(struct sock *sk, bool val)
{
 ((val) ? set_bit((UDP_FLAGS_NO_CHECK6_RX), (&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags)) : clear_bit((UDP_FLAGS_NO_CHECK6_RX), (&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool udp_get_no_check6_tx(const struct sock *sk)
{
 return ((__builtin_constant_p(UDP_FLAGS_NO_CHECK6_TX) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags))) ? const_test_bit(UDP_FLAGS_NO_CHECK6_TX, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) : arch_test_bit(UDP_FLAGS_NO_CHECK6_TX, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - 
__builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool udp_get_no_check6_rx(const struct sock *sk)
{
 return ((__builtin_constant_p(UDP_FLAGS_NO_CHECK6_RX) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags))) ? const_test_bit(UDP_FLAGS_NO_CHECK6_RX, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) : arch_test_bit(UDP_FLAGS_NO_CHECK6_RX, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - 
__builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
     struct sk_buff *skb)
{
 int gso_size;

 if (((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_type & SKB_GSO_UDP_L4) {
  gso_size = ((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_size;
  put_cmsg(msg, 17, 104, sizeof(gso_size), &gso_size);
 }
}

extern struct static_key_false udp_encap_needed_key;

extern struct static_key_false udpv6_encap_needed_key;


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool udp_encap_needed(void)
{
 if (__builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&udp_encap_needed_key)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&udp_encap_needed_key)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&udp_encap_needed_key)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&udp_encap_needed_key)->key) > 0; })), 0))
  return true;


 if (__builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&udpv6_encap_needed_key)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&udpv6_encap_needed_key)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&udpv6_encap_needed_key)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&udpv6_encap_needed_key)->key) > 0; })), 0))
  return true;


 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
{
 if (!skb_is_gso(skb))
  return false;

 if (((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_type & SKB_GSO_UDP_L4 &&
     !((__builtin_constant_p(UDP_FLAGS_ACCEPT_L4) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); 
((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags))) ? const_test_bit(UDP_FLAGS_ACCEPT_L4, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) : arch_test_bit(UDP_FLAGS_ACCEPT_L4, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); 
})) )->udp_flags)))
  return true;

 if (((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_type & SKB_GSO_FRAGLIST &&
     !((__builtin_constant_p(UDP_FLAGS_ACCEPT_FRAGLIST) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags))) ? const_test_bit(UDP_FLAGS_ACCEPT_FRAGLIST, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags) : arch_test_bit(UDP_FLAGS_ACCEPT_FRAGLIST, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - 
__builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags)))
  return true;





 if (udp_encap_needed() &&
     ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_515(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type 
mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, 
inet.sk))); })) )->encap_rcv) == sizeof(long long))) __compiletime_assert_515(); } while (0); (*(const volatile typeof( _Generic((_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv))) *)&(_Generic(sk, 
const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->encap_rcv)); }) &&
     !(((struct skb_shared_info *)(skb_end_pointer(skb)))->gso_type &
       (SKB_GSO_UDP_TUNNEL | SKB_GSO_UDP_TUNNEL_CSUM)))
  return true;

 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void udp_allow_gso(struct sock *sk)
{
 set_bit(UDP_FLAGS_ACCEPT_L4, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags);
 set_bit(UDP_FLAGS_ACCEPT_FRAGLIST, &_Generic(sk, const typeof(*(sk)) *: ((const struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })), default: ((struct udp_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct udp_sock *)0)->inet.sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct udp_sock *)(__mptr - __builtin_offsetof(struct udp_sock, inet.sk))); })) )->udp_flags);
}
# 103 "../include/linux/ipv6.h" 2



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6hdr *ipv6_hdr(const struct sk_buff *skb)
{
 return (struct ipv6hdr *)skb_network_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6hdr *inner_ipv6_hdr(const struct sk_buff *skb)
{
 return (struct ipv6hdr *)skb_inner_network_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6hdr *ipipv6_hdr(const struct sk_buff *skb)
{
 return (struct ipv6hdr *)skb_transport_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int ipv6_transport_len(const struct sk_buff *skb)
{
 return (__u16)(__builtin_constant_p(( __u16)(__be16)(ipv6_hdr(skb)->payload_len)) ? ((__u16)( (((__u16)(( __u16)(__be16)(ipv6_hdr(skb)->payload_len)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(ipv6_hdr(skb)->payload_len)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(ipv6_hdr(skb)->payload_len))) + sizeof(struct ipv6hdr) -
        skb_network_header_len(skb);
}






struct inet6_skb_parm {
 int iif;
 __be16 ra;
 __u16 dst0;
 __u16 srcrt;
 __u16 dst1;
 __u16 lastopt;
 __u16 nhoff;
 __u16 flags;

 __u16 dsthao;

 __u16 frag_max_size;
 __u16 srhoff;
# 158 "../include/linux/ipv6.h"
};







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_l3mdev_skb(__u16 flags)
{
 return false;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet6_iif(const struct sk_buff *skb)
{
 bool l3_slave = ipv6_l3mdev_skb(((struct inet6_skb_parm*)((skb)->cb))->flags);

 return l3_slave ? skb->skb_iif : ((struct inet6_skb_parm*)((skb)->cb))->iif;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet6_is_jumbogram(const struct sk_buff *skb)
{
 return !!(((struct inet6_skb_parm*)((skb)->cb))->flags & 128);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet6_sdif(const struct sk_buff *skb)
{




 return 0;
}

struct tcp6_request_sock {
 struct tcp_request_sock tcp6rsk_tcp;
};

struct ipv6_mc_socklist;
struct ipv6_ac_socklist;
struct ipv6_fl_socklist;

struct inet6_cork {
 struct ipv6_txoptions *opt;
 u8 hop_limit;
 u8 tclass;
};


struct ipv6_pinfo {
 struct in6_addr saddr;
 struct in6_pktinfo sticky_pktinfo;
 const struct in6_addr *daddr_cache;




 __be32 flow_label;
 __u32 frag_size;

 s16 hop_limit;
 u8 mcast_hops;

 int ucast_oif;
 int mcast_oif;


 union {
  struct {
   __u16 srcrt:1,
    osrcrt:1,
           rxinfo:1,
           rxoinfo:1,
    rxhlim:1,
    rxohlim:1,
    hopopts:1,
    ohopopts:1,
    dstopts:1,
    odstopts:1,
                                rxflow:1,
    rxtclass:1,
    rxpmtu:1,
    rxorigdstaddr:1,
    recvfragsize:1;

  } bits;
  __u16 all;
 } rxopt;


 __u8 srcprefs;



 __u8 pmtudisc;
 __u8 min_hopcount;
 __u8 tclass;
 __be32 rcv_flowinfo;

 __u32 dst_cookie;

 struct ipv6_mc_socklist *ipv6_mc_list;
 struct ipv6_ac_socklist *ipv6_ac_list;
 struct ipv6_fl_socklist *ipv6_fl_list;

 struct ipv6_txoptions *opt;
 struct sk_buff *pktoptions;
 struct sk_buff *rxpmtu;
 struct inet6_cork cork;
};
# 287 "../include/linux/ipv6.h"
struct raw6_sock {

 struct inet_sock inet;
 __u32 checksum;
 __u32 offset;
 struct icmp6_filter filter;
 __u32 ip6mr_table;

 struct ipv6_pinfo inet6;
};

struct udp6_sock {
 struct udp_sock udp;

 struct ipv6_pinfo inet6;
};

struct tcp6_sock {
 struct tcp_sock tcp;

 struct ipv6_pinfo inet6;
};

extern int inet6_sk_rebuild_header(struct sock *sk);

struct tcp6_timewait_sock {
 struct tcp_timewait_sock tcp6tw_tcp;
};


bool ipv6_mod_enabled(void);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6_pinfo *inet6_sk(const struct sock *__sk)
{
 return sk_fullsock(__sk) ? _Generic(__sk, const typeof(*(__sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(__sk); _Static_assert(__builtin_types_compatible_p(typeof(*(__sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(__sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(__sk); _Static_assert(__builtin_types_compatible_p(typeof(*(__sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(__sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pinet6 : ((void *)0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct in6_addr *inet6_rcv_saddr(const struct sock *sk)
{
 if (sk->__sk_common.skc_family == 10)
  return &sk->__sk_common.skc_v6_rcv_saddr;
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet_v6_ipv6only(const struct sock *sk)
{

 return (sk->__sk_common.skc_ipv6only);
}
# 13 "../include/net/ipv6.h" 2



# 1 "../include/linux/jump_label_ratelimit.h" 1
# 64 "../include/linux/jump_label_ratelimit.h"
struct static_key_deferred {
 struct static_key key;
};
struct static_key_true_deferred {
 struct static_key_true key;
};
struct static_key_false_deferred {
 struct static_key_false key;
};







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void static_key_slow_dec_deferred(struct static_key_deferred *key)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label_ratelimit.h", 82, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 static_key_slow_dec(&key->key);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void static_key_deferred_flush(void *key)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label_ratelimit.h", 87, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
jump_label_rate_limit(struct static_key_deferred *key,
  unsigned long rl)
{
 ({ int __ret_warn_on = !!(!static_key_initialized); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/linux/jump_label_ratelimit.h", 93, 9, "%s(): static key '%pS' used before call to jump_label_init()", __func__, (key)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
}
# 17 "../include/net/ipv6.h" 2
# 1 "../include/net/if_inet6.h" 1
# 25 "../include/net/if_inet6.h"
enum {
 INET6_IFADDR_STATE_PREDAD,
 INET6_IFADDR_STATE_DAD,
 INET6_IFADDR_STATE_POSTDAD,
 INET6_IFADDR_STATE_ERRDAD,
 INET6_IFADDR_STATE_DEAD,
};

struct inet6_ifaddr {
 struct in6_addr addr;
 __u32 prefix_len;
 __u32 rt_priority;


 __u32 valid_lft;
 __u32 prefered_lft;
 refcount_t refcnt;
 spinlock_t lock;

 int state;

 __u32 flags;
 __u8 dad_probes;
 __u8 stable_privacy_retry;

 __u16 scope;
 __u64 dad_nonce;

 unsigned long cstamp;
 unsigned long tstamp;

 struct delayed_work dad_work;

 struct inet6_dev *idev;
 struct fib6_info *rt;

 struct hlist_node addr_lst;
 struct list_head if_list;







 struct list_head if_list_aux;

 struct list_head tmp_list;
 struct inet6_ifaddr *ifpub;
 int regen_count;

 bool tokenized;

 u8 ifa_proto;

 struct callback_head rcu;
 struct in6_addr peer_addr;
};

struct ip6_sf_socklist {
 unsigned int sl_max;
 unsigned int sl_count;
 struct callback_head rcu;
 struct in6_addr sl_addr[] __attribute__((__counted_by__(sl_max)));
};



struct ipv6_mc_socklist {
 struct in6_addr addr;
 int ifindex;
 unsigned int sfmode;
 struct ipv6_mc_socklist *next;
 struct ip6_sf_socklist *sflist;
 struct callback_head rcu;
};

struct ip6_sf_list {
 struct ip6_sf_list *sf_next;
 struct in6_addr sf_addr;
 unsigned long sf_count[2];
 unsigned char sf_gsresp;
 unsigned char sf_oldin;
 unsigned char sf_crcount;
 struct callback_head rcu;
};







struct ifmcaddr6 {
 struct in6_addr mca_addr;
 struct inet6_dev *idev;
 struct ifmcaddr6 *next;
 struct ip6_sf_list *mca_sources;
 struct ip6_sf_list *mca_tomb;
 unsigned int mca_sfmode;
 unsigned char mca_crcount;
 unsigned long mca_sfcount[2];
 struct delayed_work mca_work;
 unsigned int mca_flags;
 int mca_users;
 refcount_t mca_refcnt;
 unsigned long mca_cstamp;
 unsigned long mca_tstamp;
 struct callback_head rcu;
};



struct ipv6_ac_socklist {
 struct in6_addr acl_addr;
 int acl_ifindex;
 struct ipv6_ac_socklist *acl_next;
};

struct ifacaddr6 {
 struct in6_addr aca_addr;
 struct fib6_info *aca_rt;
 struct ifacaddr6 *aca_next;
 struct hlist_node aca_addr_lst;
 int aca_users;
 refcount_t aca_refcnt;
 unsigned long aca_cstamp;
 unsigned long aca_tstamp;
 struct callback_head rcu;
};





struct ipv6_devstat {
 struct proc_dir_entry *proc_dir_entry;
 __typeof__(struct ipstats_mib) *ipv6;
 __typeof__(struct icmpv6_mib_device) *icmpv6dev;
 __typeof__(struct icmpv6msg_mib_device) *icmpv6msgdev;
};

struct inet6_dev {
 struct net_device *dev;
 netdevice_tracker dev_tracker;

 struct list_head addr_list;

 struct ifmcaddr6 *mc_list;
 struct ifmcaddr6 *mc_tomb;

 unsigned char mc_qrv;
 unsigned char mc_gq_running;
 unsigned char mc_ifc_count;
 unsigned char mc_dad_count;

 unsigned long mc_v1_seen;
 unsigned long mc_qi;
 unsigned long mc_qri;
 unsigned long mc_maxdelay;

 struct delayed_work mc_gq_work;
 struct delayed_work mc_ifc_work;
 struct delayed_work mc_dad_work;
 struct delayed_work mc_query_work;
 struct delayed_work mc_report_work;

 struct sk_buff_head mc_query_queue;
 struct sk_buff_head mc_report_queue;

 spinlock_t mc_query_lock;
 spinlock_t mc_report_lock;
 struct mutex mc_lock;

 struct ifacaddr6 *ac_list;
 rwlock_t lock;
 refcount_t refcnt;
 __u32 if_flags;
 int dead;

 u32 desync_factor;
 struct list_head tempaddr_list;

 struct in6_addr token;

 struct neigh_parms *nd_parms;
 struct ipv6_devconf cnf;
 struct ipv6_devstat stats;

 struct timer_list rs_timer;
 __s32 rs_interval;
 __u8 rs_probes;

 unsigned long tstamp;
 struct callback_head rcu;

 unsigned int ra_mtu;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_eth_mc_map(const struct in6_addr *addr, char *buf)
{






 buf[0]= 0x33;
 buf[1]= 0x33;

 memcpy(buf + 2, &addr->in6_u.u6_addr32[3], sizeof(__u32));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_arcnet_mc_map(const struct in6_addr *addr, char *buf)
{
 buf[0] = 0x00;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_ib_mc_map(const struct in6_addr *addr,
      const unsigned char *broadcast, char *buf)
{
 unsigned char scope = broadcast[5] & 0xF;

 buf[0] = 0;
 buf[1] = 0xff;
 buf[2] = 0xff;
 buf[3] = 0xff;
 buf[4] = 0xff;
 buf[5] = 0x10 | scope;
 buf[6] = 0x60;
 buf[7] = 0x1b;
 buf[8] = broadcast[8];
 buf[9] = broadcast[9];
 memcpy(buf + 10, addr->in6_u.u6_addr8 + 6, 10);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_ipgre_mc_map(const struct in6_addr *addr,
        const unsigned char *broadcast, char *buf)
{
 if ((broadcast[0] | broadcast[1] | broadcast[2] | broadcast[3]) != 0) {
  memcpy(buf, broadcast, 4);
 } else {

  if ((addr->in6_u.u6_addr32[0] | addr->in6_u.u6_addr32[1] |
       (addr->in6_u.u6_addr32[2] ^ (( __be32)(__u32)(__builtin_constant_p((0x0000ffff)) ? ((__u32)( (((__u32)((0x0000ffff)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0000ffff)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0000ffff)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0000ffff)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0000ffff)))))) != 0)
   return -22;
  memcpy(buf, &addr->in6_u.u6_addr32[3], 4);
 }
 return 0;
}
# 18 "../include/net/ipv6.h" 2


# 1 "../include/net/inet_dscp.h" 1
# 38 "../include/net/inet_dscp.h"
typedef u8 dscp_t;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dscp_t inet_dsfield_to_dscp(__u8 dsfield)
{
 return ( dscp_t)(dsfield & 0xfc);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 inet_dscp_to_dsfield(dscp_t dscp)
{
 return ( __u8)dscp;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inet_validate_dscp(__u8 val)
{
 return !(val & ~0xfc);
}
# 21 "../include/net/ipv6.h" 2



struct ip_tunnel_info;
# 147 "../include/net/ipv6.h"
struct frag_hdr {
 __u8 nexthdr;
 __u8 reserved;
 __be16 frag_off;
 __be32 identification;
};




struct hop_jumbo_hdr {
 u8 nexthdr;
 u8 hdrlen;
 u8 tlv_type;
 u8 tlv_len;
 __be32 jumbo_payload_len;
};




struct ip6_fraglist_iter {
 struct ipv6hdr *tmp_hdr;
 struct sk_buff *frag;
 int offset;
 unsigned int hlen;
 __be32 frag_id;
 u8 nexthdr;
};

int ip6_fraglist_init(struct sk_buff *skb, unsigned int hlen, u8 *prevhdr,
        u8 nexthdr, __be32 frag_id,
        struct ip6_fraglist_iter *iter);
void ip6_fraglist_prepare(struct sk_buff *skb, struct ip6_fraglist_iter *iter);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *ip6_fraglist_next(struct ip6_fraglist_iter *iter)
{
 struct sk_buff *skb = iter->frag;

 iter->frag = skb->next;
 skb_mark_not_on_list(skb);

 return skb;
}

struct ip6_frag_state {
 u8 *prevhdr;
 unsigned int hlen;
 unsigned int mtu;
 unsigned int left;
 int offset;
 int ptr;
 int hroom;
 int troom;
 __be32 frag_id;
 u8 nexthdr;
};

void ip6_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int mtu,
     unsigned short needed_tailroom, int hdr_room, u8 *prevhdr,
     u8 nexthdr, __be32 frag_id, struct ip6_frag_state *state);
struct sk_buff *ip6_frag_next(struct sk_buff *skb,
         struct ip6_frag_state *state);







extern int sysctl_mld_max_msf;
extern int sysctl_mld_qrv;
# 286 "../include/net/ipv6.h"
struct ip6_ra_chain {
 struct ip6_ra_chain *next;
 struct sock *sk;
 int sel;
 void (*destructor)(struct sock *);
};

extern struct ip6_ra_chain *ip6_ra_chain;
extern rwlock_t ip6_ra_lock;






struct ipv6_txoptions {
 refcount_t refcnt;

 int tot_len;



 __u16 opt_flen;
 __u16 opt_nflen;

 struct ipv6_opt_hdr *hopopt;
 struct ipv6_opt_hdr *dst0opt;
 struct ipv6_rt_hdr *srcrt;
 struct ipv6_opt_hdr *dst1opt;
 struct callback_head rcu;

};


enum flowlabel_reflect {
 FLOWLABEL_REFLECT_ESTABLISHED = 1,
 FLOWLABEL_REFLECT_TCP_RESET = 2,
 FLOWLABEL_REFLECT_ICMPV6_ECHO_REPLIES = 4,
};

struct ip6_flowlabel {
 struct ip6_flowlabel *next;
 __be32 label;
 atomic_t users;
 struct in6_addr dst;
 struct ipv6_txoptions *opt;
 unsigned long linger;
 struct callback_head rcu;
 u8 share;
 union {
  struct pid *pid;
  kuid_t uid;
 } owner;
 unsigned long lastuse;
 unsigned long expires;
 struct net *fl_net;
};
# 351 "../include/net/ipv6.h"
struct ipv6_fl_socklist {
 struct ipv6_fl_socklist *next;
 struct ip6_flowlabel *fl;
 struct callback_head rcu;
};

struct ipcm6_cookie {
 struct sockcm_cookie sockc;
 __s16 hlimit;
 __s16 tclass;
 __u16 gso_size;
 __s8 dontfrag;
 struct ipv6_txoptions *opt;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipcm6_init(struct ipcm6_cookie *ipc6)
{
 *ipc6 = (struct ipcm6_cookie) {
  .hlimit = -1,
  .tclass = -1,
  .dontfrag = -1,
 };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipcm6_init_sk(struct ipcm6_cookie *ipc6,
     const struct sock *sk)
{
 *ipc6 = (struct ipcm6_cookie) {
  .hlimit = -1,
  .tclass = inet6_sk(sk)->tclass,
  .dontfrag = ((__builtin_constant_p(INET_FLAGS_DONTFRAG) && __builtin_constant_p((uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock 
*)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags))) ? const_test_bit(INET_FLAGS_DONTFRAG, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags) : arch_test_bit(INET_FLAGS_DONTFRAG, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags)),
 };
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np)
{
 struct ipv6_txoptions *opt;

 rcu_read_lock();
 opt = ({ typeof(*(np->opt)) *__UNIQUE_ID_rcu516 = (typeof(*(np->opt)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_517(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((np->opt)) == sizeof(char) || sizeof((np->opt)) == sizeof(short) || sizeof((np->opt)) == sizeof(int) || sizeof((np->opt)) == sizeof(long)) || sizeof((np->opt)) == sizeof(long long))) __compiletime_assert_517(); } while (0); (*(const volatile typeof( _Generic(((np->opt)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((np->opt)))) *)&((np->opt))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/ipv6.h", 390, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(np->opt)) *)(__UNIQUE_ID_rcu516)); });
 if (opt) {
  if (!refcount_inc_not_zero(&opt->refcnt))
   opt = ((void *)0);
  else
   opt = (opt);
 }
 rcu_read_unlock();
 return opt;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void txopt_put(struct ipv6_txoptions *opt)
{
 if (opt && refcount_dec_and_test(&opt->refcnt))
  do { typeof (opt) ___p = (opt); if (___p) { do { __attribute__((__noreturn__)) extern void __compiletime_assert_518(void) __attribute__((__error__("BUILD_BUG_ON failed: " "!__is_kvfree_rcu_offset(offsetof(typeof(*(opt)), rcu))"))); if (!(!(!((__builtin_offsetof(typeof(*(opt)), rcu)) < 4096)))) __compiletime_assert_518(); } while (0); kvfree_call_rcu(&((___p)->rcu), (void *) (___p)); } } while (0);
}


struct ip6_flowlabel *__fl6_sock_lookup(struct sock *sk, __be32 label);

extern struct static_key_false_deferred ipv6_flowlabel_exclusive;
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk,
          __be32 label)
{
 if (__builtin_expect(!!(({ if (!__builtin_types_compatible_p(typeof(*&(&ipv6_flowlabel_exclusive.key)->key), struct static_key) && !__builtin_types_compatible_p(typeof(*&(&ipv6_flowlabel_exclusive.key)->key), struct static_key_true) && !__builtin_types_compatible_p(typeof(*&(&ipv6_flowlabel_exclusive.key)->key), struct static_key_false)) ____wrong_branch_error(); static_key_count((struct static_key *)&(&ipv6_flowlabel_exclusive.key)->key) > 0; })), 0) &&
     ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_519(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(sock_net(sk)->ipv6.flowlabel_has_excl) == sizeof(char) || sizeof(sock_net(sk)->ipv6.flowlabel_has_excl) == sizeof(short) || sizeof(sock_net(sk)->ipv6.flowlabel_has_excl) == sizeof(int) || sizeof(sock_net(sk)->ipv6.flowlabel_has_excl) == sizeof(long)) || sizeof(sock_net(sk)->ipv6.flowlabel_has_excl) == sizeof(long long))) __compiletime_assert_519(); } while (0); (*(const volatile typeof( _Generic((sock_net(sk)->ipv6.flowlabel_has_excl), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (sock_net(sk)->ipv6.flowlabel_has_excl))) *)&(sock_net(sk)->ipv6.flowlabel_has_excl)); }))
  return __fl6_sock_lookup(sk, label) ? : ERR_PTR(-2);

 return ((void *)0);
}


struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space,
      struct ip6_flowlabel *fl,
      struct ipv6_txoptions *fopt);
void fl6_free_socklist(struct sock *sk);
int ipv6_flowlabel_opt(struct sock *sk, sockptr_t optval, int optlen);
int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
      int flags);
int ip6_flowlabel_init(void);
void ip6_flowlabel_cleanup(void);
bool ip6_autoflowlabel(struct net *net, const struct sock *sk);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fl6_sock_release(struct ip6_flowlabel *fl)
{
 if (fl)
  atomic_dec(&fl->users);
}

enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
       u8 code, __be32 info);

void icmpv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
    struct icmp6hdr *thdr, int len);

int ip6_ra_control(struct sock *sk, int sel);

int ipv6_parse_hopopts(struct sk_buff *skb);

struct ipv6_txoptions *ipv6_dup_options(struct sock *sk,
     struct ipv6_txoptions *opt);
struct ipv6_txoptions *ipv6_renew_options(struct sock *sk,
       struct ipv6_txoptions *opt,
       int newtype,
       struct ipv6_opt_hdr *newopt);
struct ipv6_txoptions *__ipv6_fixup_options(struct ipv6_txoptions *opt_space,
         struct ipv6_txoptions *opt);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ipv6_txoptions *
ipv6_fixup_options(struct ipv6_txoptions *opt_space, struct ipv6_txoptions *opt)
{
 if (!opt)
  return ((void *)0);
 return __ipv6_fixup_options(opt_space, opt);
}

bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb,
         const struct inet6_skb_parm *opt);
struct ipv6_txoptions *ipv6_update_options(struct sock *sk,
        struct ipv6_txoptions *opt);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_has_hopopt_jumbo(const struct sk_buff *skb)
{
 const struct hop_jumbo_hdr *jhdr;
 const struct ipv6hdr *nhdr;

 if (__builtin_expect(!!(skb->len <= 65536u), 1))
  return 0;

 if (skb->protocol != (( __be16)(__u16)(__builtin_constant_p((0x86DD)) ? ((__u16)( (((__u16)((0x86DD)) & (__u16)0x00ffU) << 8) | (((__u16)((0x86DD)) & (__u16)0xff00U) >> 8))) : __fswab16((0x86DD)))))
  return 0;

 if (skb_network_offset(skb) +
     sizeof(struct ipv6hdr) +
     sizeof(struct hop_jumbo_hdr) > skb_headlen(skb))
  return 0;

 nhdr = ipv6_hdr(skb);

 if (nhdr->nexthdr != 0)
  return 0;

 jhdr = (const struct hop_jumbo_hdr *) (nhdr + 1);
 if (jhdr->tlv_type != 194 || jhdr->hdrlen != 0 ||
     jhdr->nexthdr != IPPROTO_TCP)
  return 0;
 return jhdr->nexthdr;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_hopopt_jumbo_remove(struct sk_buff *skb)
{
 const int hophdr_len = sizeof(struct hop_jumbo_hdr);
 int nexthdr = ipv6_has_hopopt_jumbo(skb);
 struct ipv6hdr *h6;

 if (!nexthdr)
  return 0;

 if (skb_cow_head(skb, 0))
  return -1;




 memmove(skb_mac_header(skb) + hophdr_len, skb_mac_header(skb),
  skb_network_header(skb) - skb_mac_header(skb) +
  sizeof(struct ipv6hdr));

 __skb_pull(skb, hophdr_len);
 skb->network_header += hophdr_len;
 skb->mac_header += hophdr_len;

 h6 = ipv6_hdr(skb);
 h6->nexthdr = nexthdr;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_accept_ra(const struct inet6_dev *idev)
{
 s32 accept_ra = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_520(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(idev->cnf.accept_ra) == sizeof(char) || sizeof(idev->cnf.accept_ra) == sizeof(short) || sizeof(idev->cnf.accept_ra) == sizeof(int) || sizeof(idev->cnf.accept_ra) == sizeof(long)) || sizeof(idev->cnf.accept_ra) == sizeof(long long))) __compiletime_assert_520(); } while (0); (*(const volatile typeof( _Generic((idev->cnf.accept_ra), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (idev->cnf.accept_ra))) *)&(idev->cnf.accept_ra)); });




 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_521(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(idev->cnf.forwarding) == sizeof(char) || sizeof(idev->cnf.forwarding) == sizeof(short) || sizeof(idev->cnf.forwarding) == sizeof(int) || sizeof(idev->cnf.forwarding) == sizeof(long)) || sizeof(idev->cnf.forwarding) == sizeof(long long))) __compiletime_assert_521(); } while (0); (*(const volatile typeof( _Generic((idev->cnf.forwarding), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (idev->cnf.forwarding))) *)&(idev->cnf.forwarding)); }) ? accept_ra == 2 :
  accept_ra;
}





int __ipv6_addr_type(const struct in6_addr *addr);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_addr_type(const struct in6_addr *addr)
{
 return __ipv6_addr_type(addr) & 0xffff;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_addr_scope(const struct in6_addr *addr)
{
 return __ipv6_addr_type(addr) & 0x00f0U;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __ipv6_addr_src_scope(int type)
{
 return (type == 0x0000U) ? -1 : (type >> 16);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_addr_src_scope(const struct in6_addr *addr)
{
 return __ipv6_addr_src_scope(__ipv6_addr_type(addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __ipv6_addr_needs_scope_id(int type)
{
 return type & 0x0020U ||
        (type & 0x0002U &&
  (type & (0x0010U|0x0020U)));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u32 ipv6_iface_scope_id(const struct in6_addr *addr, int iface)
{
 return __ipv6_addr_needs_scope_id(__ipv6_addr_type(addr)) ? iface : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_addr_cmp(const struct in6_addr *a1, const struct in6_addr *a2)
{
 return memcmp(a1, a2, sizeof(struct in6_addr));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
ipv6_masked_addr_cmp(const struct in6_addr *a1, const struct in6_addr *m,
       const struct in6_addr *a2)
{
# 602 "../include/net/ipv6.h"
 return !!(((a1->in6_u.u6_addr32[0] ^ a2->in6_u.u6_addr32[0]) & m->in6_u.u6_addr32[0]) |
    ((a1->in6_u.u6_addr32[1] ^ a2->in6_u.u6_addr32[1]) & m->in6_u.u6_addr32[1]) |
    ((a1->in6_u.u6_addr32[2] ^ a2->in6_u.u6_addr32[2]) & m->in6_u.u6_addr32[2]) |
    ((a1->in6_u.u6_addr32[3] ^ a2->in6_u.u6_addr32[3]) & m->in6_u.u6_addr32[3]));

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_addr_prefix(struct in6_addr *pfx,
        const struct in6_addr *addr,
        int plen)
{

 int o = plen >> 3,
     b = plen & 0x7;

 memset(pfx->in6_u.u6_addr8, 0, sizeof(pfx->in6_u.u6_addr8));
 memcpy(pfx->in6_u.u6_addr8, addr, o);
 if (b != 0)
  pfx->in6_u.u6_addr8[o] = addr->in6_u.u6_addr8[o] & (0xff00 >> b);
}
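The expanded `ipv6_addr_prefix()` above splits the prefix length into whole bytes (`plen >> 3`) and leftover bits (`plen & 0x7`), masking the partial byte with `0xff00 >> b`. A standalone user-space sketch of that arithmetic (hypothetical names, plain `uint8_t` arrays in place of `struct in6_addr`, not the kernel helper itself):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative rewrite of the byte/bit split in ipv6_addr_prefix(). */
static void addr_prefix(uint8_t pfx[16], const uint8_t addr[16], int plen)
{
	int o = plen >> 3;	/* whole bytes covered by the prefix */
	int b = plen & 0x7;	/* leftover bits in the next byte    */

	memset(pfx, 0, 16);
	memcpy(pfx, addr, o);
	if (b != 0)
		/* 0xff00 >> b keeps the top b bits once truncated to 8 bits */
		pfx[o] = addr[o] & (uint8_t)(0xff00 >> b);
}
```

For `plen = 28` the helper copies three whole bytes and keeps only the top four bits of the fourth, clearing everything past the prefix.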

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_addr_prefix_copy(struct in6_addr *addr,
      const struct in6_addr *pfx,
      int plen)
{

 int o = plen >> 3,
     b = plen & 0x7;

 memcpy(addr->in6_u.u6_addr8, pfx, o);
 if (b != 0) {
  addr->in6_u.u6_addr8[o] &= ~(0xff00 >> b);
  addr->in6_u.u6_addr8[o] |= (pfx->in6_u.u6_addr8[o] & (0xff00 >> b));
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __ipv6_addr_set_half(__be32 *addr,
     __be32 wh, __be32 wl)
{
# 654 "../include/net/ipv6.h"
 addr[0] = wh;
 addr[1] = wl;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_addr_set(struct in6_addr *addr,
         __be32 w1, __be32 w2,
         __be32 w3, __be32 w4)
{
 __ipv6_addr_set_half(&addr->in6_u.u6_addr32[0], w1, w2);
 __ipv6_addr_set_half(&addr->in6_u.u6_addr32[2], w3, w4);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_equal(const struct in6_addr *a1,
       const struct in6_addr *a2)
{






 return ((a1->in6_u.u6_addr32[0] ^ a2->in6_u.u6_addr32[0]) |
  (a1->in6_u.u6_addr32[1] ^ a2->in6_u.u6_addr32[1]) |
  (a1->in6_u.u6_addr32[2] ^ a2->in6_u.u6_addr32[2]) |
  (a1->in6_u.u6_addr32[3] ^ a2->in6_u.u6_addr32[3])) == 0;

}
# 707 "../include/net/ipv6.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_prefix_equal(const struct in6_addr *addr1,
         const struct in6_addr *addr2,
         unsigned int prefixlen)
{
 const __be32 *a1 = addr1->in6_u.u6_addr32;
 const __be32 *a2 = addr2->in6_u.u6_addr32;
 unsigned int pdw, pbi;


 pdw = prefixlen >> 5;
 if (pdw && memcmp(a1, a2, pdw << 2))
  return false;


 pbi = prefixlen & 0x1f;
 if (pbi && ((a1[pdw] ^ a2[pdw]) & (( __be32)(__u32)(__builtin_constant_p(((0xffffffff) << (32 - pbi))) ? ((__u32)( (((__u32)(((0xffffffff) << (32 - pbi))) & (__u32)0x000000ffUL) << 24) | (((__u32)(((0xffffffff) << (32 - pbi))) & (__u32)0x0000ff00UL) << 8) | (((__u32)(((0xffffffff) << (32 - pbi))) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(((0xffffffff) << (32 - pbi))) & (__u32)0xff000000UL) >> 24))) : __fswab32(((0xffffffff) << (32 - pbi)))))))
  return false;

 return true;
}
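`ipv6_prefix_equal()` above compares the prefix in two steps: `prefixlen >> 5` whole 32-bit words via `memcmp()`, then the remaining `prefixlen & 0x1f` bits under a high-bit mask. A minimal sketch of the same two-step comparison, assuming the words are given as already byte-swapped host-order values (the kernel version operates on big-endian `__be32` with a swabbed mask):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative word-then-bits prefix comparison; not the kernel function. */
static int prefix_equal32(const uint32_t *a1, const uint32_t *a2,
			  unsigned int prefixlen)
{
	unsigned int pdw = prefixlen >> 5;	/* whole 32-bit words */
	unsigned int pbi = prefixlen & 0x1f;	/* remaining bits     */

	if (pdw && memcmp(a1, a2, pdw << 2))
		return 0;

	/* 0xffffffff << (32 - pbi) keeps the top pbi bits of the next word */
	if (pbi && ((a1[pdw] ^ a2[pdw]) & (0xffffffffu << (32 - pbi))))
		return 0;

	return 1;
}
```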


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_any(const struct in6_addr *a)
{





 return (a->in6_u.u6_addr32[0] | a->in6_u.u6_addr32[1] |
  a->in6_u.u6_addr32[2] | a->in6_u.u6_addr32[3]) == 0;

}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ipv6_addr_hash(const struct in6_addr *a)
{






 return ( u32)(a->in6_u.u6_addr32[0] ^ a->in6_u.u6_addr32[1] ^
        a->in6_u.u6_addr32[2] ^ a->in6_u.u6_addr32[3]);

}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 __ipv6_addr_jhash(const struct in6_addr *a, const u32 initval)
{
 return jhash2(( const u32 *)a->in6_u.u6_addr32,
        (sizeof(a->in6_u.u6_addr32) / sizeof((a->in6_u.u6_addr32)[0]) + ((int)(sizeof(struct { int:(-!!(__builtin_types_compatible_p(typeof((a->in6_u.u6_addr32)), typeof(&(a->in6_u.u6_addr32)[0])))); })))), initval);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_loopback(const struct in6_addr *a)
{





 return (a->in6_u.u6_addr32[0] | a->in6_u.u6_addr32[1] |
  a->in6_u.u6_addr32[2] | (a->in6_u.u6_addr32[3] ^ (( __be32)(__u32)(__builtin_constant_p((1)) ? ((__u32)( (((__u32)((1)) & (__u32)0x000000ffUL) << 24) | (((__u32)((1)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((1)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((1)) & (__u32)0xff000000UL) >> 24))) : __fswab32((1)))))) == 0;

}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_v4mapped(const struct in6_addr *a)
{
 return (



  ( unsigned long)(a->in6_u.u6_addr32[0] | a->in6_u.u6_addr32[1]) |

  ( unsigned long)(a->in6_u.u6_addr32[2] ^
     (( __be32)(__u32)(__builtin_constant_p((0x0000ffff)) ? ((__u32)( (((__u32)((0x0000ffff)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0000ffff)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0000ffff)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0000ffff)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0000ffff)))))) == 0UL;
}
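Once the swab expansion is folded, `ipv6_addr_v4mapped()` above tests for `::ffff:a.b.c.d`: the first 64 bits must be zero and the third word must equal `0x0000ffff` in network byte order. A hedged standalone restatement (plain `uint32_t` words rather than `struct in6_addr`):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Illustrative check for an IPv4-mapped IPv6 address (::ffff:a.b.c.d). */
static int addr_v4mapped(const uint32_t w[4])
{
	/* first two words zero, third word == htonl(0x0000ffff) */
	return ((unsigned long)(w[0] | w[1]) |
		(unsigned long)(w[2] ^ htonl(0x0000ffff))) == 0UL;
}
```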

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_v4mapped_loopback(const struct in6_addr *a)
{
 return ipv6_addr_v4mapped(a) && ipv4_is_loopback(a->in6_u.u6_addr32[3]);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ipv6_portaddr_hash(const struct net *net,
         const struct in6_addr *addr6,
         unsigned int port)
{
 unsigned int hash, mix = net_hash_mix(net);

 if (ipv6_addr_any(addr6))
  hash = jhash_1word(0, mix);
 else if (ipv6_addr_v4mapped(addr6))
  hash = jhash_1word(( u32)addr6->in6_u.u6_addr32[3], mix);
 else
  hash = jhash2(( u32 *)addr6->in6_u.u6_addr32, 4, mix);

 return hash ^ port;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_orchid(const struct in6_addr *a)
{
 return (a->in6_u.u6_addr32[0] & (( __be32)(__u32)(__builtin_constant_p((0xfffffff0)) ? ((__u32)( (((__u32)((0xfffffff0)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xfffffff0)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xfffffff0)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xfffffff0)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xfffffff0))))) == (( __be32)(__u32)(__builtin_constant_p((0x20010010)) ? ((__u32)( (((__u32)((0x20010010)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x20010010)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x20010010)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x20010010)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x20010010))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_addr_is_multicast(const struct in6_addr *addr)
{
 return (addr->in6_u.u6_addr32[0] & (( __be32)(__u32)(__builtin_constant_p((0xFF000000)) ? ((__u32)( (((__u32)((0xFF000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xFF000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xFF000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xFF000000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xFF000000))))) == (( __be32)(__u32)(__builtin_constant_p((0xFF000000)) ? ((__u32)( (((__u32)((0xFF000000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0xFF000000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0xFF000000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0xFF000000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0xFF000000))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ipv6_addr_set_v4mapped(const __be32 addr,
       struct in6_addr *v4mapped)
{
 ipv6_addr_set(v4mapped,
   0, 0,
   (( __be32)(__u32)(__builtin_constant_p((0x0000FFFF)) ? ((__u32)( (((__u32)((0x0000FFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0000FFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0000FFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0000FFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0000FFFF)))),
   addr);
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __ipv6_addr_diff32(const void *token1, const void *token2, int addrlen)
{
 const __be32 *a1 = token1, *a2 = token2;
 int i;

 addrlen >>= 2;

 for (i = 0; i < addrlen; i++) {
  __be32 xb = a1[i] ^ a2[i];
  if (xb)
   return i * 32 + 31 - __fls((__u32)(__builtin_constant_p(( __u32)(__be32)(xb)) ? ((__u32)( (((__u32)(( __u32)(__be32)(xb)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(xb)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(xb)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(xb)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(xb))));
 }
# 866 "../include/net/ipv6.h"
 return addrlen << 5;
}
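`__ipv6_addr_diff32()` above returns the length of the common prefix: for the first differing word it computes `i * 32 + 31 - __fls(swabbed_xor)`, and `31 - __fls(x)` is exactly the leading-zero count of `x`. A sketch under the assumption that the words are already in host order (the kernel byte-swaps each `__be32` first); `__builtin_clz` is a GCC/Clang builtin, undefined for a zero argument, which the `if (xb)` guard rules out:

```c
#include <stdint.h>

/* Illustrative common-prefix length over nwords 32-bit host-order words. */
static int addr_diff32(const uint32_t *a1, const uint32_t *a2, int nwords)
{
	for (int i = 0; i < nwords; i++) {
		uint32_t xb = a1[i] ^ a2[i];
		if (xb)
			/* i*32 + 31 - __fls(xb) == i*32 + leading-zero count */
			return i * 32 + __builtin_clz(xb);
	}
	return nwords * 32;	/* identical over the whole compared span */
}
```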
# 887 "../include/net/ipv6.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __ipv6_addr_diff(const void *token1, const void *token2, int addrlen)
{




 return __ipv6_addr_diff32(token1, token2, addrlen);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_addr *a2)
{
 return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr));
}

__be32 ipv6_select_ident(struct net *net,
    const struct in6_addr *daddr,
    const struct in6_addr *saddr);
__be32 ipv6_proxy_select_ident(struct net *net, struct sk_buff *skb);

int ip6_dst_hoplimit(struct dst_entry *dst);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip6_sk_dst_hoplimit(struct ipv6_pinfo *np, struct flowi6 *fl6,
          struct dst_entry *dst)
{
 int hlimit;

 if (ipv6_addr_is_multicast(&fl6->daddr))
  hlimit = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_522(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(np->mcast_hops) == sizeof(char) || sizeof(np->mcast_hops) == sizeof(short) || sizeof(np->mcast_hops) == sizeof(int) || sizeof(np->mcast_hops) == sizeof(long)) || sizeof(np->mcast_hops) == sizeof(long long))) __compiletime_assert_522(); } while (0); (*(const volatile typeof( _Generic((np->mcast_hops), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (np->mcast_hops))) *)&(np->mcast_hops)); });
 else
  hlimit = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_523(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(np->hop_limit) == sizeof(char) || sizeof(np->hop_limit) == sizeof(short) || sizeof(np->hop_limit) == sizeof(int) || sizeof(np->hop_limit) == sizeof(long)) || sizeof(np->hop_limit) == sizeof(long long))) __compiletime_assert_523(); } while (0); (*(const volatile typeof( _Generic((np->hop_limit), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (np->hop_limit))) *)&(np->hop_limit)); });
 if (hlimit < 0)
  hlimit = ip6_dst_hoplimit(dst);
 return hlimit;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iph_to_flow_copy_v6addrs(struct flow_keys *flow,
         const struct ipv6hdr *iph)
{
 do { __attribute__((__noreturn__)) extern void __compiletime_assert_524(void) __attribute__((__error__("BUILD_BUG_ON failed: " "offsetof(typeof(flow->addrs), v6addrs.dst) != offsetof(typeof(flow->addrs), v6addrs.src) + sizeof(flow->addrs.v6addrs.src)"))); if (!(!(__builtin_offsetof(typeof(flow->addrs), v6addrs.dst) != __builtin_offsetof(typeof(flow->addrs), v6addrs.src) + sizeof(flow->addrs.v6addrs.src)))) __compiletime_assert_524(); } while (0);


 memcpy(&flow->addrs.v6addrs, &iph->addrs, sizeof(flow->addrs.v6addrs));
 flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
}



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ipv6_can_nonlocal_bind(struct net *net,
       struct inet_sock *inet)
{
 return net->ipv6.sysctl.ip_nonlocal_bind ||
  ((__builtin_constant_p(INET_FLAGS_FREEBIND) && __builtin_constant_p((uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&inet->inet_flags))) ? const_test_bit(INET_FLAGS_FREEBIND, &inet->inet_flags) : arch_test_bit(INET_FLAGS_FREEBIND, &inet->inet_flags)) ||
  ((__builtin_constant_p(INET_FLAGS_TRANSPARENT) && __builtin_constant_p((uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0)) && (uintptr_t)(&inet->inet_flags) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(&inet->inet_flags))) ? const_test_bit(INET_FLAGS_TRANSPARENT, &inet->inet_flags) : arch_test_bit(INET_FLAGS_TRANSPARENT, &inet->inet_flags));
}
# 956 "../include/net/ipv6.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 ip6_make_flowlabel(struct net *net, struct sk_buff *skb,
     __be32 flowlabel, bool autolabel,
     struct flowi6 *fl6)
{
 u32 hash;




 flowlabel &= (( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))));

 if (flowlabel ||
     net->ipv6.sysctl.auto_flowlabels == 0 ||
     (!autolabel &&
      net->ipv6.sysctl.auto_flowlabels != 3))
  return flowlabel;

 hash = skb_get_hash_flowi6(skb, fl6);





 hash = rol32(hash, 16);

 flowlabel = ( __be32)hash & (( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))));

 if (net->ipv6.sysctl.flowlabel_state_ranges)
  flowlabel |= (( __be32)(__u32)(__builtin_constant_p((0x00080000)) ? ((__u32)( (((__u32)((0x00080000)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x00080000)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x00080000)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x00080000)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x00080000))));

 return flowlabel;
}
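The auto-label path of `ip6_make_flowlabel()` above rotates the flow hash by 16 before masking, so that both halves of the 32-bit hash contribute to the 20 visible flow-label bits. A host-order sketch of just that step (the kernel masks against the byte-swapped `IPV6_FLOWLABEL_MASK`; `rol32` here is a local helper, valid for the fixed shift of 16):

```c
#include <stdint.h>

/* Illustrative rotate-left; shift must be in 1..31 to avoid UB. */
static uint32_t rol32(uint32_t word, unsigned int shift)
{
	return (word << shift) | (word >> (32 - shift));
}

/* Illustrative label derivation: rotate the hash, keep the low 20 bits. */
static uint32_t make_flowlabel(uint32_t hash)
{
	return rol32(hash, 16) & 0x000fffffu;
}
```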

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip6_default_np_autolabel(struct net *net)
{
 switch (net->ipv6.sysctl.auto_flowlabels) {
 case 0:
 case 2:
 default:
  return 0;
 case 1:
 case 3:
  return 1;
 }
}
# 1015 "../include/net/ipv6.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip6_multipath_hash_policy(const struct net *net)
{
 return net->ipv6.sysctl.multipath_hash_policy;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ip6_multipath_hash_fields(const struct net *net)
{
 return net->ipv6.sysctl.multipath_hash_fields;
}
# 1037 "../include/net/ipv6.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ip6_flow_hdr(struct ipv6hdr *hdr, unsigned int tclass,
    __be32 flowlabel)
{
 *(__be32 *)hdr = (( __be32)(__u32)(__builtin_constant_p((0x60000000 | (tclass << 20))) ? ((__u32)( (((__u32)((0x60000000 | (tclass << 20))) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x60000000 | (tclass << 20))) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x60000000 | (tclass << 20))) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x60000000 | (tclass << 20))) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x60000000 | (tclass << 20))))) | flowlabel;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 ip6_flowinfo(const struct ipv6hdr *hdr)
{
 return *(__be32 *)hdr & (( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 ip6_flowlabel(const struct ipv6hdr *hdr)
{
 return *(__be32 *)hdr & (( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 ip6_tclass(__be32 flowinfo)
{
 return (__u32)(__builtin_constant_p(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))))))) ? ((__u32)( (((__u32)(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))))))) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? 
((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))))))) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))))))) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? ((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))))))) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(flowinfo & ((( __be32)(__u32)(__builtin_constant_p((0x0FFFFFFF)) ? 
((__u32)( (((__u32)((0x0FFFFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x0FFFFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x0FFFFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x0FFFFFFF)))) & ~(( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF)))))))) >> 20;
}
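The heavily expanded `ip6_tclass()` above collapses, once the constant-folded swab macros are read through, to masking `flowinfo` with `IPV6_FLOWINFO_MASK & ~IPV6_FLOWLABEL_MASK` and shifting the byte-swapped result down by 20. A hedged standalone restatement (the `0x0FFFFFFF`/`0x000FFFFF` constants mirror those kernel masks; the flowinfo word is version(4) | traffic class(8) | flow label(20)):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Illustrative traffic-class extraction from a network-order flowinfo word. */
static uint8_t tclass_of(uint32_t flowinfo_be)
{
	/* keep bits 20..27: flowinfo minus the version nibble and the label */
	uint32_t tc_be = flowinfo_be & htonl(0x0FFFFFFF) & ~htonl(0x000FFFFF);

	return (uint8_t)(ntohl(tc_be) >> 20);
}
```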

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dscp_t ip6_dscp(__be32 flowinfo)
{
 return inet_dsfield_to_dscp(ip6_tclass(flowinfo));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 ip6_make_flowinfo(unsigned int tclass, __be32 flowlabel)
{
 return (( __be32)(__u32)(__builtin_constant_p((tclass << 20)) ? ((__u32)( (((__u32)((tclass << 20)) & (__u32)0x000000ffUL) << 24) | (((__u32)((tclass << 20)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((tclass << 20)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((tclass << 20)) & (__u32)0xff000000UL) >> 24))) : __fswab32((tclass << 20)))) | flowlabel;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 flowi6_get_flowlabel(const struct flowi6 *fl6)
{
 return fl6->flowlabel & (( __be32)(__u32)(__builtin_constant_p((0x000FFFFF)) ? ((__u32)( (((__u32)((0x000FFFFF)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x000FFFFF)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x000FFFFF)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x000FFFFF)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x000FFFFF))));
}
# 1081 "../include/net/ipv6.h"
int ipv6_rcv(struct sk_buff *skb, struct net_device *dev,
      struct packet_type *pt, struct net_device *orig_dev);
void ipv6_list_rcv(struct list_head *head, struct packet_type *pt,
     struct net_device *orig_dev);

int ip6_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb);




int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
      __u32 mark, struct ipv6_txoptions *opt, int tclass, u32 priority);

int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr);

int ip6_append_data(struct sock *sk,
      int getfrag(void *from, char *to, int offset, int len,
    int odd, struct sk_buff *skb),
      void *from, size_t length, int transhdrlen,
      struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
      struct rt6_info *rt, unsigned int flags);

int ip6_push_pending_frames(struct sock *sk);

void ip6_flush_pending_frames(struct sock *sk);

int ip6_send_skb(struct sk_buff *skb);

struct sk_buff *__ip6_make_skb(struct sock *sk, struct sk_buff_head *queue,
          struct inet_cork_full *cork,
          struct inet6_cork *v6_cork);
struct sk_buff *ip6_make_skb(struct sock *sk,
        int getfrag(void *from, char *to, int offset,
      int len, int odd, struct sk_buff *skb),
        void *from, size_t length, int transhdrlen,
        struct ipcm6_cookie *ipc6,
        struct rt6_info *rt, unsigned int flags,
        struct inet_cork_full *cork);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct sk_buff *ip6_finish_skb(struct sock *sk)
{
 return __ip6_make_skb(sk, &sk->sk_write_queue, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->cork,
         &inet6_sk(sk)->cork);
}

int ip6_dst_lookup(struct net *net, struct sock *sk, struct dst_entry **dst,
     struct flowi6 *fl6);
struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, struct flowi6 *fl6,
          const struct in6_addr *final_dst);
struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
      const struct in6_addr *final_dst,
      bool connected);
struct dst_entry *ip6_blackhole_route(struct net *net,
          struct dst_entry *orig_dst);





int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb);
int ip6_forward(struct sk_buff *skb);
int ip6_input(struct sk_buff *skb);
int ip6_mc_input(struct sk_buff *skb);
void ip6_protocol_deliver_rcu(struct net *net, struct sk_buff *skb, int nexthdr,
         bool have_final);

int __ip6_local_out(struct net *net, struct sock *sk, struct sk_buff *skb);
int ip6_local_out(struct net *net, struct sock *sk, struct sk_buff *skb);





void ipv6_push_nfrag_opts(struct sk_buff *skb, struct ipv6_txoptions *opt,
     u8 *proto, struct in6_addr **daddr_p,
     struct in6_addr *saddr);
void ipv6_push_frag_opts(struct sk_buff *skb, struct ipv6_txoptions *opt,
    u8 *proto);

int ipv6_skip_exthdr(const struct sk_buff *, int start, u8 *nexthdrp,
       __be16 *frag_offp);

bool ipv6_ext_hdr(u8 nexthdr);

enum {
 IP6_FH_F_FRAG = (1 << 0),
 IP6_FH_F_AUTH = (1 << 1),
 IP6_FH_F_SKIP_RH = (1 << 2),
};


int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, int target,
    unsigned short *fragoff, int *fragflg);

int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type);

struct in6_addr *fl6_update_dst(struct flowi6 *fl6,
    const struct ipv6_txoptions *opt,
    struct in6_addr *orig);




extern struct static_key_false ip6_min_hopcount;

int do_ipv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
         unsigned int optlen);
int ipv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
      unsigned int optlen);
int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
         sockptr_t optval, sockptr_t optlen);
int ipv6_getsockopt(struct sock *sk, int level, int optname,
      char *optval, int *optlen);

int __ip6_datagram_connect(struct sock *sk, struct sockaddr *addr,
      int addr_len);
int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len);
int ip6_datagram_connect_v6_only(struct sock *sk, struct sockaddr *addr,
     int addr_len);
int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr);
void ip6_datagram_release_cb(struct sock *sk);

int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
      int *addr_len);
int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len,
       int *addr_len);
void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
       u32 info, u8 *payload);
void ipv6_local_error(struct sock *sk, int err, struct flowi6 *fl6, u32 info);
void ipv6_local_rxpmtu(struct sock *sk, struct flowi6 *fl6, u32 mtu);

void inet6_cleanup_sock(struct sock *sk);
void inet6_sock_destruct(struct sock *sk);
int inet6_release(struct socket *sock);
int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len);
int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len);
int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
    int peer);
int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
int inet6_compat_ioctl(struct socket *sock, unsigned int cmd,
  unsigned long arg);

int inet6_hash_connect(struct inet_timewait_death_row *death_row,
         struct sock *sk);
int inet6_sendmsg(struct socket *sock, struct msghdr *msg, size_t size);
int inet6_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
    int flags);




extern const struct proto_ops inet6_stream_ops;
extern const struct proto_ops inet6_dgram_ops;
extern const struct proto_ops inet6_sockraw_ops;

struct group_source_req;
struct group_filter;

int ip6_mc_source(int add, int omode, struct sock *sk,
    struct group_source_req *pgsr);
int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf,
    struct __kernel_sockaddr_storage *list);
int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf,
    sockptr_t optval, size_t ss_offset);


int ac6_proc_init(struct net *net);
void ac6_proc_exit(struct net *net);
int raw6_proc_init(void);
void raw6_proc_exit(void);
int tcp6_proc_init(struct net *net);
void tcp6_proc_exit(struct net *net);
int udp6_proc_init(struct net *net);
void udp6_proc_exit(struct net *net);
int udplite6_proc_init(void);
void udplite6_proc_exit(void);
int ipv6_misc_proc_init(void);
void ipv6_misc_proc_exit(void);
int snmp6_register_dev(struct inet6_dev *idev);
int snmp6_unregister_dev(struct inet6_dev *idev);
# 1270 "../include/net/ipv6.h"
struct ctl_table *ipv6_icmp_sysctl_init(struct net *net);
size_t ipv6_icmp_sysctl_table_size(void);
struct ctl_table *ipv6_route_sysctl_init(struct net *net);
size_t ipv6_route_sysctl_table_size(struct net *net);
int ipv6_sysctl_register(void);
void ipv6_sysctl_unregister(void);


int ipv6_sock_mc_join(struct sock *sk, int ifindex,
        const struct in6_addr *addr);
int ipv6_sock_mc_join_ssm(struct sock *sk, int ifindex,
     const struct in6_addr *addr, unsigned int mode);
int ipv6_sock_mc_drop(struct sock *sk, int ifindex,
        const struct in6_addr *addr);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip6_sock_set_v6only(struct sock *sk)
{
 if (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->sk.__sk_common.skc_num)
  return -22;
 lock_sock(sk);
 sk->__sk_common.skc_ipv6only = true;
 release_sock(sk);
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ip6_sock_set_recverr(struct sock *sk)
{
 set_bit(INET_FLAGS_RECVERR6, &_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->inet_flags);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip6_sock_set_addr_preferences(struct sock *sk, int val)
{
 unsigned int prefmask = ~(0x0001 | 0x0002 | 0x0004);
 unsigned int pref = 0;


 switch (val & (0x0002 |
         0x0001 |
         0x0100)) {
 case 0x0002:
  pref |= 0x0002;
  prefmask &= ~(0x0002 |
         0x0001);
  break;
 case 0x0001:
  pref |= 0x0001;
  prefmask &= ~(0x0002 |
         0x0001);
  break;
 case 0x0100:
  prefmask &= ~(0x0002 |
         0x0001);
  break;
 case 0:
  break;
 default:
  return -22;
 }


 switch (val & (0x0400 | 0x0004)) {
 case 0x0400:
  prefmask &= ~0x0004;
  break;
 case 0x0004:
  pref |= 0x0004;
  break;
 case 0:
  break;
 default:
  return -22;
 }


 switch (val & (0x0008|0x0800)) {
 case 0x0008:
 case 0x0800:
 case 0:
  break;
 default:
  return -22;
 }

 do { do { __attribute__((__noreturn__)) extern void __compiletime_assert_526(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(inet6_sk(sk)->srcprefs) == sizeof(char) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(short) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(int) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(long)) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(long long))) __compiletime_assert_526(); } while (0); do { *(volatile typeof(inet6_sk(sk)->srcprefs) *)&(inet6_sk(sk)->srcprefs) = ((({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_525(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(inet6_sk(sk)->srcprefs) == sizeof(char) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(short) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(int) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(long)) || sizeof(inet6_sk(sk)->srcprefs) == sizeof(long long))) __compiletime_assert_525(); } while (0); (*(const volatile typeof( _Generic((inet6_sk(sk)->srcprefs), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (inet6_sk(sk)->srcprefs))) *)&(inet6_sk(sk)->srcprefs)); }) & prefmask) | pref); } while (0); } while (0);

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ip6_sock_set_recvpktinfo(struct sock *sk)
{
 lock_sock(sk);
 inet6_sk(sk)->rxopt.bits.rxinfo = true;
 release_sock(sk);
}
# 26 "../include/rdma/ib_verbs.h" 2
# 1 "../include/net/ip.h" 1
# 22 "../include/net/ip.h"
# 1 "../include/linux/ip.h" 1
# 17 "../include/linux/ip.h"
# 1 "../include/uapi/linux/ip.h" 1
# 87 "../include/uapi/linux/ip.h"
struct iphdr {

 __u8 ihl:4,
  version:4;






 __u8 tos;
 __be16 tot_len;
 __be16 id;
 __be16 frag_off;
 __u8 ttl;
 __u8 protocol;
 __sum16 check;
 union { struct { __be32 saddr; __be32 daddr; } ; struct { __be32 saddr; __be32 daddr; } addrs; } ;




};


struct ip_auth_hdr {
 __u8 nexthdr;
 __u8 hdrlen;
 __be16 reserved;
 __be32 spi;
 __be32 seq_no;
 __u8 auth_data[];
};

struct ip_esp_hdr {
 __be32 spi;
 __be32 seq_no;
 __u8 enc_data[];
};

struct ip_comp_hdr {
 __u8 nexthdr;
 __u8 flags;
 __be16 cpi;
};

struct ip_beet_phdr {
 __u8 nexthdr;
 __u8 hdrlen;
 __u8 padlen;
 __u8 reserved;
};


enum
{
 IPV4_DEVCONF_FORWARDING=1,
 IPV4_DEVCONF_MC_FORWARDING,
 IPV4_DEVCONF_PROXY_ARP,
 IPV4_DEVCONF_ACCEPT_REDIRECTS,
 IPV4_DEVCONF_SECURE_REDIRECTS,
 IPV4_DEVCONF_SEND_REDIRECTS,
 IPV4_DEVCONF_SHARED_MEDIA,
 IPV4_DEVCONF_RP_FILTER,
 IPV4_DEVCONF_ACCEPT_SOURCE_ROUTE,
 IPV4_DEVCONF_BOOTP_RELAY,
 IPV4_DEVCONF_LOG_MARTIANS,
 IPV4_DEVCONF_TAG,
 IPV4_DEVCONF_ARPFILTER,
 IPV4_DEVCONF_MEDIUM_ID,
 IPV4_DEVCONF_NOXFRM,
 IPV4_DEVCONF_NOPOLICY,
 IPV4_DEVCONF_FORCE_IGMP_VERSION,
 IPV4_DEVCONF_ARP_ANNOUNCE,
 IPV4_DEVCONF_ARP_IGNORE,
 IPV4_DEVCONF_PROMOTE_SECONDARIES,
 IPV4_DEVCONF_ARP_ACCEPT,
 IPV4_DEVCONF_ARP_NOTIFY,
 IPV4_DEVCONF_ACCEPT_LOCAL,
 IPV4_DEVCONF_SRC_VMARK,
 IPV4_DEVCONF_PROXY_ARP_PVLAN,
 IPV4_DEVCONF_ROUTE_LOCALNET,
 IPV4_DEVCONF_IGMPV2_UNSOLICITED_REPORT_INTERVAL,
 IPV4_DEVCONF_IGMPV3_UNSOLICITED_REPORT_INTERVAL,
 IPV4_DEVCONF_IGNORE_ROUTES_WITH_LINKDOWN,
 IPV4_DEVCONF_DROP_UNICAST_IN_L2_MULTICAST,
 IPV4_DEVCONF_DROP_GRATUITOUS_ARP,
 IPV4_DEVCONF_BC_FORWARDING,
 IPV4_DEVCONF_ARP_EVICT_NOCARRIER,
 __IPV4_DEVCONF_MAX
};
# 18 "../include/linux/ip.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct iphdr *ip_hdr(const struct sk_buff *skb)
{
 return (struct iphdr *)skb_network_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct iphdr *inner_ip_hdr(const struct sk_buff *skb)
{
 return (struct iphdr *)skb_inner_network_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct iphdr *ipip_hdr(const struct sk_buff *skb)
{
 return (struct iphdr *)skb_transport_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int ip_transport_len(const struct sk_buff *skb)
{
 return (__u16)(__builtin_constant_p(( __u16)(__be16)(ip_hdr(skb)->tot_len)) ? ((__u16)( (((__u16)(( __u16)(__be16)(ip_hdr(skb)->tot_len)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(ip_hdr(skb)->tot_len)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(ip_hdr(skb)->tot_len))) - skb_network_header_len(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int iph_totlen(const struct sk_buff *skb, const struct iphdr *iph)
{
 u32 len = (__u16)(__builtin_constant_p(( __u16)(__be16)(iph->tot_len)) ? ((__u16)( (((__u16)(( __u16)(__be16)(iph->tot_len)) & (__u16)0x00ffU) << 8) | (((__u16)(( __u16)(__be16)(iph->tot_len)) & (__u16)0xff00U) >> 8))) : __fswab16(( __u16)(__be16)(iph->tot_len)));

 return (len || !skb_is_gso(skb) || !skb_is_gso_tcp(skb)) ?
        len : skb->len - skb_network_offset(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int skb_ip_totlen(const struct sk_buff *skb)
{
 return iph_totlen(skb, ip_hdr(skb));
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void iph_set_totlen(struct iphdr *iph, unsigned int len)
{
 iph->tot_len = len <= 0xFFFFU ? (( __be16)(__u16)(__builtin_constant_p((len)) ? ((__u16)( (((__u16)((len)) & (__u16)0x00ffU) << 8) | (((__u16)((len)) & (__u16)0xff00U) >> 8))) : __fswab16((len)))) : 0;
}
# 23 "../include/net/ip.h" 2




# 1 "../include/linux/static_key.h" 1
# 28 "../include/net/ip.h" 2


# 1 "../include/net/route.h" 1
# 24 "../include/net/route.h"
# 1 "../include/net/inetpeer.h" 1
# 20 "../include/net/inetpeer.h"
struct ipv4_addr_key {
 __be32 addr;
 int vif;
};



struct inetpeer_addr {
 union {
  struct ipv4_addr_key a4;
  struct in6_addr a6;
  u32 key[(sizeof(struct in6_addr) / sizeof(u32))];
 };
 __u16 family;
};

struct inet_peer {
 struct rb_node rb_node;
 struct inetpeer_addr daddr;

 u32 metrics[(__RTAX_MAX - 1)];
 u32 rate_tokens;
 u32 n_redirects;
 unsigned long rate_last;





 union {
  struct {
   atomic_t rid;
  };
  struct callback_head rcu;
 };


 __u32 dtime;
 refcount_t refcnt;
};

struct inet_peer_base {
 struct rb_root rb_root;
 seqlock_t lock;
 int total;
};

void inet_peer_base_init(struct inet_peer_base *);

void inet_initpeers(void) __attribute__((__section__(".init.text"))) __attribute__((__cold__)) ;



static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inetpeer_set_addr_v4(struct inetpeer_addr *iaddr, __be32 ip)
{
 iaddr->a4.addr = ip;
 iaddr->a4.vif = 0;
 iaddr->family = 2;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 inetpeer_get_addr_v4(struct inetpeer_addr *iaddr)
{
 return iaddr->a4.addr;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void inetpeer_set_addr_v6(struct inetpeer_addr *iaddr,
     struct in6_addr *in6)
{
 iaddr->a6 = *in6;
 iaddr->family = 10;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct in6_addr *inetpeer_get_addr_v6(struct inetpeer_addr *iaddr)
{
 return &iaddr->a6;
}


struct inet_peer *inet_getpeer(struct inet_peer_base *base,
          const struct inetpeer_addr *daddr,
          int create);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inet_peer *inet_getpeer_v4(struct inet_peer_base *base,
      __be32 v4daddr,
      int vif, int create)
{
 struct inetpeer_addr daddr;

 daddr.a4.addr = v4daddr;
 daddr.a4.vif = vif;
 daddr.family = 2;
 return inet_getpeer(base, &daddr, create);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct inet_peer *inet_getpeer_v6(struct inet_peer_base *base,
      const struct in6_addr *v6daddr,
      int create)
{
 struct inetpeer_addr daddr;

 daddr.a6 = *v6daddr;
 daddr.family = 10;
 return inet_getpeer(base, &daddr, create);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inetpeer_addr_cmp(const struct inetpeer_addr *a,
        const struct inetpeer_addr *b)
{
 int i, n;

 if (a->family == 2)
  n = sizeof(a->a4) / sizeof(u32);
 else
  n = sizeof(a->a6) / sizeof(u32);

 for (i = 0; i < n; i++) {
  if (a->key[i] == b->key[i])
   continue;
  if (a->key[i] < b->key[i])
   return -1;
  return 1;
 }

 return 0;
}


void inet_putpeer(struct inet_peer *p);
bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout);

void inetpeer_invalidate_tree(struct inet_peer_base *);
# 25 "../include/net/route.h" 2


# 1 "../include/net/ip_fib.h" 1
# 26 "../include/net/ip_fib.h"
struct fib_config {
 u8 fc_dst_len;
 dscp_t fc_dscp;
 u8 fc_protocol;
 u8 fc_scope;
 u8 fc_type;
 u8 fc_gw_family;

 u32 fc_table;
 __be32 fc_dst;
 union {
  __be32 fc_gw4;
  struct in6_addr fc_gw6;
 };
 int fc_oif;
 u32 fc_flags;
 u32 fc_priority;
 __be32 fc_prefsrc;
 u32 fc_nh_id;
 struct nlattr *fc_mx;
 struct rtnexthop *fc_mp;
 int fc_mx_len;
 int fc_mp_len;
 u32 fc_flow;
 u32 fc_nlflags;
 struct nl_info fc_nlinfo;
 struct nlattr *fc_encap;
 u16 fc_encap_type;
};

struct fib_info;
struct rtable;

struct fib_nh_exception {
 struct fib_nh_exception *fnhe_next;
 int fnhe_genid;
 __be32 fnhe_daddr;
 u32 fnhe_pmtu;
 bool fnhe_mtu_locked;
 __be32 fnhe_gw;
 unsigned long fnhe_expires;
 struct rtable *fnhe_rth_input;
 struct rtable *fnhe_rth_output;
 unsigned long fnhe_stamp;
 struct callback_head rcu;
};

struct fnhe_hash_bucket {
 struct fib_nh_exception *chain;
};





struct fib_nh_common {
 struct net_device *nhc_dev;
 netdevice_tracker nhc_dev_tracker;
 int nhc_oif;
 unsigned char nhc_scope;
 u8 nhc_family;
 u8 nhc_gw_family;
 unsigned char nhc_flags;
 struct lwtunnel_state *nhc_lwtstate;

 union {
  __be32 ipv4;
  struct in6_addr ipv6;
 } nhc_gw;

 int nhc_weight;
 atomic_t nhc_upper_bound;


 struct rtable * *nhc_pcpu_rth_output;
 struct rtable *nhc_rth_input;
 struct fnhe_hash_bucket *nhc_exceptions;
};

struct fib_nh {
 struct fib_nh_common nh_common;
 struct hlist_node nh_hash;
 struct fib_info *nh_parent;

 __u32 nh_tclassid;

 __be32 nh_saddr;
 int nh_saddr_genid;
# 126 "../include/net/ip_fib.h"
};





struct nexthop;

struct fib_info {
 struct hlist_node fib_hash;
 struct hlist_node fib_lhash;
 struct list_head nh_list;
 struct net *fib_net;
 refcount_t fib_treeref;
 refcount_t fib_clntref;
 unsigned int fib_flags;
 unsigned char fib_dead;
 unsigned char fib_protocol;
 unsigned char fib_scope;
 unsigned char fib_type;
 __be32 fib_prefsrc;
 u32 fib_tb_id;
 u32 fib_priority;
 struct dst_metrics *fib_metrics;




 int fib_nhs;
 bool fib_nh_is_v6;
 bool nh_updated;
 bool pfsrc_removed;
 struct nexthop *nh;
 struct callback_head rcu;
 struct fib_nh fib_nh[] __attribute__((__counted_by__(fib_nhs)));
};






struct fib_table;
struct fib_result {
 __be32 prefix;
 unsigned char prefixlen;
 unsigned char nh_sel;
 unsigned char type;
 unsigned char scope;
 u32 tclassid;
 dscp_t dscp;
 struct fib_nh_common *nhc;
 struct fib_info *fi;
 struct fib_table *table;
 struct hlist_head *fa_head;
};

struct fib_result_nl {
 __be32 fl_addr;
 u32 fl_mark;
 unsigned char fl_tos;
 unsigned char fl_scope;
 unsigned char tb_id_in;

 unsigned char tb_id;
 unsigned char prefixlen;
 unsigned char nh_sel;
 unsigned char type;
 unsigned char scope;
 int err;
};







__be32 fib_info_update_nhc_saddr(struct net *net, struct fib_nh_common *nhc,
     unsigned char scope);
__be32 fib_result_prefsrc(struct net *net, struct fib_result *res);





struct fib_rt_info {
 struct fib_info *fi;
 u32 tb_id;
 __be32 dst;
 int dst_len;
 dscp_t dscp;
 u8 type;
 u8 offload:1,
    trap:1,
    offload_failed:1,
    unused:5;
};

struct fib_entry_notifier_info {
 struct fib_notifier_info info;
 u32 dst;
 int dst_len;
 struct fib_info *fi;
 dscp_t dscp;
 u8 type;
 u32 tb_id;
};

struct fib_nh_notifier_info {
 struct fib_notifier_info info;
 struct fib_nh *fib_nh;
};

int call_fib4_notifier(struct notifier_block *nb,
         enum fib_event_type event_type,
         struct fib_notifier_info *info);
int call_fib4_notifiers(struct net *net, enum fib_event_type event_type,
   struct fib_notifier_info *info);

int fib4_notifier_init(struct net *net);
void fib4_notifier_exit(struct net *net);

void fib_info_notify_update(struct net *net, struct nl_info *info);
int fib_notify(struct net *net, struct notifier_block *nb,
        struct netlink_ext_ack *extack);

struct fib_table {
 struct hlist_node tb_hlist;
 u32 tb_id;
 int tb_num_default;
 struct callback_head rcu;
 unsigned long *tb_data;
 unsigned long __data[];
};

struct fib_dump_filter {
 u32 table_id;

 bool filter_set;
 bool dump_routes;
 bool dump_exceptions;
 bool rtnl_held;
 unsigned char protocol;
 unsigned char rt_type;
 unsigned int flags;
 struct net_device *dev;
};

int fib_table_lookup(struct fib_table *tb, const struct flowi4 *flp,
       struct fib_result *res, int fib_flags);
int fib_table_insert(struct net *, struct fib_table *, struct fib_config *,
       struct netlink_ext_ack *extack);
int fib_table_delete(struct net *, struct fib_table *, struct fib_config *,
       struct netlink_ext_ack *extack);
int fib_table_dump(struct fib_table *table, struct sk_buff *skb,
     struct netlink_callback *cb, struct fib_dump_filter *filter);
int fib_table_flush(struct net *net, struct fib_table *table, bool flush_all);
struct fib_table *fib_trie_unmerge(struct fib_table *main_tb);
void fib_table_flush_external(struct fib_table *table);
void fib_free_table(struct fib_table *tb);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct fib_table *fib_get_table(struct net *net, u32 id)
{
 struct hlist_node *tb_hlist;
 struct hlist_head *ptr;

 ptr = id == RT_TABLE_LOCAL ?
  &net->ipv4.fib_table_hash[(RT_TABLE_LOCAL & (2 - 1))] :
  &net->ipv4.fib_table_hash[(RT_TABLE_MAIN & (2 - 1))];

 tb_hlist = ({ typeof(*((*((struct hlist_node **)(&(ptr)->first))))) *__UNIQUE_ID_rcu527 = (typeof(*((*((struct hlist_node **)(&(ptr)->first))))) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_528(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(((*((struct hlist_node **)(&(ptr)->first))))) == sizeof(char) || sizeof(((*((struct hlist_node **)(&(ptr)->first))))) == sizeof(short) || sizeof(((*((struct hlist_node **)(&(ptr)->first))))) == sizeof(int) || sizeof(((*((struct hlist_node **)(&(ptr)->first))))) == sizeof(long)) || sizeof(((*((struct hlist_node **)(&(ptr)->first))))) == sizeof(long long))) __compiletime_assert_528(); } while (0); (*(const volatile typeof( _Generic((((*((struct hlist_node **)(&(ptr)->first))))), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (((*((struct hlist_node **)(&(ptr)->first))))))) *)&(((*((struct hlist_node **)(&(ptr)->first)))))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((lockdep_rtnl_is_held()) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/ip_fib.h", 302, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*((*((struct hlist_node **)(&(ptr)->first))))) *)(__UNIQUE_ID_rcu527)); });

 return ({ void *__mptr = (void *)(tb_hlist); _Static_assert(__builtin_types_compatible_p(typeof(*(tb_hlist)), typeof(((struct fib_table *)0)->tb_hlist)) || __builtin_types_compatible_p(typeof(*(tb_hlist)), typeof(void)), "pointer type mismatch in container_of()"); ((struct fib_table *)(__mptr - __builtin_offsetof(struct fib_table, tb_hlist))); });
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct fib_table *fib_new_table(struct net *net, u32 id)
{
 return fib_get_table(net, id);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fib_lookup(struct net *net, const struct flowi4 *flp,
        struct fib_result *res, unsigned int flags)
{
 struct fib_table *tb;
 int err = -101;

 rcu_read_lock();

 tb = fib_get_table(net, RT_TABLE_MAIN);
 if (tb)
  err = fib_table_lookup(tb, flp, res, flags | 1);

 if (err == -11)
  err = -101;

 rcu_read_unlock();

 return err;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib4_has_custom_rules(const struct net *net)
{
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib4_rule_default(const struct fib_rule *rule)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fib4_rules_dump(struct net *net, struct notifier_block *nb,
      struct netlink_ext_ack *extack)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int fib4_rules_seq_read(struct net *net)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool fib4_rules_early_flow_dissect(struct net *net,
       struct sk_buff *skb,
       struct flowi4 *fl4,
       struct flow_keys *flkeys)
{
 return false;
}
# 438 "../include/net/ip_fib.h"
extern const struct nla_policy rtm_ipv4_policy[];
void ip_fib_init(void);
int fib_gw_from_via(struct fib_config *cfg, struct nlattr *nla,
      struct netlink_ext_ack *extack);
__be32 fib_compute_spec_dst(struct sk_buff *skb);
bool fib_info_nh_uses_dev(struct fib_info *fi, const struct net_device *dev);
int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
   u8 tos, int oif, struct net_device *dev,
   struct in_device *idev, u32 *itag);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int fib_num_tclassid_users(struct net *net)
{
 return atomic_read(&net->ipv4.fib_num_tclassid_users);
}






int fib_unmerge(struct net *net);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool nhc_l3mdev_matches_dev(const struct fib_nh_common *nhc,
const struct net_device *dev)
{
 if (nhc->nhc_dev == dev ||
     l3mdev_master_ifindex_rcu(nhc->nhc_dev) == dev->ifindex)
  return true;

 return false;
}


int ip_fib_check_default(__be32 gw, struct net_device *dev);
int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
int fib_sync_down_addr(struct net_device *dev, __be32 local);
int fib_sync_up(struct net_device *dev, unsigned char nh_flags);
void fib_sync_mtu(struct net_device *dev, u32 orig_mtu);
void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig);
# 546 "../include/net/ip_fib.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 fib_multipath_hash_from_keys(const struct net *net,
            struct flow_keys *keys)
{
 return flow_hash_from_keys(keys);
}


int fib_check_nh(struct net *net, struct fib_nh *nh, u32 table, u8 scope,
   struct netlink_ext_ack *extack);
void fib_select_multipath(struct fib_result *res, int hash);
void fib_select_path(struct net *net, struct fib_result *res,
       struct flowi4 *fl4, const struct sk_buff *skb);

int fib_nh_init(struct net *net, struct fib_nh *fib_nh,
  struct fib_config *cfg, int nh_weight,
  struct netlink_ext_ack *extack);
void fib_nh_release(struct net *net, struct fib_nh *fib_nh);
int fib_nh_common_init(struct net *net, struct fib_nh_common *nhc,
         struct nlattr *fc_encap, u16 fc_encap_type,
         void *cfg, gfp_t gfp_flags,
         struct netlink_ext_ack *extack);
void fib_nh_common_release(struct fib_nh_common *nhc);


void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri);
void fib_trie_init(void);
struct fib_table *fib_trie_table(u32 id, struct fib_table *alias);
bool fib_lookup_good_nhc(const struct fib_nh_common *nhc, int fib_flags,
    const struct flowi4 *flp);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fib_combine_itag(u32 *itag, const struct fib_result *res)
{

 struct fib_nh_common *nhc = res->nhc;



 if (nhc->nhc_family == 2) {
  struct fib_nh *nh;

  nh = ({ void *__mptr = (void *)(nhc); _Static_assert(__builtin_types_compatible_p(typeof(*(nhc)), typeof(((struct fib_nh *)0)->nh_common)) || __builtin_types_compatible_p(typeof(*(nhc)), typeof(void)), "pointer type mismatch in container_of()"); ((struct fib_nh *)(__mptr - __builtin_offsetof(struct fib_nh, nh_common))); });
  *itag = nh->nh_tclassid << 16;
 } else {
  *itag = 0;
 }
# 599 "../include/net/ip_fib.h"
}

void fib_flush(struct net *net);
void free_fib_info(struct fib_info *fi);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fib_info_hold(struct fib_info *fi)
{
 refcount_inc(&fi->fib_clntref);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void fib_info_put(struct fib_info *fi)
{
 if (refcount_dec_and_test(&fi->fib_clntref))
  free_fib_info(fi);
}


int fib_proc_init(struct net *net);
void fib_proc_exit(struct net *net);
# 628 "../include/net/ip_fib.h"
u32 ip_mtu_from_fib_result(struct fib_result *res, __be32 daddr);

int ip_valid_fib_dump_req(struct net *net, const struct nlmsghdr *nlh,
     struct fib_dump_filter *filter,
     struct netlink_callback *cb);

int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh,
       u8 rt_family, unsigned char *flags, bool skip_oif);
int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh,
      int nh_weight, u8 rt_family, u32 nh_tclassid);
# 28 "../include/net/route.h" 2
# 1 "../include/net/arp.h" 1





# 1 "../include/linux/if_arp.h" 1
# 23 "../include/linux/if_arp.h"
# 1 "../include/uapi/linux/if_arp.h" 1
# 117 "../include/uapi/linux/if_arp.h"
struct arpreq {
 struct sockaddr arp_pa;
 struct sockaddr arp_ha;
 int arp_flags;
 struct sockaddr arp_netmask;
 char arp_dev[16];
};

struct arpreq_old {
 struct sockaddr arp_pa;
 struct sockaddr arp_ha;
 int arp_flags;
 struct sockaddr arp_netmask;
};
# 145 "../include/uapi/linux/if_arp.h"
struct arphdr {
 __be16 ar_hrd;
 __be16 ar_pro;
 unsigned char ar_hln;
 unsigned char ar_pln;
 __be16 ar_op;
# 162 "../include/uapi/linux/if_arp.h"
};
# 24 "../include/linux/if_arp.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct arphdr *arp_hdr(const struct sk_buff *skb)
{
 return (struct arphdr *)skb_network_header(skb);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int arp_hdr_len(const struct net_device *dev)
{
 switch (dev->type) {





 default:

  return sizeof(struct arphdr) + (dev->addr_len + sizeof(u32)) * 2;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool dev_is_mac_header_xmit(const struct net_device *dev)
{
 switch (dev->type) {
 case 768:
 case 769:
 case 776:
 case 778:
 case 823:
 case 0xFFFF:
 case 0xFFFE:
 case 519:
 case 779:



 case 512:
  return false;
 default:
  return true;
 }
}
# 7 "../include/net/arp.h" 2




extern struct neigh_table arp_tbl;

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 arp_hashfn(const void *pkey, const struct net_device *dev, u32 *hash_rnd)
{
 u32 key = *(const u32 *)pkey;
 u32 val = key ^ hash32_ptr(dev);

 return val * hash_rnd[0];
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key)
{
 if (dev->flags & (IFF_LOOPBACK | IFF_POINTOPOINT))
  key = ((unsigned long int) 0x00000000);

 return ___neigh_lookup_noref(&arp_tbl, neigh_key_eq32, arp_hashfn, &key, dev);
}
# 37 "../include/net/arp.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key)
{
 struct neighbour *n;

 rcu_read_lock();
 n = __ipv4_neigh_lookup_noref(dev, key);
 if (n && !refcount_inc_not_zero(&n->refcnt))
  n = ((void *)0);
 rcu_read_unlock();

 return n;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __ipv4_confirm_neigh(struct net_device *dev, u32 key)
{
 struct neighbour *n;

 rcu_read_lock();
 n = __ipv4_neigh_lookup_noref(dev, key);
 neigh_confirm(n);
 rcu_read_unlock();
}

void arp_init(void);
int arp_ioctl(struct net *net, unsigned int cmd, void *arg);
void arp_send(int type, int ptype, __be32 dest_ip,
       struct net_device *dev, __be32 src_ip,
       const unsigned char *dest_hw,
       const unsigned char *src_hw, const unsigned char *th);
int arp_mc_map(__be32 addr, u8 *haddr, struct net_device *dev, int dir);
void arp_ifdown(struct net_device *dev);
int arp_invalidate(struct net_device *dev, __be32 ip, bool force);

struct sk_buff *arp_create(int type, int ptype, __be32 dest_ip,
      struct net_device *dev, __be32 src_ip,
      const unsigned char *dest_hw,
      const unsigned char *src_hw,
      const unsigned char *target_hw);
void arp_xmit(struct sk_buff *skb);
# 29 "../include/net/route.h" 2
# 1 "../include/net/ndisc.h" 1




# 1 "../include/net/ipv6_stubs.h" 1
# 15 "../include/net/ipv6_stubs.h"
struct fib6_info;
struct fib6_nh;
struct fib6_config;
struct fib6_result;




struct ipv6_stub {
 int (*ipv6_sock_mc_join)(struct sock *sk, int ifindex,
     const struct in6_addr *addr);
 int (*ipv6_sock_mc_drop)(struct sock *sk, int ifindex,
     const struct in6_addr *addr);
 struct dst_entry *(*ipv6_dst_lookup_flow)(struct net *net,
        const struct sock *sk,
        struct flowi6 *fl6,
        const struct in6_addr *final_dst);
 int (*ipv6_route_input)(struct sk_buff *skb);

 struct fib6_table *(*fib6_get_table)(struct net *net, u32 id);
 int (*fib6_lookup)(struct net *net, int oif, struct flowi6 *fl6,
      struct fib6_result *res, int flags);
 int (*fib6_table_lookup)(struct net *net, struct fib6_table *table,
     int oif, struct flowi6 *fl6,
     struct fib6_result *res, int flags);
 void (*fib6_select_path)(const struct net *net, struct fib6_result *res,
     struct flowi6 *fl6, int oif, bool oif_match,
     const struct sk_buff *skb, int strict);
 u32 (*ip6_mtu_from_fib6)(const struct fib6_result *res,
     const struct in6_addr *daddr,
     const struct in6_addr *saddr);

 int (*fib6_nh_init)(struct net *net, struct fib6_nh *fib6_nh,
       struct fib6_config *cfg, gfp_t gfp_flags,
       struct netlink_ext_ack *extack);
 void (*fib6_nh_release)(struct fib6_nh *fib6_nh);
 void (*fib6_nh_release_dsts)(struct fib6_nh *fib6_nh);
 void (*fib6_update_sernum)(struct net *net, struct fib6_info *rt);
 int (*ip6_del_rt)(struct net *net, struct fib6_info *rt, bool skip_notify);
 void (*fib6_rt_update)(struct net *net, struct fib6_info *rt,
          struct nl_info *info);

 void (*udpv6_encap_enable)(void);
 void (*ndisc_send_na)(struct net_device *dev, const struct in6_addr *daddr,
         const struct in6_addr *solicited_addr,
         bool router, bool solicited, bool override, bool inc_opt);

 void (*xfrm6_local_rxpmtu)(struct sk_buff *skb, u32 mtu);
 int (*xfrm6_udp_encap_rcv)(struct sock *sk, struct sk_buff *skb);
 struct sk_buff *(*xfrm6_gro_udp_encap_rcv)(struct sock *sk,
         struct list_head *head,
         struct sk_buff *skb);
 int (*xfrm6_rcv_encap)(struct sk_buff *skb, int nexthdr, __be32 spi,
          int encap_type);

 struct neigh_table *nd_tbl;

 int (*ipv6_fragment)(struct net *net, struct sock *sk, struct sk_buff *skb,
        int (*output)(struct net *, struct sock *, struct sk_buff *));
 struct net_device *(*ipv6_dev_find)(struct net *net, const struct in6_addr *addr,
         struct net_device *dev);
 int (*ip6_xmit)(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
   __u32 mark, struct ipv6_txoptions *opt, int tclass, u32 priority);
};
extern const struct ipv6_stub *ipv6_stub ;


struct ipv6_bpf_stub {
 int (*inet6_bind)(struct sock *sk, struct sockaddr *uaddr, int addr_len,
     u32 flags);
 struct sock *(*udp6_lib_lookup)(struct net *net,
         const struct in6_addr *saddr, __be16 sport,
         const struct in6_addr *daddr, __be16 dport,
         int dif, int sdif, struct udp_table *tbl,
         struct sk_buff *skb);
 int (*ipv6_setsockopt)(struct sock *sk, int level, int optname,
          sockptr_t optval, unsigned int optlen);
 int (*ipv6_getsockopt)(struct sock *sk, int level, int optname,
          sockptr_t optval, sockptr_t optlen);
 int (*ipv6_dev_get_saddr)(struct net *net,
      const struct net_device *dst_dev,
      const struct in6_addr *daddr,
      unsigned int prefs,
      struct in6_addr *saddr);
};
extern const struct ipv6_bpf_stub *ipv6_bpf_stub;
# 6 "../include/net/ndisc.h" 2
# 30 "../include/net/ndisc.h"
enum {
 __ND_OPT_PREFIX_INFO_END = 0,
 ND_OPT_SOURCE_LL_ADDR = 1,
 ND_OPT_TARGET_LL_ADDR = 2,
 ND_OPT_PREFIX_INFO = 3,
 ND_OPT_REDIRECT_HDR = 4,
 ND_OPT_MTU = 5,
 ND_OPT_NONCE = 14,
 __ND_OPT_ARRAY_MAX,
 ND_OPT_ROUTE_INFO = 24,
 ND_OPT_RDNSS = 25,
 ND_OPT_DNSSL = 31,
 ND_OPT_6CO = 34,
 ND_OPT_CAPTIVE_PORTAL = 37,
 ND_OPT_PREF64 = 38,
 __ND_OPT_MAX
};

# 1 "../include/linux/icmpv6.h" 1

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb)
{
 return (struct icmp6hdr *)skb_transport_header(skb);
}





typedef void ip6_icmp_send_t(struct sk_buff *skb, u8 type, u8 code, __u32 info,
        const struct in6_addr *force_saddr,
        const struct inet6_skb_parm *parm);
void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
  const struct in6_addr *force_saddr,
  const struct inet6_skb_parm *parm);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
     const struct inet6_skb_parm *parm)
{
 icmp6_send(skb, type, code, info, ((void *)0), parm);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet6_register_icmp_sender(ip6_icmp_send_t *fn)
{
 BUILD_BUG_ON(fn != icmp6_send);
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn)
{
 BUILD_BUG_ON(fn != icmp6_send);
 return 0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
{
 __icmpv6_send(skb, type, code, info, ((struct inet6_skb_parm*)((skb)->cb)));
}

int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
          unsigned int data_len);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
{
 struct inet6_skb_parm parm = { 0 };
 __icmpv6_send(skb_in, type, code, info, &parm);
}
# 78 "../include/linux/icmpv6.h"
extern int icmpv6_init(void);
extern int icmpv6_err_convert(u8 type, u8 code,
          int *err);
extern void icmpv6_cleanup(void);
extern void icmpv6_param_prob_reason(struct sk_buff *skb,
         u8 code, int pos,
         enum skb_drop_reason reason);

struct flowi6;
struct in6_addr;

void icmpv6_flow_init(const struct sock *sk, struct flowi6 *fl6, u8 type,
        const struct in6_addr *saddr,
        const struct in6_addr *daddr, int oif);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void icmpv6_param_prob(struct sk_buff *skb, u8 code, int pos)
{
 icmpv6_param_prob_reason(skb, code, pos,
     SKB_DROP_REASON_NOT_SPECIFIED);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool icmpv6_is_err(int type)
{
 switch (type) {
 case 1: /* ICMPV6_DEST_UNREACH */
 case 2: /* ICMPV6_PKT_TOOBIG */
 case 3: /* ICMPV6_TIME_EXCEED */
 case 4: /* ICMPV6_PARAMPROB */
  return true;
 }

 return false;
}
# 55 "../include/net/ndisc.h" 2
# 72 "../include/net/ndisc.h"
struct ctl_table;
struct inet6_dev;
struct net_device;
struct net_proto_family;
struct sk_buff;
struct prefix_info;

extern struct neigh_table nd_tbl;

struct nd_msg {
        struct icmp6hdr icmph;
        struct in6_addr target;
 __u8 opt[];
};

struct rs_msg {
 struct icmp6hdr icmph;
 __u8 opt[];
};

struct ra_msg {
        struct icmp6hdr icmph;
 __be32 reachable_time;
 __be32 retrans_timer;
};

struct rd_msg {
 struct icmp6hdr icmph;
 struct in6_addr target;
 struct in6_addr dest;
 __u8 opt[];
};

struct nd_opt_hdr {
 __u8 nd_opt_type;
 __u8 nd_opt_len;
} __attribute__((__packed__));


struct ndisc_options {
 struct nd_opt_hdr *nd_opt_array[__ND_OPT_ARRAY_MAX];




 struct nd_opt_hdr *nd_useropts;
 struct nd_opt_hdr *nd_useropts_end;



};
# 136 "../include/net/ndisc.h"
struct ndisc_options *ndisc_parse_options(const struct net_device *dev,
       u8 *opt, int opt_len,
       struct ndisc_options *ndopts);

void __ndisc_fill_addr_option(struct sk_buff *skb, int type, const void *data,
         int data_len, int pad);
# 202 "../include/net/ndisc.h"
struct ndisc_ops {
 int (*is_useropt)(u8 nd_opt_type);
 int (*parse_options)(const struct net_device *dev,
     struct nd_opt_hdr *nd_opt,
     struct ndisc_options *ndopts);
 void (*update)(const struct net_device *dev, struct neighbour *n,
     u32 flags, u8 icmp6_type,
     const struct ndisc_options *ndopts);
 int (*opt_addr_space)(const struct net_device *dev, u8 icmp6_type,
      struct neighbour *neigh, u8 *ha_buf,
      u8 **ha);
 void (*fill_addr_option)(const struct net_device *dev,
        struct sk_buff *skb, u8 icmp6_type,
        const u8 *ha);
 void (*prefix_rcv_add_addr)(struct net *net, struct net_device *dev,
           const struct prefix_info *pinfo,
           struct inet6_dev *in6_dev,
           struct in6_addr *addr,
           int addr_type, u32 addr_flags,
           bool sllao, bool tokenized,
           __u32 valid_lft, u32 prefered_lft,
           bool dev_addr_generated);
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_ops_is_useropt(const struct net_device *dev,
           u8 nd_opt_type)
{
 if (dev->ndisc_ops && dev->ndisc_ops->is_useropt)
  return dev->ndisc_ops->is_useropt(nd_opt_type);
 else
  return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_ops_parse_options(const struct net_device *dev,
       struct nd_opt_hdr *nd_opt,
       struct ndisc_options *ndopts)
{
 if (dev->ndisc_ops && dev->ndisc_ops->parse_options)
  return dev->ndisc_ops->parse_options(dev, nd_opt, ndopts);
 else
  return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ndisc_ops_update(const struct net_device *dev,
       struct neighbour *n, u32 flags,
       u8 icmp6_type,
       const struct ndisc_options *ndopts)
{
 if (dev->ndisc_ops && dev->ndisc_ops->update)
  dev->ndisc_ops->update(dev, n, flags, icmp6_type, ndopts);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_ops_opt_addr_space(const struct net_device *dev,
        u8 icmp6_type)
{
 if (dev->ndisc_ops && dev->ndisc_ops->opt_addr_space &&
     icmp6_type != 137 /* NDISC_REDIRECT */)
  return dev->ndisc_ops->opt_addr_space(dev, icmp6_type, ((void *)0),
            ((void *)0), ((void *)0));
 else
  return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_ops_redirect_opt_addr_space(const struct net_device *dev,
          struct neighbour *neigh,
          u8 *ha_buf, u8 **ha)
{
 if (dev->ndisc_ops && dev->ndisc_ops->opt_addr_space)
  return dev->ndisc_ops->opt_addr_space(dev, 137 /* NDISC_REDIRECT */,
            neigh, ha_buf, ha);
 else
  return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ndisc_ops_fill_addr_option(const struct net_device *dev,
           struct sk_buff *skb,
           u8 icmp6_type)
{
 if (dev->ndisc_ops && dev->ndisc_ops->fill_addr_option &&
     icmp6_type != 137 /* NDISC_REDIRECT */)
  dev->ndisc_ops->fill_addr_option(dev, skb, icmp6_type, ((void *)0));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ndisc_ops_fill_redirect_addr_option(const struct net_device *dev,
             struct sk_buff *skb,
             const u8 *ha)
{
 if (dev->ndisc_ops && dev->ndisc_ops->fill_addr_option)
  dev->ndisc_ops->fill_addr_option(dev, skb, 137 /* NDISC_REDIRECT */, ha);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ndisc_ops_prefix_rcv_add_addr(struct net *net,
       struct net_device *dev,
       const struct prefix_info *pinfo,
       struct inet6_dev *in6_dev,
       struct in6_addr *addr,
       int addr_type, u32 addr_flags,
       bool sllao, bool tokenized,
       __u32 valid_lft,
       u32 prefered_lft,
       bool dev_addr_generated)
{
 if (dev->ndisc_ops && dev->ndisc_ops->prefix_rcv_add_addr)
  dev->ndisc_ops->prefix_rcv_add_addr(net, dev, pinfo, in6_dev,
          addr, addr_type,
          addr_flags, sllao,
          tokenized, valid_lft,
          prefered_lft,
          dev_addr_generated);
}
# 321 "../include/net/ndisc.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_addr_option_pad(unsigned short type)
{
 switch (type) {
 case 32: return 2; /* ARPHRD_INFINIBAND: 20-byte hw address needs 2 pad bytes */
 default: return 0;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int __ndisc_opt_addr_space(unsigned char addr_len, int pad)
{
 return (((addr_len + pad)+2+7)&~7);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_opt_addr_space(struct net_device *dev, u8 icmp6_type)
{
 return __ndisc_opt_addr_space(dev->addr_len,
          ndisc_addr_option_pad(dev->type)) +
  ndisc_ops_opt_addr_space(dev, icmp6_type);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ndisc_redirect_opt_addr_space(struct net_device *dev,
      struct neighbour *neigh,
      u8 *ops_data_buf,
      u8 **ops_data)
{
 return __ndisc_opt_addr_space(dev->addr_len,
          ndisc_addr_option_pad(dev->type)) +
  ndisc_ops_redirect_opt_addr_space(dev, neigh, ops_data_buf,
        ops_data);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 *__ndisc_opt_addr_data(struct nd_opt_hdr *p,
     unsigned char addr_len, int prepad)
{
 u8 *lladdr = (u8 *)(p + 1);
 int lladdrlen = p->nd_opt_len << 3;
 if (lladdrlen != __ndisc_opt_addr_space(addr_len, prepad))
  return ((void *)0);
 return lladdr + prepad;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 *ndisc_opt_addr_data(struct nd_opt_hdr *p,
          struct net_device *dev)
{
 return __ndisc_opt_addr_data(p, dev->addr_len,
         ndisc_addr_option_pad(dev->type));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ndisc_hashfn(const void *pkey, const struct net_device *dev, __u32 *hash_rnd)
{
 const u32 *p32 = pkey;

 return (((p32[0] ^ hash32_ptr(dev)) * hash_rnd[0]) +
  (p32[1] * hash_rnd[1]) +
  (p32[2] * hash_rnd[2]) +
  (p32[3] * hash_rnd[3]));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *__ipv6_neigh_lookup_noref(struct net_device *dev, const void *pkey)
{
 return ___neigh_lookup_noref(&nd_tbl, neigh_key_eq128, ndisc_hashfn, pkey, dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct neighbour *__ipv6_neigh_lookup_noref_stub(struct net_device *dev,
       const void *pkey)
{
 return ___neigh_lookup_noref(ipv6_stub->nd_tbl, neigh_key_eq128,
         ndisc_hashfn, pkey, dev);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *__ipv6_neigh_lookup(struct net_device *dev, const void *pkey)
{
 struct neighbour *n;

 rcu_read_lock();
 n = __ipv6_neigh_lookup_noref(dev, pkey);
 if (n && !refcount_inc_not_zero(&n->refcnt))
  n = ((void *)0);
 rcu_read_unlock();

 return n;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __ipv6_confirm_neigh(struct net_device *dev,
     const void *pkey)
{
 struct neighbour *n;

 rcu_read_lock();
 n = __ipv6_neigh_lookup_noref(dev, pkey);
 neigh_confirm(n);
 rcu_read_unlock();
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __ipv6_confirm_neigh_stub(struct net_device *dev,
          const void *pkey)
{
 struct neighbour *n;

 rcu_read_lock();
 n = __ipv6_neigh_lookup_noref_stub(dev, pkey);
 neigh_confirm(n);
 rcu_read_unlock();
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct neighbour *ip_neigh_gw6(struct net_device *dev,
          const void *addr)
{
 struct neighbour *neigh;

 neigh = __ipv6_neigh_lookup_noref_stub(dev, addr);
 if (__builtin_expect(!!(!neigh), 0))
  neigh = __neigh_create(ipv6_stub->nd_tbl, addr, dev, false);

 return neigh;
}

int ndisc_init(void);
int ndisc_late_init(void);

void ndisc_late_cleanup(void);
void ndisc_cleanup(void);

enum skb_drop_reason ndisc_rcv(struct sk_buff *skb);

struct sk_buff *ndisc_ns_create(struct net_device *dev, const struct in6_addr *solicit,
    const struct in6_addr *saddr, u64 nonce);
void ndisc_send_ns(struct net_device *dev, const struct in6_addr *solicit,
     const struct in6_addr *daddr, const struct in6_addr *saddr,
     u64 nonce);

void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
      const struct in6_addr *saddr);

void ndisc_send_rs(struct net_device *dev,
     const struct in6_addr *saddr, const struct in6_addr *daddr);
void ndisc_send_na(struct net_device *dev, const struct in6_addr *daddr,
     const struct in6_addr *solicited_addr,
     bool router, bool solicited, bool override, bool inc_opt);

void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target);

int ndisc_mc_map(const struct in6_addr *addr, char *buf, struct net_device *dev,
   int dir);

void ndisc_update(const struct net_device *dev, struct neighbour *neigh,
    const u8 *lladdr, u8 new, u32 flags, u8 icmp6_type,
    struct ndisc_options *ndopts);




int igmp6_init(void);
int igmp6_late_init(void);

void igmp6_cleanup(void);
void igmp6_late_cleanup(void);

void igmp6_event_query(struct sk_buff *skb);

void igmp6_event_report(struct sk_buff *skb);



int ndisc_ifinfo_sysctl_change(const struct ctl_table *ctl, int write,
          void *buffer, size_t *lenp, loff_t *ppos);


void inet6_ifinfo_notify(int event, struct inet6_dev *idev);
# 30 "../include/net/route.h" 2
# 1 "../include/uapi/linux/in_route.h" 1
# 31 "../include/net/route.h" 2


# 1 "../include/uapi/linux/route.h" 1
# 31 "../include/uapi/linux/route.h"
struct rtentry {
 unsigned long rt_pad1;
 struct sockaddr rt_dst;
 struct sockaddr rt_gateway;
 struct sockaddr rt_genmask;
 unsigned short rt_flags;
 short rt_pad2;
 unsigned long rt_pad3;
 void *rt_pad4;
 short rt_metric;
 char *rt_dev;
 unsigned long rt_mtu;



 unsigned long rt_window;
 unsigned short rt_irtt;
};
# 34 "../include/net/route.h" 2




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 ip_sock_rt_scope(const struct sock *sk)
{
 if (sock_flag(sk, SOCK_LOCALROUTE))
  return RT_SCOPE_LINK;

 return RT_SCOPE_UNIVERSE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __u8 ip_sock_rt_tos(const struct sock *sk)
{
 return READ_ONCE(inet_sk(sk)->tos) & 0x1E; /* IPTOS_TOS_MASK */
}

struct ip_tunnel_info;
struct fib_nh;
struct fib_info;
struct uncached_list;
struct rtable {
 struct dst_entry dst;

 int rt_genid;
 unsigned int rt_flags;
 __u16 rt_type;
 __u8 rt_is_input;
 __u8 rt_uses_gateway;

 int rt_iif;

 u8 rt_gw_family;

 union {
  __be32 rt_gw4;
  struct in6_addr rt_gw6;
 };


 u32 rt_mtu_locked:1,
    rt_pmtu:31;
};







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *skb_rtable(const struct sk_buff *skb)
{
 return container_of_const(skb_dst(skb), struct rtable, dst);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rt_is_input_route(const struct rtable *rt)
{
 return rt->rt_is_input != 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rt_is_output_route(const struct rtable *rt)
{
 return rt->rt_is_input == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be32 rt_nexthop(const struct rtable *rt, __be32 daddr)
{
 if (rt->rt_gw_family == 2 /* AF_INET */)
  return rt->rt_gw4;
 return daddr;
}

struct ip_rt_acct {
 __u32 o_bytes;
 __u32 o_packets;
 __u32 i_bytes;
 __u32 i_packets;
};

struct rt_cache_stat {
        unsigned int in_slow_tot;
        unsigned int in_slow_mc;
        unsigned int in_no_route;
        unsigned int in_brd;
        unsigned int in_martian_dst;
        unsigned int in_martian_src;
        unsigned int out_slow_tot;
        unsigned int out_slow_mc;
};

extern struct ip_rt_acct *ip_rt_acct;

struct in_device;

int ip_rt_init(void);
void rt_cache_flush(struct net *net);
void rt_flush_dev(struct net_device *dev);
struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *flp,
     const struct sk_buff *skb);
struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *flp,
         struct fib_result *res,
         const struct sk_buff *skb);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *__ip_route_output_key(struct net *net,
         struct flowi4 *flp)
{
 return ip_route_output_key_hash(net, flp, ((void *)0));
}

struct rtable *ip_route_output_flow(struct net *, struct flowi4 *flp,
        const struct sock *sk);
struct dst_entry *ipv4_blackhole_route(struct net *net,
           struct dst_entry *dst_orig);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *ip_route_output_key(struct net *net, struct flowi4 *flp)
{
 return ip_route_output_flow(net, flp, ((void *)0));
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *ip_route_output(struct net *net, __be32 daddr,
          __be32 saddr, u8 tos, int oif,
          __u8 scope)
{
 struct flowi4 fl4 = {
  .__fl_common.flowic_oif = oif,
  .__fl_common.flowic_tos = tos,
  .__fl_common.flowic_scope = scope,
  .daddr = daddr,
  .saddr = saddr,
 };

 return ip_route_output_key(net, &fl4);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *ip_route_output_ports(struct net *net, struct flowi4 *fl4,
         const struct sock *sk,
         __be32 daddr, __be32 saddr,
         __be16 dport, __be16 sport,
         __u8 proto, __u8 tos, int oif)
{
 flowi4_init_output(fl4, oif, sk ? READ_ONCE(sk->sk_mark) : 0, tos,
      sk ? ip_sock_rt_scope(sk) : RT_SCOPE_UNIVERSE,
      proto, sk ? inet_sk_flowi_flags(sk) : 0,
      daddr, saddr, dport, sport, sock_net_uid(net, sk));
 if (sk)
  security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
 return ip_route_output_flow(net, fl4, sk);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct rtable *ip_route_output_gre(struct net *net, struct flowi4 *fl4,
       __be32 daddr, __be32 saddr,
       __be32 gre_key, __u8 tos, int oif)
{
 memset(fl4, 0, sizeof(*fl4));
 fl4->__fl_common.flowic_oif = oif;
 fl4->daddr = daddr;
 fl4->saddr = saddr;
 fl4->__fl_common.flowic_tos = tos;
 fl4->__fl_common.flowic_proto = IPPROTO_GRE;
 fl4->uli.gre_key = gre_key;
 return ip_route_output_key(net, fl4);
}
int ip_mc_validate_source(struct sk_buff *skb, __be32 daddr, __be32 saddr,
     u8 tos, struct net_device *dev,
     struct in_device *in_dev, u32 *itag);
int ip_route_input_noref(struct sk_buff *skb, __be32 dst, __be32 src,
    u8 tos, struct net_device *devin);
int ip_route_use_hint(struct sk_buff *skb, __be32 dst, __be32 src,
        u8 tos, struct net_device *devin,
        const struct sk_buff *hint);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src,
     u8 tos, struct net_device *devin)
{
 int err;

 rcu_read_lock();
 err = ip_route_input_noref(skb, dst, src, tos, devin);
 if (!err) {
  skb_dst_force(skb);
  if (!skb_dst(skb))
   err = -22; /* -EINVAL */
 }
 rcu_read_unlock();

 return err;
}

void ipv4_update_pmtu(struct sk_buff *skb, struct net *net, u32 mtu, int oif,
        u8 protocol);
void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu);
void ipv4_redirect(struct sk_buff *skb, struct net *net, int oif, u8 protocol);
void ipv4_sk_redirect(struct sk_buff *skb, struct sock *sk);
void ip_rt_send_redirect(struct sk_buff *skb);

unsigned int inet_addr_type(struct net *net, __be32 addr);
unsigned int inet_addr_type_table(struct net *net, __be32 addr, u32 tb_id);
unsigned int inet_dev_addr_type(struct net *net, const struct net_device *dev,
    __be32 addr);
unsigned int inet_addr_type_dev_table(struct net *net,
          const struct net_device *dev,
          __be32 addr);
void ip_rt_multicast_event(struct in_device *);
int ip_rt_ioctl(struct net *, unsigned int cmd, struct rtentry *rt);
void ip_rt_get_source(u8 *src, struct sk_buff *skb, struct rtable *rt);
struct rtable *rt_dst_alloc(struct net_device *dev,
       unsigned int flags, u16 type, bool noxfrm);
struct rtable *rt_dst_clone(struct net_device *dev, struct rtable *rt);

struct in_ifaddr;
void fib_add_ifaddr(struct in_ifaddr *);
void fib_del_ifaddr(struct in_ifaddr *, struct in_ifaddr *);
void fib_modify_prefix_metric(struct in_ifaddr *ifa, u32 new_metric);

void rt_add_uncached_list(struct rtable *rt);
void rt_del_uncached_list(struct rtable *rt);

int fib_dump_info_fnhe(struct sk_buff *skb, struct netlink_callback *cb,
         u32 table_id, struct fib_info *fi,
         int *fa_index, int fa_start, unsigned int flags);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ip_rt_put(struct rtable *rt)
{



 BUILD_BUG_ON(offsetof(struct rtable, dst) != 0);
 dst_release(&rt->dst);
}



extern const __u8 ip_tos2prio[16];

static inline char rt_tos2priority(u8 tos)
{
	return ip_tos2prio[IPTOS_TOS(tos) >> 1];
}
# 301 "../include/net/route.h"
static inline void ip_route_connect_init(struct flowi4 *fl4, __be32 dst,
					 __be32 src, int oif, u8 protocol,
					 __be16 sport, __be16 dport,
					 const struct sock *sk)
{
	__u8 flow_flags = 0;

	if (inet_test_bit(TRANSPARENT, sk))
		flow_flags |= FLOWI_FLAG_ANYSRC;

	flowi4_init_output(fl4, oif, READ_ONCE(sk->sk_mark), ip_sock_rt_tos(sk),
			   ip_sock_rt_scope(sk), protocol, flow_flags, dst,
			   src, dport, sport, sk->sk_uid);
}

static inline struct rtable *ip_route_connect(struct flowi4 *fl4, __be32 dst,
					      __be32 src, int oif, u8 protocol,
					      __be16 sport, __be16 dport,
					      const struct sock *sk)
{
 struct net *net = sock_net(sk);
 struct rtable *rt;

 ip_route_connect_init(fl4, dst, src, oif, protocol, sport, dport, sk);

 if (!dst || !src) {
  rt = __ip_route_output_key(net, fl4);
  if (IS_ERR(rt))
   return rt;
  ip_rt_put(rt);
  flowi4_update_output(fl4, oif, fl4->daddr, fl4->saddr);
 }
 security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
 return ip_route_output_flow(net, fl4, sk);
}

static inline struct rtable *ip_route_newports(struct flowi4 *fl4, struct rtable *rt,
					       __be16 orig_sport, __be16 orig_dport,
					       __be16 sport, __be16 dport,
					       const struct sock *sk)
{
 if (sport != orig_sport || dport != orig_dport) {
  fl4->uli.ports.dport = dport;
  fl4->uli.ports.sport = sport;
  ip_rt_put(rt);
  flowi4_update_output(fl4, sk->sk_bound_dev_if, fl4->daddr,
		       fl4->saddr);
  security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
  return ip_route_output_flow(sock_net(sk), fl4, sk);
 }
 return rt;
}

static inline int inet_iif(const struct sk_buff *skb)
{
 struct rtable *rt = skb_rtable(skb);

 if (rt && rt->rt_iif)
  return rt->rt_iif;

 return skb->skb_iif;
}

static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
{
	int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);
	struct net *net = dev_net(dst->dev);

	if (hoplimit == 0)
		hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
	return hoplimit;
}

static inline struct neighbour *ip_neigh_gw4(struct net_device *dev,
					     __be32 daddr)
{
	struct neighbour *neigh;

	neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)daddr);
	if (unlikely(!neigh))
		neigh = __neigh_create(&arp_tbl, &daddr, dev, false);

	return neigh;
}

static inline struct neighbour *ip_neigh_for_gw(struct rtable *rt,
						struct sk_buff *skb,
						bool *is_v6gw)
{
	struct net_device *dev = rt->dst.dev;
	struct neighbour *neigh;

	if (likely(rt->rt_gw_family == AF_INET)) {
		neigh = ip_neigh_gw4(dev, rt->rt_gw4);
	} else if (rt->rt_gw_family == AF_INET6) {
		neigh = ip_neigh_gw6(dev, &rt->rt_gw6);
		*is_v6gw = true;
	} else {
		neigh = ip_neigh_gw4(dev, ip_hdr(skb)->daddr);
	}
	return neigh;
}
# 31 "../include/net/ip.h" 2




# 1 "../include/net/lwtunnel.h" 1




# 1 "../include/uapi/linux/lwtunnel.h" 1






enum lwtunnel_encap_types {
 LWTUNNEL_ENCAP_NONE,
 LWTUNNEL_ENCAP_MPLS,
 LWTUNNEL_ENCAP_IP,
 LWTUNNEL_ENCAP_ILA,
 LWTUNNEL_ENCAP_IP6,
 LWTUNNEL_ENCAP_SEG6,
 LWTUNNEL_ENCAP_BPF,
 LWTUNNEL_ENCAP_SEG6_LOCAL,
 LWTUNNEL_ENCAP_RPL,
 LWTUNNEL_ENCAP_IOAM6,
 LWTUNNEL_ENCAP_XFRM,
 __LWTUNNEL_ENCAP_MAX,
};



enum lwtunnel_ip_t {
 LWTUNNEL_IP_UNSPEC,
 LWTUNNEL_IP_ID,
 LWTUNNEL_IP_DST,
 LWTUNNEL_IP_SRC,
 LWTUNNEL_IP_TTL,
 LWTUNNEL_IP_TOS,
 LWTUNNEL_IP_FLAGS,
 LWTUNNEL_IP_PAD,
 LWTUNNEL_IP_OPTS,
 __LWTUNNEL_IP_MAX,
};



enum lwtunnel_ip6_t {
 LWTUNNEL_IP6_UNSPEC,
 LWTUNNEL_IP6_ID,
 LWTUNNEL_IP6_DST,
 LWTUNNEL_IP6_SRC,
 LWTUNNEL_IP6_HOPLIMIT,
 LWTUNNEL_IP6_TC,
 LWTUNNEL_IP6_FLAGS,
 LWTUNNEL_IP6_PAD,
 LWTUNNEL_IP6_OPTS,
 __LWTUNNEL_IP6_MAX,
};



enum {
 LWTUNNEL_IP_OPTS_UNSPEC,
 LWTUNNEL_IP_OPTS_GENEVE,
 LWTUNNEL_IP_OPTS_VXLAN,
 LWTUNNEL_IP_OPTS_ERSPAN,
 __LWTUNNEL_IP_OPTS_MAX,
};



enum {
 LWTUNNEL_IP_OPT_GENEVE_UNSPEC,
 LWTUNNEL_IP_OPT_GENEVE_CLASS,
 LWTUNNEL_IP_OPT_GENEVE_TYPE,
 LWTUNNEL_IP_OPT_GENEVE_DATA,
 __LWTUNNEL_IP_OPT_GENEVE_MAX,
};



enum {
 LWTUNNEL_IP_OPT_VXLAN_UNSPEC,
 LWTUNNEL_IP_OPT_VXLAN_GBP,
 __LWTUNNEL_IP_OPT_VXLAN_MAX,
};



enum {
 LWTUNNEL_IP_OPT_ERSPAN_UNSPEC,
 LWTUNNEL_IP_OPT_ERSPAN_VER,
 LWTUNNEL_IP_OPT_ERSPAN_INDEX,
 LWTUNNEL_IP_OPT_ERSPAN_DIR,
 LWTUNNEL_IP_OPT_ERSPAN_HWID,
 __LWTUNNEL_IP_OPT_ERSPAN_MAX,
};



enum {
 LWT_BPF_PROG_UNSPEC,
 LWT_BPF_PROG_FD,
 LWT_BPF_PROG_NAME,
 __LWT_BPF_PROG_MAX,
};



enum {
 LWT_BPF_UNSPEC,
 LWT_BPF_IN,
 LWT_BPF_OUT,
 LWT_BPF_XMIT,
 LWT_BPF_XMIT_HEADROOM,
 __LWT_BPF_MAX,
};





enum {
 LWT_XFRM_UNSPEC,
 LWT_XFRM_IF_ID,
 LWT_XFRM_LINK,
 __LWT_XFRM_MAX,
};
# 6 "../include/net/lwtunnel.h" 2
# 22 "../include/net/lwtunnel.h"
enum {
 LWTUNNEL_XMIT_DONE,
 LWTUNNEL_XMIT_CONTINUE = 0x100,
};


struct lwtunnel_state {
 __u16 type;
 __u16 flags;
 __u16 headroom;
 atomic_t refcnt;
 int (*orig_output)(struct net *net, struct sock *sk, struct sk_buff *skb);
 int (*orig_input)(struct sk_buff *);
 struct callback_head rcu;
 __u8 data[];
};

struct lwtunnel_encap_ops {
 int (*build_state)(struct net *net, struct nlattr *encap,
      unsigned int family, const void *cfg,
      struct lwtunnel_state **ts,
      struct netlink_ext_ack *extack);
 void (*destroy_state)(struct lwtunnel_state *lws);
 int (*output)(struct net *net, struct sock *sk, struct sk_buff *skb);
 int (*input)(struct sk_buff *skb);
 int (*fill_encap)(struct sk_buff *skb,
     struct lwtunnel_state *lwtstate);
 int (*get_encap_size)(struct lwtunnel_state *lwtstate);
 int (*cmp_encap)(struct lwtunnel_state *a, struct lwtunnel_state *b);
 int (*xmit)(struct sk_buff *skb);

 struct module *owner;
};



extern struct static_key_false nf_hooks_lwtunnel_enabled;

void lwtstate_free(struct lwtunnel_state *lws);

static inline struct lwtunnel_state *
lwtstate_get(struct lwtunnel_state *lws)
{
 if (lws)
  atomic_inc(&lws->refcnt);

 return lws;
}

static inline void lwtstate_put(struct lwtunnel_state *lws)
{
 if (!lws)
  return;

 if (atomic_dec_and_test(&lws->refcnt))
  lwtstate_free(lws);
}
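lwtstate_get()/lwtstate_put() above implement a plain atomic refcount: get increments only when the pointer is non-NULL and returns it for chaining, put frees the object on the transition to zero. A minimal userspace sketch of that lifecycle using C11 atomics (struct and function names are illustrative, not kernel APIs):

```c
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refcnt;
};

static struct obj *obj_get(struct obj *o)
{
	if (o)
		atomic_fetch_add(&o->refcnt, 1);
	return o;		/* returned for chaining, like lwtstate_get() */
}

/* Returns 1 if this put dropped the last reference and freed the object. */
static int obj_put(struct obj *o)
{
	if (!o)
		return 0;
	if (atomic_fetch_sub(&o->refcnt, 1) == 1) {
		free(o);
		return 1;
	}
	return 0;
}
```

The free-on-zero test uses the value returned by the atomic decrement itself, so two concurrent final puts cannot both see zero and double-free.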

static inline bool lwtunnel_output_redirect(struct lwtunnel_state *lwtstate)
{
	if (lwtstate && (lwtstate->flags & LWTUNNEL_STATE_OUTPUT_REDIRECT))
		return true;

	return false;
}

static inline bool lwtunnel_input_redirect(struct lwtunnel_state *lwtstate)
{
	if (lwtstate && (lwtstate->flags & LWTUNNEL_STATE_INPUT_REDIRECT))
		return true;

	return false;
}

static inline bool lwtunnel_xmit_redirect(struct lwtunnel_state *lwtstate)
{
	if (lwtstate && (lwtstate->flags & LWTUNNEL_STATE_XMIT_REDIRECT))
		return true;

	return false;
}

static inline unsigned int lwtunnel_headroom(struct lwtunnel_state *lwtstate,
					     unsigned int mtu)
{
 if ((lwtunnel_xmit_redirect(lwtstate) ||
      lwtunnel_output_redirect(lwtstate)) && lwtstate->headroom < mtu)
  return lwtstate->headroom;

 return 0;
}

int lwtunnel_encap_add_ops(const struct lwtunnel_encap_ops *op,
      unsigned int num);
int lwtunnel_encap_del_ops(const struct lwtunnel_encap_ops *op,
      unsigned int num);
int lwtunnel_valid_encap_type(u16 encap_type,
         struct netlink_ext_ack *extack);
int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int len,
       struct netlink_ext_ack *extack);
int lwtunnel_build_state(struct net *net, u16 encap_type,
    struct nlattr *encap,
    unsigned int family, const void *cfg,
    struct lwtunnel_state **lws,
    struct netlink_ext_ack *extack);
int lwtunnel_fill_encap(struct sk_buff *skb, struct lwtunnel_state *lwtstate,
   int encap_attr, int encap_type_attr);
int lwtunnel_get_encap_size(struct lwtunnel_state *lwtstate);
struct lwtunnel_state *lwtunnel_state_alloc(int hdr_len);
int lwtunnel_cmp_encap(struct lwtunnel_state *a, struct lwtunnel_state *b);
int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb);
int lwtunnel_input(struct sk_buff *skb);
int lwtunnel_xmit(struct sk_buff *skb);
int bpf_lwt_push_ip_encap(struct sk_buff *skb, void *hdr, u32 len,
     bool ingress);

static inline void lwtunnel_set_redirect(struct dst_entry *dst)
{
 if (lwtunnel_output_redirect(dst->lwtstate)) {
  dst->lwtstate->orig_output = dst->output;
  dst->output = lwtunnel_output;
 }
 if (lwtunnel_input_redirect(dst->lwtstate)) {
  dst->lwtstate->orig_input = dst->input;
  dst->input = lwtunnel_input;
 }
}
# 36 "../include/net/ip.h" 2




extern unsigned int sysctl_fib_sync_mem;
extern unsigned int sysctl_fib_sync_mem_min;
extern unsigned int sysctl_fib_sync_mem_max;

struct sock;

struct inet_skb_parm {
 int iif;
 struct ip_options opt;
 u16 flags;
# 62 "../include/net/ip.h"
 u16 frag_max_size;
};

static inline bool ipv4_l3mdev_skb(u16 flags)
{
	return !!(flags & IPSKB_L3SLAVE);
}

static inline unsigned int ip_hdrlen(const struct sk_buff *skb)
{
 return ip_hdr(skb)->ihl * 4;
}

struct ipcm_cookie {
 struct sockcm_cookie sockc;
 __be32 addr;
 int oif;
 struct ip_options_rcu *opt;
 __u8 protocol;
 __u8 ttl;
 __s16 tos;
 char priority;
 __u16 gso_size;
};

static inline void ipcm_init(struct ipcm_cookie *ipcm)
{
 *ipcm = (struct ipcm_cookie) { .tos = -1 };
}

static inline void ipcm_init_sk(struct ipcm_cookie *ipcm,
				const struct inet_sock *inet)
{
	ipcm_init(ipcm);

	ipcm->sockc.mark = READ_ONCE(inet->sk.sk_mark);
	ipcm->sockc.tsflags = READ_ONCE(inet->sk.sk_tsflags);
	ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if);
	ipcm->addr = inet->inet_saddr;
	ipcm->protocol = inet->inet_num;
}





static inline int inet_sdif(const struct sk_buff *skb)
{




 return 0;
}
# 128 "../include/net/ip.h"
struct ip_ra_chain {
 struct ip_ra_chain *next;
 struct sock *sk;
 union {
  void (*destructor)(struct sock *);
  struct sock *saved_sk;
 };
 struct callback_head rcu;
};
# 146 "../include/net/ip.h"
struct msghdr;
struct net_device;
struct packet_type;
struct rtable;
struct sockaddr;

int igmp_mc_init(void);





int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
     __be32 saddr, __be32 daddr,
     struct ip_options_rcu *opt, u8 tos);
int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
    struct net_device *orig_dev);
void ip_list_rcv(struct list_head *head, struct packet_type *pt,
   struct net_device *orig_dev);
int ip_local_deliver(struct sk_buff *skb);
void ip_protocol_deliver_rcu(struct net *net, struct sk_buff *skb, int proto);
int ip_mr_input(struct sk_buff *skb);
int ip_output(struct net *net, struct sock *sk, struct sk_buff *skb);
int ip_mc_output(struct net *net, struct sock *sk, struct sk_buff *skb);
int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
     int (*output)(struct net *, struct sock *, struct sk_buff *));

struct ip_fraglist_iter {
 struct sk_buff *frag;
 struct iphdr *iph;
 int offset;
 unsigned int hlen;
};

void ip_fraglist_init(struct sk_buff *skb, struct iphdr *iph,
        unsigned int hlen, struct ip_fraglist_iter *iter);
void ip_fraglist_prepare(struct sk_buff *skb, struct ip_fraglist_iter *iter);

static inline struct sk_buff *ip_fraglist_next(struct ip_fraglist_iter *iter)
{
 struct sk_buff *skb = iter->frag;

 iter->frag = skb->next;
 skb_mark_not_on_list(skb);

 return skb;
}

struct ip_frag_state {
 bool DF;
 unsigned int hlen;
 unsigned int ll_rs;
 unsigned int mtu;
 unsigned int left;
 int offset;
 int ptr;
 __be16 not_last_frag;
};

void ip_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int ll_rs,
    unsigned int mtu, bool DF, struct ip_frag_state *state);
struct sk_buff *ip_frag_next(struct sk_buff *skb,
        struct ip_frag_state *state);

void ip_send_check(struct iphdr *ip);
int __ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb);
int ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb);

int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl,
      __u8 tos);
void ip_init(void);
int ip_append_data(struct sock *sk, struct flowi4 *fl4,
     int getfrag(void *from, char *to, int offset, int len,
          int odd, struct sk_buff *skb),
     void *from, int len, int protolen,
     struct ipcm_cookie *ipc,
     struct rtable **rt,
     unsigned int flags);
int ip_generic_getfrag(void *from, char *to, int offset, int len, int odd,
         struct sk_buff *skb);
struct sk_buff *__ip_make_skb(struct sock *sk, struct flowi4 *fl4,
         struct sk_buff_head *queue,
         struct inet_cork *cork);
int ip_send_skb(struct net *net, struct sk_buff *skb);
int ip_push_pending_frames(struct sock *sk, struct flowi4 *fl4);
void ip_flush_pending_frames(struct sock *sk);
struct sk_buff *ip_make_skb(struct sock *sk, struct flowi4 *fl4,
       int getfrag(void *from, char *to, int offset,
     int len, int odd, struct sk_buff *skb),
       void *from, int length, int transhdrlen,
       struct ipcm_cookie *ipc, struct rtable **rtp,
       struct inet_cork *cork, unsigned int flags);

int ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl);

static inline struct sk_buff *ip_finish_skb(struct sock *sk, struct flowi4 *fl4)
{
	return __ip_make_skb(sk, fl4, &sk->sk_write_queue,
			     &inet_sk(sk)->cork.base);
}


static inline u8 ip_sendmsg_scope(const struct inet_sock *inet,
				  const struct ipcm_cookie *ipc,
				  const struct msghdr *msg)
{
	if (sock_flag(&inet->sk, SOCK_LOCALROUTE) ||
	    msg->msg_flags & MSG_DONTROUTE ||
	    (ipc->opt && ipc->opt->opt.is_strictroute))
		return RT_SCOPE_LINK;

	return RT_SCOPE_UNIVERSE;
}

static inline __u8 get_rttos(struct ipcm_cookie *ipc, struct inet_sock *inet)
{
	return (ipc->tos != -1) ? RT_TOS(ipc->tos) :
				  RT_TOS(READ_ONCE(inet->tos));
}


int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len);
int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len);

void ip4_datagram_release_cb(struct sock *sk);

struct ip_reply_arg {
 struct kvec iov[1];
 int flags;
 __wsum csum;
 int csumoffset;

 int bound_dev_if;
 u8 tos;
 kuid_t uid;
};



static inline __u8 ip_reply_arg_flowi_flags(const struct ip_reply_arg *arg)
{
 return (arg->flags & 1) ? 0x01 : 0;
}

void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
      const struct ip_options *sopt,
      __be32 daddr, __be32 saddr,
      const struct ip_reply_arg *arg,
      unsigned int len, u64 transmit_time, u32 txhash);
# 305 "../include/net/ip.h"
static inline u64 snmp_get_cpu_field(void *mib, int cpu, int offt)
{
	return *(((unsigned long *)per_cpu_ptr(mib, cpu)) + offt);
}

unsigned long snmp_fold_field(void *mib, int offt);

u64 snmp_get_cpu_field64(void *mib, int cpu, int offct,
    size_t syncp_offset);
u64 snmp_fold_field64(void *mib, int offt, size_t sync_off);
# 352 "../include/net/ip.h"
static inline void inet_get_local_port_range(const struct net *net, int *low, int *high)
{
	u32 range = READ_ONCE(net->ipv4.ip_local_ports.range);

	*low = range & 0xffff;
	*high = range >> 16;
}
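inet_get_local_port_range() above decodes both bounds from a single u32 read so the pair is always observed together: the low port occupies the bottom 16 bits and the high port the top 16. A minimal sketch of that packing convention (the helper names are illustrative, not kernel functions):

```c
#include <stdint.h>

/* Pack a low/high port pair into one 32-bit word, the inverse of the
 * "*low = range & 0xffff; *high = range >> 16;" decode above. */
static uint32_t pack_port_range(uint16_t low, uint16_t high)
{
	return ((uint32_t)high << 16) | low;
}

/* Decode exactly as the kernel helper does. */
static void unpack_port_range(uint32_t range, int *low, int *high)
{
	*low = range & 0xffff;
	*high = range >> 16;
}
```

Keeping both bounds in one word means a reader can never pair a low bound from one update with a high bound from another.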
bool inet_sk_get_local_port_range(const struct sock *sk, int *low, int *high);


static inline bool inet_is_local_reserved_port(struct net *net, unsigned short port)
{
	if (!net->ipv4.sysctl_local_reserved_ports)
		return false;
	return test_bit(port, net->ipv4.sysctl_local_reserved_ports);
}

static inline bool sysctl_dev_name_is_allowed(const char *name)
{
 return strcmp(name, "default") != 0 && strcmp(name, "all") != 0;
}

static inline bool inet_port_requires_bind_service(struct net *net, unsigned short port)
{
	return port < READ_ONCE(net->ipv4.sysctl_ip_prot_sock);
}
# 391 "../include/net/ip.h"
__be32 inet_current_timestamp(void);


extern int inet_peer_threshold;
extern int inet_peer_minttl;
extern int inet_peer_maxttl;

void ipfrag_init(void);

void ip_static_sysctl_init(void);




static inline bool ip_is_fragment(const struct iphdr *iph)
{
	return (iph->frag_off & htons(IP_MF | IP_OFFSET)) != 0;
}
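ip_is_fragment() above declares a datagram a fragment when either the MF (more-fragments) flag 0x2000 is set or the 13-bit fragment offset (mask 0x1FFF) is non-zero; the constant is byte-swapped at compile time so frag_off can be tested directly in network order. A host-order sketch of the same predicate, with the constants taken from the expansion above:

```c
#include <stdbool.h>
#include <stdint.h>

#define IP_MF     0x2000	/* "more fragments" flag */
#define IP_OFFSET 0x1FFF	/* fragment offset mask  */

/* A first fragment has offset 0 but MF set; the last fragment has MF
 * clear but a non-zero offset, so either condition marks a fragment. */
static bool is_fragment(uint16_t frag_off_host)
{
	return (frag_off_host & (IP_MF | IP_OFFSET)) != 0;
}
```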






static inline
int ip_decrease_ttl(struct iphdr *iph)
{
	u32 check = (__force u32)iph->check;

	check += (__force u32)htons(0x0100);
	iph->check = (__force __sum16)(check + (check >= 0xFFFF));
	return --iph->ttl;
}
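ip_decrease_ttl() above uses an incremental checksum update (in the spirit of RFC 1624) instead of recomputing the header sum: decrementing the TTL lowers the TTL/protocol word by 0x0100, so adding 0x0100 back into the stored (complemented) checksum, with an end-around carry, keeps it valid. A standalone host-order sketch, where the helper names are hypothetical rather than kernel APIs:

```c
#include <stddef.h>
#include <stdint.h>

/* Full ones'-complement checksum over n 16-bit words (host order),
 * used only to cross-check the incremental update. */
static uint16_t csum_full(const uint16_t *words, size_t n)
{
	uint32_t sum = 0;

	for (size_t i = 0; i < n; i++)
		sum += words[i];
	while (sum >> 16)			/* fold carries back in */
		sum = (sum & 0xFFFF) + (sum >> 16);
	return (uint16_t)~sum;
}

/* Incremental update mirroring ip_decrease_ttl(): the data word lost
 * 0x0100, so the complemented checksum gains 0x0100; "c >= 0xFFFF"
 * performs the ones'-complement end-around carry. */
static uint16_t csum_dec_ttl(uint16_t check)
{
	uint32_t c = (uint32_t)check + 0x0100;

	return (uint16_t)(c + (c >= 0xFFFF));
}
```

Decrementing the TTL word of a header and recomputing from scratch yields the same value as the one-addition incremental form, which is why the kernel can skip the full pass on every forwarded packet.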

static inline int ip_mtu_locked(const struct dst_entry *dst)
{
	const struct rtable *rt = dst_rtable(dst);

	return rt->rt_mtu_locked || dst_metric_locked(dst, RTAX_MTU);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
int ip_dont_fragment(const struct sock *sk, const struct dst_entry *dst)
{
 u8 pmtudisc = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_542(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long long))) 
__compiletime_assert_542(); } while (0); (*(const volatile typeof( _Generic((_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc))) *)&(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); 
_Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc)); });

 return pmtudisc == 2 ||
  (pmtudisc == 1 &&
   !ip_mtu_locked(dst));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ip_sk_accept_pmtu(const struct sock *sk)
{
 u8 pmtudisc = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_543(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long long))) 
__compiletime_assert_543(); } while (0); (*(const volatile typeof( _Generic((_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc))) *)&(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); 
_Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc)); });

 return pmtudisc != 4 &&
        pmtudisc != 5;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ip_sk_use_pmtu(const struct sock *sk)
{
 return ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_544(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); 
((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long long))) 
__compiletime_assert_544(); } while (0); (*(const volatile typeof( _Generic((_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc))) *)&(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); 
_Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc)); }) < 3;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ip_sk_ignore_df(const struct sock *sk)
{
 u8 pmtudisc = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_545(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(char) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(short) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in 
container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(int) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long)) || sizeof(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc) == sizeof(long long))) 
__compiletime_assert_545(); } while (0); (*(const volatile typeof( _Generic((_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc))) *)&(_Generic(sk, const typeof(*(sk)) *: ((const struct inet_sock *)({ void *__mptr = (void *)(sk); 
_Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })), default: ((struct inet_sock *)({ void *__mptr = (void *)(sk); _Static_assert(__builtin_types_compatible_p(typeof(*(sk)), typeof(((struct inet_sock *)0)->sk)) || __builtin_types_compatible_p(typeof(*(sk)), typeof(void)), "pointer type mismatch in container_of()"); ((struct inet_sock *)(__mptr - __builtin_offsetof(struct inet_sock, sk))); })) )->pmtudisc)); });

 return pmtudisc < 2 || pmtudisc == 5;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
          bool forwarding)
{
 const struct rtable *rt = _Generic(dst, const typeof(*(dst)) *: ((const struct rtable *)({ void *__mptr = (void *)(dst); _Static_assert(__builtin_types_compatible_p(typeof(*(dst)), typeof(((struct rtable *)0)->dst)) || __builtin_types_compatible_p(typeof(*(dst)), typeof(void)), "pointer type mismatch in container_of()"); ((struct rtable *)(__mptr - __builtin_offsetof(struct rtable, dst))); })), default: ((struct rtable *)({ void *__mptr = (void *)(dst); _Static_assert(__builtin_types_compatible_p(typeof(*(dst)), typeof(((struct rtable *)0)->dst)) || __builtin_types_compatible_p(typeof(*(dst)), typeof(void)), "pointer type mismatch in container_of()"); ((struct rtable *)(__mptr - __builtin_offsetof(struct rtable, dst))); })) );
 struct net *net = dev_net(dst->dev);
 unsigned int mtu;

 if (({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_546(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(net->ipv4.sysctl_ip_fwd_use_pmtu) == sizeof(char) || sizeof(net->ipv4.sysctl_ip_fwd_use_pmtu) == sizeof(short) || sizeof(net->ipv4.sysctl_ip_fwd_use_pmtu) == sizeof(int) || sizeof(net->ipv4.sysctl_ip_fwd_use_pmtu) == sizeof(long)) || sizeof(net->ipv4.sysctl_ip_fwd_use_pmtu) == sizeof(long long))) __compiletime_assert_546(); } while (0); (*(const volatile typeof( _Generic((net->ipv4.sysctl_ip_fwd_use_pmtu), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (net->ipv4.sysctl_ip_fwd_use_pmtu))) *)&(net->ipv4.sysctl_ip_fwd_use_pmtu)); }) ||
     ip_mtu_locked(dst) ||
     !forwarding) {
  mtu = rt->rt_pmtu;
  if (mtu && (({ unsigned long __dummy; typeof(rt->dst.expires) __dummy2; (void)(&__dummy == &__dummy2); 1; }) && ({ unsigned long __dummy; typeof(jiffies) __dummy2; (void)(&__dummy == &__dummy2); 1; }) && ((long)((jiffies) - (rt->dst.expires)) < 0)))
   goto out;
 }


 mtu = dst_metric_raw(dst, RTAX_MTU);
 if (mtu)
  goto out;

 mtu = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_547(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(dst->dev->mtu) == sizeof(char) || sizeof(dst->dev->mtu) == sizeof(short) || sizeof(dst->dev->mtu) == sizeof(int) || sizeof(dst->dev->mtu) == sizeof(long)) || sizeof(dst->dev->mtu) == sizeof(long long))) __compiletime_assert_547(); } while (0); (*(const volatile typeof( _Generic((dst->dev->mtu), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (dst->dev->mtu))) *)&(dst->dev->mtu)); });

 if (__builtin_expect(!!(ip_mtu_locked(dst)), 0)) {
  if (rt->rt_uses_gateway && mtu > 576)
   mtu = 576;
 }

out:
 mtu = ({ unsigned int __UNIQUE_ID_x_548 = (mtu); unsigned int __UNIQUE_ID_y_549 = (0xFFFFU); ((__UNIQUE_ID_x_548) < (__UNIQUE_ID_y_549) ? (__UNIQUE_ID_x_548) : (__UNIQUE_ID_y_549)); });

 return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
}

static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
					  const struct sk_buff *skb)
{
 unsigned int mtu;

 if (!sk || !sk_fullsock(sk) || ip_sk_use_pmtu(sk)) {
		bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;

  return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding);
 }

	mtu = min_t(unsigned int, READ_ONCE(skb_dst(skb)->dev->mtu), 0xFFFFU);
 return mtu - lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
}

struct dst_metrics *ip_fib_metrics_init(struct nlattr *fc_mx, int fc_mx_len,
     struct netlink_ext_ack *extack);
static inline void ip_fib_metrics_put(struct dst_metrics *fib_metrics)
{
 if (fib_metrics != &dst_default_metrics &&
     refcount_dec_and_test(&fib_metrics->refcnt))
  kfree(fib_metrics);
}


static inline
void ip_dst_init_metrics(struct dst_entry *dst, struct dst_metrics *fib_metrics)
{
 dst_init_metrics(dst, fib_metrics->metrics, true);

 if (fib_metrics != &dst_default_metrics) {
  dst->_metrics |= 0x2UL;
  refcount_inc(&fib_metrics->refcnt);
 }
}

static inline
void ip_dst_metrics_put(struct dst_entry *dst)
{
 struct dst_metrics *p = (struct dst_metrics *)((u32 *)(((dst)->_metrics) & ~0x3UL));

 if (p != &dst_default_metrics && refcount_dec_and_test(&p->refcnt))
  kfree(p);
}

void __ip_select_ident(struct net *net, struct iphdr *iph, int segs);

static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
					struct sock *sk, int segs)
{
 struct iphdr *iph = ip_hdr(skb);




	if (sk && inet_sk(sk)->inet_daddr) {
  int val;




  if (sk_is_tcp(sk)) {
   sock_owned_by_me(sk);
			val = atomic_read(&inet_sk(sk)->inet_id);
			atomic_set(&inet_sk(sk)->inet_id, val + segs);
  } else {
			val = atomic_add_return(segs, &inet_sk(sk)->inet_id);
  }
		iph->id = htons(val);
  return;
 }
	if ((iph->frag_off & htons(0x4000)) && !skb->ignore_df) {
  iph->id = 0;
 } else {

  __ip_select_ident(net, iph, segs);
 }
}

static inline void ip_select_ident(struct net *net, struct sk_buff *skb,
				   struct sock *sk)
{
 ip_select_ident_segs(net, skb, sk, 1);
}

static inline __wsum inet_compute_pseudo(struct sk_buff *skb, int proto)
{
 return csum_tcpudp_nofold(ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
      skb->len, proto, 0);
}





static inline void iph_to_flow_copy_v4addrs(struct flow_keys *flow,
					    const struct iphdr *iph)
{
	BUILD_BUG_ON(offsetof(typeof(flow->addrs), v4addrs.dst) !=
		     offsetof(typeof(flow->addrs), v4addrs.src) +
		     sizeof(flow->addrs.v4addrs.src));


 memcpy(&flow->addrs.v4addrs, &iph->addrs, sizeof(flow->addrs.v4addrs));
 flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
}





static inline void ip_eth_mc_map(__be32 naddr, char *buf)
{
	__u32 addr = ntohl(naddr);
 buf[0]=0x01;
 buf[1]=0x00;
 buf[2]=0x5e;
 buf[5]=addr&0xFF;
 addr>>=8;
 buf[4]=addr&0xFF;
 addr>>=8;
 buf[3]=addr&0x7F;
}






static inline void ip_ib_mc_map(__be32 naddr, const unsigned char *broadcast, char *buf)
{
 __u32 addr;
 unsigned char scope = broadcast[5] & 0xF;

 buf[0] = 0;
 buf[1] = 0xff;
 buf[2] = 0xff;
 buf[3] = 0xff;
	addr = ntohl(naddr);
 buf[4] = 0xff;
 buf[5] = 0x10 | scope;
 buf[6] = 0x40;
 buf[7] = 0x1b;
 buf[8] = broadcast[8];
 buf[9] = broadcast[9];
 buf[10] = 0;
 buf[11] = 0;
 buf[12] = 0;
 buf[13] = 0;
 buf[14] = 0;
 buf[15] = 0;
 buf[19] = addr & 0xff;
 addr >>= 8;
 buf[18] = addr & 0xff;
 addr >>= 8;
 buf[17] = addr & 0xff;
 addr >>= 8;
 buf[16] = addr & 0x0f;
}

static inline void ip_ipgre_mc_map(__be32 naddr, const unsigned char *broadcast, char *buf)
{
 if ((broadcast[0] | broadcast[1] | broadcast[2] | broadcast[3]) != 0)
  memcpy(buf, broadcast, 4);
 else
  memcpy(buf, &naddr, sizeof(naddr));
}





static inline void inet_reset_saddr(struct sock *sk)
{
	inet_sk(sk)->inet_rcv_saddr = inet_sk(sk)->inet_saddr = 0;

	if (sk->sk_family == PF_INET6) {
  struct ipv6_pinfo *np = inet6_sk(sk);

  memset(&np->saddr, 0, sizeof(np->saddr));
		memset(&sk->sk_v6_rcv_saddr, 0, sizeof(sk->sk_v6_rcv_saddr));
 }

}



static inline unsigned int ipv4_addr_hash(__be32 ip)
{
 return ( unsigned int) ip;
}

static inline u32 ipv4_portaddr_hash(const struct net *net,
				     __be32 saddr,
				     unsigned int port)
{
 return jhash_1word(( u32)saddr, net_hash_mix(net)) ^ port;
}

bool ip_call_ra_chain(struct sk_buff *skb);





enum ip_defrag_users {
 IP_DEFRAG_LOCAL_DELIVER,
 IP_DEFRAG_CALL_RA_CHAIN,
 IP_DEFRAG_CONNTRACK_IN,
 __IP_DEFRAG_CONNTRACK_IN_END = IP_DEFRAG_CONNTRACK_IN + ((unsigned short)~0U),
 IP_DEFRAG_CONNTRACK_OUT,
 __IP_DEFRAG_CONNTRACK_OUT_END = IP_DEFRAG_CONNTRACK_OUT + ((unsigned short)~0U),
 IP_DEFRAG_CONNTRACK_BRIDGE_IN,
 __IP_DEFRAG_CONNTRACK_BRIDGE_IN = IP_DEFRAG_CONNTRACK_BRIDGE_IN + ((unsigned short)~0U),
 IP_DEFRAG_VS_IN,
 IP_DEFRAG_VS_OUT,
 IP_DEFRAG_VS_FWD,
 IP_DEFRAG_AF_PACKET,
 IP_DEFRAG_MACVLAN,
};




static inline bool ip_defrag_user_in_between(u32 user,
					     enum ip_defrag_users lower_bond,
					     enum ip_defrag_users upper_bond)
{
 return user >= lower_bond && user <= upper_bond;
}

int ip_defrag(struct net *net, struct sk_buff *skb, u32 user);

struct sk_buff *ip_check_defrag(struct net *net, struct sk_buff *skb, u32 user);
# 737 "../include/net/ip.h"
int ip_forward(struct sk_buff *skb);





void ip_options_build(struct sk_buff *skb, struct ip_options *opt,
        __be32 daddr, struct rtable *rt);

int __ip_options_echo(struct net *net, struct ip_options *dopt,
        struct sk_buff *skb, const struct ip_options *sopt);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ip_options_echo(struct net *net, struct ip_options *dopt,
      struct sk_buff *skb)
{
 return __ip_options_echo(net, dopt, skb, &((struct inet_skb_parm*)((skb)->cb))->opt);
}

void ip_options_fragment(struct sk_buff *skb);
int __ip_options_compile(struct net *net, struct ip_options *opt,
    struct sk_buff *skb, __be32 *info);
int ip_options_compile(struct net *net, struct ip_options *opt,
         struct sk_buff *skb);
int ip_options_get(struct net *net, struct ip_options_rcu **optp,
     sockptr_t data, int optlen);
void ip_options_undo(struct ip_options *opt);
void ip_forward_options(struct sk_buff *skb);
int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);





void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb, bool drop_dst);
void ip_cmsg_recv_offset(struct msghdr *msg, struct sock *sk,
    struct sk_buff *skb, int tlen, int offset);
int ip_cmsg_send(struct sock *sk, struct msghdr *msg,
   struct ipcm_cookie *ipc, bool allow_ipv6);
extern struct static_key_false ip4_min_ttl;
int do_ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
       unsigned int optlen);
int ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
    unsigned int optlen);
int do_ip_getsockopt(struct sock *sk, int level, int optname,
       sockptr_t optval, sockptr_t optlen);
int ip_getsockopt(struct sock *sk, int level, int optname, char *optval,
    int *optlen);
int ip_ra_control(struct sock *sk, unsigned char on,
    void (*destructor)(struct sock *));

int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len);
void ip_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
     u32 info, u8 *payload);
void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
      u32 info);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
{
 ip_cmsg_recv_offset(msg, skb->sk, skb, 0, 0);
}

bool icmp_global_allow(void);
extern int sysctl_icmp_msgs_per_sec;
extern int sysctl_icmp_msgs_burst;


int ip_misc_proc_init(void);


int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family,
    struct netlink_ext_ack *extack);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool inetdev_valid_mtu(unsigned int mtu)
{
 return __builtin_expect(!!(mtu >= 68), 1);
}

void ip_sock_set_freebind(struct sock *sk);
int ip_sock_set_mtu_discover(struct sock *sk, int val);
void ip_sock_set_pktinfo(struct sock *sk);
void ip_sock_set_recverr(struct sock *sk);
void ip_sock_set_tos(struct sock *sk, int val);
void __ip_sock_set_tos(struct sock *sk, int val);
# 27 "../include/rdma/ib_verbs.h" 2






# 1 "../include/linux/mmu_notifier.h" 1
# 10 "../include/linux/mmu_notifier.h"
# 1 "../include/linux/interval_tree.h" 1






struct interval_tree_node {
 struct rb_node rb;
 unsigned long start;
 unsigned long last;
 unsigned long __subtree_last;
};

extern void
interval_tree_insert(struct interval_tree_node *node,
       struct rb_root_cached *root);

extern void
interval_tree_remove(struct interval_tree_node *node,
       struct rb_root_cached *root);

extern struct interval_tree_node *
interval_tree_iter_first(struct rb_root_cached *root,
    unsigned long start, unsigned long last);

extern struct interval_tree_node *
interval_tree_iter_next(struct interval_tree_node *node,
   unsigned long start, unsigned long last);
# 49 "../include/linux/interval_tree.h"
struct interval_tree_span_iter {

 struct interval_tree_node *nodes[2];
 unsigned long first_index;
 unsigned long last_index;


 union {
  unsigned long start_hole;
  unsigned long start_used;
 };
 union {
  unsigned long last_hole;
  unsigned long last_used;
 };
 int is_hole;
};

void interval_tree_span_iter_first(struct interval_tree_span_iter *state,
       struct rb_root_cached *itree,
       unsigned long first_index,
       unsigned long last_index);
void interval_tree_span_iter_advance(struct interval_tree_span_iter *iter,
         struct rb_root_cached *itree,
         unsigned long new_index);
void interval_tree_span_iter_next(struct interval_tree_span_iter *state);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
interval_tree_span_iter_done(struct interval_tree_span_iter *state)
{
 return state->is_hole == -1;
}
# 11 "../include/linux/mmu_notifier.h" 2

struct mmu_notifier_subscriptions;
struct mmu_notifier;
struct mmu_notifier_range;
struct mmu_interval_notifier;
# 51 "../include/linux/mmu_notifier.h"
enum mmu_notifier_event {
 MMU_NOTIFY_UNMAP = 0,
 MMU_NOTIFY_CLEAR,
 MMU_NOTIFY_PROTECTION_VMA,
 MMU_NOTIFY_PROTECTION_PAGE,
 MMU_NOTIFY_SOFT_DIRTY,
 MMU_NOTIFY_RELEASE,
 MMU_NOTIFY_MIGRATE,
 MMU_NOTIFY_EXCLUSIVE,
};



struct mmu_notifier_ops {
# 88 "../include/linux/mmu_notifier.h"
 void (*release)(struct mmu_notifier *subscription,
   struct mm_struct *mm);
# 100 "../include/linux/mmu_notifier.h"
 int (*clear_flush_young)(struct mmu_notifier *subscription,
     struct mm_struct *mm,
     unsigned long start,
     unsigned long end);






 int (*clear_young)(struct mmu_notifier *subscription,
      struct mm_struct *mm,
      unsigned long start,
      unsigned long end);







 int (*test_young)(struct mmu_notifier *subscription,
     struct mm_struct *mm,
     unsigned long address);
# 175 "../include/linux/mmu_notifier.h"
 int (*invalidate_range_start)(struct mmu_notifier *subscription,
          const struct mmu_notifier_range *range);
 void (*invalidate_range_end)(struct mmu_notifier *subscription,
         const struct mmu_notifier_range *range);
# 197 "../include/linux/mmu_notifier.h"
 void (*arch_invalidate_secondary_tlbs)(
     struct mmu_notifier *subscription,
     struct mm_struct *mm,
     unsigned long start,
     unsigned long end);
# 213 "../include/linux/mmu_notifier.h"
 struct mmu_notifier *(*alloc_notifier)(struct mm_struct *mm);
 void (*free_notifier)(struct mmu_notifier *subscription);
};
# 228 "../include/linux/mmu_notifier.h"
struct mmu_notifier {
 struct hlist_node hlist;
 const struct mmu_notifier_ops *ops;
 struct mm_struct *mm;
 struct callback_head rcu;
 unsigned int users;
};







struct mmu_interval_notifier_ops {
 bool (*invalidate)(struct mmu_interval_notifier *interval_sub,
      const struct mmu_notifier_range *range,
      unsigned long cur_seq);
};

struct mmu_interval_notifier {
 struct interval_tree_node interval_tree;
 const struct mmu_interval_notifier_ops *ops;
 struct mm_struct *mm;
 struct hlist_node deferred_item;
 unsigned long invalidate_seq;
};
# 568 "../include/linux/mmu_notifier.h"
struct mmu_notifier_range {
 unsigned long start;
 unsigned long end;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void _mmu_notifier_range_init(struct mmu_notifier_range *range,
         unsigned long start,
         unsigned long end)
{
 range->start = start;
 range->end = end;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool
mmu_notifier_range_blockable(const struct mmu_notifier_range *range)
{
 return true;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mm_has_notifiers(struct mm_struct *mm)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmu_notifier_release(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mmu_notifier_clear_flush_young(struct mm_struct *mm,
       unsigned long start,
       unsigned long end)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int mmu_notifier_test_young(struct mm_struct *mm,
       unsigned long address)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
{
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm,
      unsigned long start, unsigned long end)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmu_notifier_subscriptions_init(struct mm_struct *mm)
{
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmu_notifier_subscriptions_destroy(struct mm_struct *mm)
{
}
# 654 "../include/linux/mmu_notifier.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void mmu_notifier_synchronize(void)
{
}
# 34 "../include/rdma/ib_verbs.h" 2

# 1 "../include/linux/cgroup_rdma.h" 1
# 11 "../include/linux/cgroup_rdma.h"
enum rdmacg_resource_type {
 RDMACG_RESOURCE_HCA_HANDLE,
 RDMACG_RESOURCE_HCA_OBJECT,
 RDMACG_RESOURCE_MAX,
};
# 36 "../include/rdma/ib_verbs.h" 2


# 1 "../include/linux/dim.h" 1
# 13 "../include/linux/dim.h"
struct net_device;
# 52 "../include/linux/dim.h"
struct dim_cq_moder {
 u16 usec;
 u16 pkts;
 u16 comps;
 u8 cq_period_mode;
 struct callback_head rcu;
};
# 80 "../include/linux/dim.h"
struct dim_irq_moder {
 u8 profile_flags;
 u8 coal_flags;
 u8 dim_rx_mode;
 u8 dim_tx_mode;
 struct dim_cq_moder *rx_profile;
 struct dim_cq_moder *tx_profile;
 void (*rx_dim_work)(struct work_struct *work);
 void (*tx_dim_work)(struct work_struct *work);
};
# 101 "../include/linux/dim.h"
struct dim_sample {
 ktime_t time;
 u32 pkt_ctr;
 u32 byte_ctr;
 u16 event_ctr;
 u32 comp_ctr;
};
# 119 "../include/linux/dim.h"
struct dim_stats {
 int ppms;
 int bpms;
 int epms;
 int cpms;
 int cpe_ratio;
};
# 144 "../include/linux/dim.h"
struct dim {
 u8 state;
 struct dim_stats prev_stats;
 struct dim_sample start_sample;
 struct dim_sample measuring_sample;
 struct work_struct work;
 void *priv;
 u8 profile_ix;
 u8 mode;
 u8 tune_state;
 u8 steps_right;
 u8 steps_left;
 u8 tired;
};
# 166 "../include/linux/dim.h"
enum dim_cq_period_mode {
 DIM_CQ_PERIOD_MODE_START_FROM_EQE = 0x0,
 DIM_CQ_PERIOD_MODE_START_FROM_CQE = 0x1,
 DIM_CQ_PERIOD_NUM_MODES
};
# 182 "../include/linux/dim.h"
enum dim_state {
 DIM_START_MEASURE,
 DIM_MEASURE_IN_PROGRESS,
 DIM_APPLY_NEW_PROFILE,
};
# 198 "../include/linux/dim.h"
enum dim_tune_state {
 DIM_PARKING_ON_TOP,
 DIM_PARKING_TIRED,
 DIM_GOING_RIGHT,
 DIM_GOING_LEFT,
};
# 214 "../include/linux/dim.h"
enum dim_stats_state {
 DIM_STATS_WORSE,
 DIM_STATS_SAME,
 DIM_STATS_BETTER,
};
# 230 "../include/linux/dim.h"
enum dim_step_result {
 DIM_STEPPED,
 DIM_TOO_TIRED,
 DIM_ON_EDGE,
};
# 248 "../include/linux/dim.h"
int net_dim_init_irq_moder(struct net_device *dev, u8 profile_flags,
      u8 coal_flags, u8 rx_mode, u8 tx_mode,
      void (*rx_dim_work)(struct work_struct *work),
      void (*tx_dim_work)(struct work_struct *work));





void net_dim_free_irq_moder(struct net_device *dev);







void net_dim_setting(struct net_device *dev, struct dim *dim, bool is_tx);





void net_dim_work_cancel(struct dim *dim);
# 280 "../include/linux/dim.h"
struct dim_cq_moder
net_dim_get_rx_irq_moder(struct net_device *dev, struct dim *dim);
# 290 "../include/linux/dim.h"
struct dim_cq_moder
net_dim_get_tx_irq_moder(struct net_device *dev, struct dim *dim);






void net_dim_set_rx_mode(struct net_device *dev, u8 rx_mode);






void net_dim_set_tx_mode(struct net_device *dev, u8 tx_mode);
# 315 "../include/linux/dim.h"
bool dim_on_top(struct dim *dim);
# 324 "../include/linux/dim.h"
void dim_turn(struct dim *dim);
# 333 "../include/linux/dim.h"
void dim_park_on_top(struct dim *dim);
# 342 "../include/linux/dim.h"
void dim_park_tired(struct dim *dim);
# 354 "../include/linux/dim.h"
bool dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
      struct dim_stats *curr_stats);
# 364 "../include/linux/dim.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
dim_update_sample(u16 event_ctr, u64 packets, u64 bytes, struct dim_sample *s)
{
 s->time = ktime_get();
 s->pkt_ctr = packets;
 s->byte_ctr = bytes;
 s->event_ctr = event_ctr;
}
# 382 "../include/linux/dim.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
dim_update_sample_with_comps(u16 event_ctr, u64 packets, u64 bytes, u64 comps,
        struct dim_sample *s)
{
 dim_update_sample(event_ctr, packets, bytes, s);
 s->comp_ctr = comps;
}
# 397 "../include/linux/dim.h"
struct dim_cq_moder net_dim_get_rx_moderation(u8 cq_period_mode, int ix);





struct dim_cq_moder net_dim_get_def_rx_moderation(u8 cq_period_mode);






struct dim_cq_moder net_dim_get_tx_moderation(u8 cq_period_mode, int ix);





struct dim_cq_moder net_dim_get_def_tx_moderation(u8 cq_period_mode);
# 427 "../include/linux/dim.h"
void net_dim(struct dim *dim, struct dim_sample end_sample);
# 448 "../include/linux/dim.h"
void rdma_dim(struct dim *dim, u64 completions);
# 39 "../include/rdma/ib_verbs.h" 2
# 1 "../include/uapi/rdma/ib_user_verbs.h" 1
# 49 "../include/uapi/rdma/ib_user_verbs.h"
enum ib_uverbs_write_cmds {
 IB_USER_VERBS_CMD_GET_CONTEXT,
 IB_USER_VERBS_CMD_QUERY_DEVICE,
 IB_USER_VERBS_CMD_QUERY_PORT,
 IB_USER_VERBS_CMD_ALLOC_PD,
 IB_USER_VERBS_CMD_DEALLOC_PD,
 IB_USER_VERBS_CMD_CREATE_AH,
 IB_USER_VERBS_CMD_MODIFY_AH,
 IB_USER_VERBS_CMD_QUERY_AH,
 IB_USER_VERBS_CMD_DESTROY_AH,
 IB_USER_VERBS_CMD_REG_MR,
 IB_USER_VERBS_CMD_REG_SMR,
 IB_USER_VERBS_CMD_REREG_MR,
 IB_USER_VERBS_CMD_QUERY_MR,
 IB_USER_VERBS_CMD_DEREG_MR,
 IB_USER_VERBS_CMD_ALLOC_MW,
 IB_USER_VERBS_CMD_BIND_MW,
 IB_USER_VERBS_CMD_DEALLOC_MW,
 IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL,
 IB_USER_VERBS_CMD_CREATE_CQ,
 IB_USER_VERBS_CMD_RESIZE_CQ,
 IB_USER_VERBS_CMD_DESTROY_CQ,
 IB_USER_VERBS_CMD_POLL_CQ,
 IB_USER_VERBS_CMD_PEEK_CQ,
 IB_USER_VERBS_CMD_REQ_NOTIFY_CQ,
 IB_USER_VERBS_CMD_CREATE_QP,
 IB_USER_VERBS_CMD_QUERY_QP,
 IB_USER_VERBS_CMD_MODIFY_QP,
 IB_USER_VERBS_CMD_DESTROY_QP,
 IB_USER_VERBS_CMD_POST_SEND,
 IB_USER_VERBS_CMD_POST_RECV,
 IB_USER_VERBS_CMD_ATTACH_MCAST,
 IB_USER_VERBS_CMD_DETACH_MCAST,
 IB_USER_VERBS_CMD_CREATE_SRQ,
 IB_USER_VERBS_CMD_MODIFY_SRQ,
 IB_USER_VERBS_CMD_QUERY_SRQ,
 IB_USER_VERBS_CMD_DESTROY_SRQ,
 IB_USER_VERBS_CMD_POST_SRQ_RECV,
 IB_USER_VERBS_CMD_OPEN_XRCD,
 IB_USER_VERBS_CMD_CLOSE_XRCD,
 IB_USER_VERBS_CMD_CREATE_XSRQ,
 IB_USER_VERBS_CMD_OPEN_QP,
};

enum {
 IB_USER_VERBS_EX_CMD_QUERY_DEVICE = IB_USER_VERBS_CMD_QUERY_DEVICE,
 IB_USER_VERBS_EX_CMD_CREATE_CQ = IB_USER_VERBS_CMD_CREATE_CQ,
 IB_USER_VERBS_EX_CMD_CREATE_QP = IB_USER_VERBS_CMD_CREATE_QP,
 IB_USER_VERBS_EX_CMD_MODIFY_QP = IB_USER_VERBS_CMD_MODIFY_QP,
 IB_USER_VERBS_EX_CMD_CREATE_FLOW = 50,
 IB_USER_VERBS_EX_CMD_DESTROY_FLOW,
 IB_USER_VERBS_EX_CMD_CREATE_WQ,
 IB_USER_VERBS_EX_CMD_MODIFY_WQ,
 IB_USER_VERBS_EX_CMD_DESTROY_WQ,
 IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL,
 IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL,
 IB_USER_VERBS_EX_CMD_MODIFY_CQ
};


enum ib_placement_type {
 IB_FLUSH_GLOBAL = 1U << 0,
 IB_FLUSH_PERSISTENT = 1U << 1,
};


enum ib_selectivity_level {
 IB_FLUSH_RANGE = 0,
 IB_FLUSH_MR,
};
# 131 "../include/uapi/rdma/ib_user_verbs.h"
struct ib_uverbs_async_event_desc {
 __u64 __attribute__((aligned(8))) element;
 __u32 event_type;
 __u32 reserved;
};

struct ib_uverbs_comp_event_desc {
 __u64 __attribute__((aligned(8))) cq_handle;
};

struct ib_uverbs_cq_moderation_caps {
 __u16 max_cq_moderation_count;
 __u16 max_cq_moderation_period;
 __u32 reserved;
};
# 158 "../include/uapi/rdma/ib_user_verbs.h"
struct ib_uverbs_cmd_hdr {
 __u32 command;
 __u16 in_words;
 __u16 out_words;
};

struct ib_uverbs_ex_cmd_hdr {
 __u64 __attribute__((aligned(8))) response;
 __u16 provider_in_words;
 __u16 provider_out_words;
 __u32 cmd_hdr_reserved;
};

struct ib_uverbs_get_context {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_get_context_resp {
 __u32 async_fd;
 __u32 num_comp_vectors;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_device {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_device_resp {
 __u64 __attribute__((aligned(8))) fw_ver;
 __be64 node_guid;
 __be64 sys_image_guid;
 __u64 __attribute__((aligned(8))) max_mr_size;
 __u64 __attribute__((aligned(8))) page_size_cap;
 __u32 vendor_id;
 __u32 vendor_part_id;
 __u32 hw_ver;
 __u32 max_qp;
 __u32 max_qp_wr;
 __u32 device_cap_flags;
 __u32 max_sge;
 __u32 max_sge_rd;
 __u32 max_cq;
 __u32 max_cqe;
 __u32 max_mr;
 __u32 max_pd;
 __u32 max_qp_rd_atom;
 __u32 max_ee_rd_atom;
 __u32 max_res_rd_atom;
 __u32 max_qp_init_rd_atom;
 __u32 max_ee_init_rd_atom;
 __u32 atomic_cap;
 __u32 max_ee;
 __u32 max_rdd;
 __u32 max_mw;
 __u32 max_raw_ipv6_qp;
 __u32 max_raw_ethy_qp;
 __u32 max_mcast_grp;
 __u32 max_mcast_qp_attach;
 __u32 max_total_mcast_qp_attach;
 __u32 max_ah;
 __u32 max_fmr;
 __u32 max_map_per_fmr;
 __u32 max_srq;
 __u32 max_srq_wr;
 __u32 max_srq_sge;
 __u16 max_pkeys;
 __u8 local_ca_ack_delay;
 __u8 phys_port_cnt;
 __u8 reserved[4];
};

struct ib_uverbs_ex_query_device {
 __u32 comp_mask;
 __u32 reserved;
};

struct ib_uverbs_odp_caps {
 __u64 __attribute__((aligned(8))) general_caps;
 struct {
  __u32 rc_odp_caps;
  __u32 uc_odp_caps;
  __u32 ud_odp_caps;
 } per_transport_caps;
 __u32 reserved;
};

struct ib_uverbs_rss_caps {




 __u32 supported_qpts;
 __u32 max_rwq_indirection_tables;
 __u32 max_rwq_indirection_table_size;
 __u32 reserved;
};

struct ib_uverbs_tm_caps {

 __u32 max_rndv_hdr_size;

 __u32 max_num_tags;

 __u32 flags;

 __u32 max_ops;

 __u32 max_sge;
 __u32 reserved;
};

struct ib_uverbs_ex_query_device_resp {
 struct ib_uverbs_query_device_resp base;
 __u32 comp_mask;
 __u32 response_length;
 struct ib_uverbs_odp_caps odp_caps;
 __u64 __attribute__((aligned(8))) timestamp_mask;
 __u64 __attribute__((aligned(8))) hca_core_clock;
 __u64 __attribute__((aligned(8))) device_cap_flags_ex;
 struct ib_uverbs_rss_caps rss_caps;
 __u32 max_wq_type_rq;
 __u32 raw_packet_caps;
 struct ib_uverbs_tm_caps tm_caps;
 struct ib_uverbs_cq_moderation_caps cq_moderation_caps;
 __u64 __attribute__((aligned(8))) max_dm_size;
 __u32 xrc_odp_caps;
 __u32 reserved;
};

struct ib_uverbs_query_port {
 __u64 __attribute__((aligned(8))) response;
 __u8 port_num;
 __u8 reserved[7];
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_port_resp {
 __u32 port_cap_flags;
 __u32 max_msg_sz;
 __u32 bad_pkey_cntr;
 __u32 qkey_viol_cntr;
 __u32 gid_tbl_len;
 __u16 pkey_tbl_len;
 __u16 lid;
 __u16 sm_lid;
 __u8 state;
 __u8 max_mtu;
 __u8 active_mtu;
 __u8 lmc;
 __u8 max_vl_num;
 __u8 sm_sl;
 __u8 subnet_timeout;
 __u8 init_type_reply;
 __u8 active_width;
 __u8 active_speed;
 __u8 phys_state;
 __u8 link_layer;
 __u8 flags;
 __u8 reserved;
};

struct ib_uverbs_alloc_pd {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_alloc_pd_resp {
 __u32 pd_handle;
 __u32 driver_data[];
};

struct ib_uverbs_dealloc_pd {
 __u32 pd_handle;
};

struct ib_uverbs_open_xrcd {
 __u64 __attribute__((aligned(8))) response;
 __u32 fd;
 __u32 oflags;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_open_xrcd_resp {
 __u32 xrcd_handle;
 __u32 driver_data[];
};

struct ib_uverbs_close_xrcd {
 __u32 xrcd_handle;
};

struct ib_uverbs_reg_mr {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) start;
 __u64 __attribute__((aligned(8))) length;
 __u64 __attribute__((aligned(8))) hca_va;
 __u32 pd_handle;
 __u32 access_flags;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_reg_mr_resp {
 __u32 mr_handle;
 __u32 lkey;
 __u32 rkey;
 __u32 driver_data[];
};

struct ib_uverbs_rereg_mr {
 __u64 __attribute__((aligned(8))) response;
 __u32 mr_handle;
 __u32 flags;
 __u64 __attribute__((aligned(8))) start;
 __u64 __attribute__((aligned(8))) length;
 __u64 __attribute__((aligned(8))) hca_va;
 __u32 pd_handle;
 __u32 access_flags;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_rereg_mr_resp {
 __u32 lkey;
 __u32 rkey;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_dereg_mr {
 __u32 mr_handle;
};

struct ib_uverbs_alloc_mw {
 __u64 __attribute__((aligned(8))) response;
 __u32 pd_handle;
 __u8 mw_type;
 __u8 reserved[3];
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_alloc_mw_resp {
 __u32 mw_handle;
 __u32 rkey;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_dealloc_mw {
 __u32 mw_handle;
};

struct ib_uverbs_create_comp_channel {
 __u64 __attribute__((aligned(8))) response;
};

struct ib_uverbs_create_comp_channel_resp {
 __u32 fd;
};

struct ib_uverbs_create_cq {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 cqe;
 __u32 comp_vector;
 __s32 comp_channel;
 __u32 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

enum ib_uverbs_ex_create_cq_flags {
 IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION = 1 << 0,
 IB_UVERBS_CQ_FLAGS_IGNORE_OVERRUN = 1 << 1,
};

struct ib_uverbs_ex_create_cq {
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 cqe;
 __u32 comp_vector;
 __s32 comp_channel;
 __u32 comp_mask;
 __u32 flags;
 __u32 reserved;
};

struct ib_uverbs_create_cq_resp {
 __u32 cq_handle;
 __u32 cqe;
 __u64 __attribute__((aligned(8))) driver_data[0];
};

struct ib_uverbs_ex_create_cq_resp {
 struct ib_uverbs_create_cq_resp base;
 __u32 comp_mask;
 __u32 response_length;
};

struct ib_uverbs_resize_cq {
 __u64 __attribute__((aligned(8))) response;
 __u32 cq_handle;
 __u32 cqe;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_resize_cq_resp {
 __u32 cqe;
 __u32 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_poll_cq {
 __u64 __attribute__((aligned(8))) response;
 __u32 cq_handle;
 __u32 ne;
};

enum ib_uverbs_wc_opcode {
 IB_UVERBS_WC_SEND = 0,
 IB_UVERBS_WC_RDMA_WRITE = 1,
 IB_UVERBS_WC_RDMA_READ = 2,
 IB_UVERBS_WC_COMP_SWAP = 3,
 IB_UVERBS_WC_FETCH_ADD = 4,
 IB_UVERBS_WC_BIND_MW = 5,
 IB_UVERBS_WC_LOCAL_INV = 6,
 IB_UVERBS_WC_TSO = 7,
 IB_UVERBS_WC_FLUSH = 8,
 IB_UVERBS_WC_ATOMIC_WRITE = 9,
};

struct ib_uverbs_wc {
 __u64 __attribute__((aligned(8))) wr_id;
 __u32 status;
 __u32 opcode;
 __u32 vendor_err;
 __u32 byte_len;
 union {
  __be32 imm_data;
  __u32 invalidate_rkey;
 } ex;
 __u32 qp_num;
 __u32 src_qp;
 __u32 wc_flags;
 __u16 pkey_index;
 __u16 slid;
 __u8 sl;
 __u8 dlid_path_bits;
 __u8 port_num;
 __u8 reserved;
};

struct ib_uverbs_poll_cq_resp {
 __u32 count;
 __u32 reserved;
 struct ib_uverbs_wc wc[];
};

struct ib_uverbs_req_notify_cq {
 __u32 cq_handle;
 __u32 solicited_only;
};

struct ib_uverbs_destroy_cq {
 __u64 __attribute__((aligned(8))) response;
 __u32 cq_handle;
 __u32 reserved;
};

struct ib_uverbs_destroy_cq_resp {
 __u32 comp_events_reported;
 __u32 async_events_reported;
};

struct ib_uverbs_global_route {
 __u8 dgid[16];
 __u32 flow_label;
 __u8 sgid_index;
 __u8 hop_limit;
 __u8 traffic_class;
 __u8 reserved;
};

struct ib_uverbs_ah_attr {
 struct ib_uverbs_global_route grh;
 __u16 dlid;
 __u8 sl;
 __u8 src_path_bits;
 __u8 static_rate;
 __u8 is_global;
 __u8 port_num;
 __u8 reserved;
};

struct ib_uverbs_qp_attr {
 __u32 qp_attr_mask;
 __u32 qp_state;
 __u32 cur_qp_state;
 __u32 path_mtu;
 __u32 path_mig_state;
 __u32 qkey;
 __u32 rq_psn;
 __u32 sq_psn;
 __u32 dest_qp_num;
 __u32 qp_access_flags;

 struct ib_uverbs_ah_attr ah_attr;
 struct ib_uverbs_ah_attr alt_ah_attr;


 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;

 __u16 pkey_index;
 __u16 alt_pkey_index;
 __u8 en_sqd_async_notify;
 __u8 sq_draining;
 __u8 max_rd_atomic;
 __u8 max_dest_rd_atomic;
 __u8 min_rnr_timer;
 __u8 port_num;
 __u8 timeout;
 __u8 retry_cnt;
 __u8 rnr_retry;
 __u8 alt_port_num;
 __u8 alt_timeout;
 __u8 reserved[5];
};

struct ib_uverbs_create_qp {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 send_cq_handle;
 __u32 recv_cq_handle;
 __u32 srq_handle;
 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;
 __u8 sq_sig_all;
 __u8 qp_type;
 __u8 is_srq;
 __u8 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

enum ib_uverbs_create_qp_mask {
 IB_UVERBS_CREATE_QP_MASK_IND_TABLE = 1UL << 0,
};

enum {
 IB_UVERBS_CREATE_QP_SUP_COMP_MASK = IB_UVERBS_CREATE_QP_MASK_IND_TABLE,
};

struct ib_uverbs_ex_create_qp {
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 send_cq_handle;
 __u32 recv_cq_handle;
 __u32 srq_handle;
 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;
 __u8 sq_sig_all;
 __u8 qp_type;
 __u8 is_srq;
 __u8 reserved;
 __u32 comp_mask;
 __u32 create_flags;
 __u32 rwq_ind_tbl_handle;
 __u32 source_qpn;
};

struct ib_uverbs_open_qp {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 qpn;
 __u8 qp_type;
 __u8 reserved[7];
 __u64 __attribute__((aligned(8))) driver_data[];
};


struct ib_uverbs_create_qp_resp {
 __u32 qp_handle;
 __u32 qpn;
 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;
 __u32 reserved;
 __u32 driver_data[0];
};

struct ib_uverbs_ex_create_qp_resp {
 struct ib_uverbs_create_qp_resp base;
 __u32 comp_mask;
 __u32 response_length;
};





struct ib_uverbs_qp_dest {
 __u8 dgid[16];
 __u32 flow_label;
 __u16 dlid;
 __u16 reserved;
 __u8 sgid_index;
 __u8 hop_limit;
 __u8 traffic_class;
 __u8 sl;
 __u8 src_path_bits;
 __u8 static_rate;
 __u8 is_global;
 __u8 port_num;
};

struct ib_uverbs_query_qp {
 __u64 __attribute__((aligned(8))) response;
 __u32 qp_handle;
 __u32 attr_mask;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_qp_resp {
 struct ib_uverbs_qp_dest dest;
 struct ib_uverbs_qp_dest alt_dest;
 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;
 __u32 qkey;
 __u32 rq_psn;
 __u32 sq_psn;
 __u32 dest_qp_num;
 __u32 qp_access_flags;
 __u16 pkey_index;
 __u16 alt_pkey_index;
 __u8 qp_state;
 __u8 cur_qp_state;
 __u8 path_mtu;
 __u8 path_mig_state;
 __u8 sq_draining;
 __u8 max_rd_atomic;
 __u8 max_dest_rd_atomic;
 __u8 min_rnr_timer;
 __u8 port_num;
 __u8 timeout;
 __u8 retry_cnt;
 __u8 rnr_retry;
 __u8 alt_port_num;
 __u8 alt_timeout;
 __u8 sq_sig_all;
 __u8 reserved[5];
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_modify_qp {
 struct ib_uverbs_qp_dest dest;
 struct ib_uverbs_qp_dest alt_dest;
 __u32 qp_handle;
 __u32 attr_mask;
 __u32 qkey;
 __u32 rq_psn;
 __u32 sq_psn;
 __u32 dest_qp_num;
 __u32 qp_access_flags;
 __u16 pkey_index;
 __u16 alt_pkey_index;
 __u8 qp_state;
 __u8 cur_qp_state;
 __u8 path_mtu;
 __u8 path_mig_state;
 __u8 en_sqd_async_notify;
 __u8 max_rd_atomic;
 __u8 max_dest_rd_atomic;
 __u8 min_rnr_timer;
 __u8 port_num;
 __u8 timeout;
 __u8 retry_cnt;
 __u8 rnr_retry;
 __u8 alt_port_num;
 __u8 alt_timeout;
 __u8 reserved[2];
 __u64 __attribute__((aligned(8))) driver_data[0];
};

struct ib_uverbs_ex_modify_qp {
 struct ib_uverbs_modify_qp base;
 __u32 rate_limit;
 __u32 reserved;
};

struct ib_uverbs_ex_modify_qp_resp {
 __u32 comp_mask;
 __u32 response_length;
};

struct ib_uverbs_destroy_qp {
 __u64 __attribute__((aligned(8))) response;
 __u32 qp_handle;
 __u32 reserved;
};

struct ib_uverbs_destroy_qp_resp {
 __u32 events_reported;
};







struct ib_uverbs_sge {
 __u64 __attribute__((aligned(8))) addr;
 __u32 length;
 __u32 lkey;
};

enum ib_uverbs_wr_opcode {
 IB_UVERBS_WR_RDMA_WRITE = 0,
 IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1,
 IB_UVERBS_WR_SEND = 2,
 IB_UVERBS_WR_SEND_WITH_IMM = 3,
 IB_UVERBS_WR_RDMA_READ = 4,
 IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5,
 IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6,
 IB_UVERBS_WR_LOCAL_INV = 7,
 IB_UVERBS_WR_BIND_MW = 8,
 IB_UVERBS_WR_SEND_WITH_INV = 9,
 IB_UVERBS_WR_TSO = 10,
 IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
 IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
 IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
 IB_UVERBS_WR_FLUSH = 14,
 IB_UVERBS_WR_ATOMIC_WRITE = 15,

};

struct ib_uverbs_send_wr {
 __u64 __attribute__((aligned(8))) wr_id;
 __u32 num_sge;
 __u32 opcode;
 __u32 send_flags;
 union {
  __be32 imm_data;
  __u32 invalidate_rkey;
 } ex;
 union {
  struct {
   __u64 __attribute__((aligned(8))) remote_addr;
   __u32 rkey;
   __u32 reserved;
  } rdma;
  struct {
   __u64 __attribute__((aligned(8))) remote_addr;
   __u64 __attribute__((aligned(8))) compare_add;
   __u64 __attribute__((aligned(8))) swap;
   __u32 rkey;
   __u32 reserved;
  } atomic;
  struct {
   __u32 ah;
   __u32 remote_qpn;
   __u32 remote_qkey;
   __u32 reserved;
  } ud;
 } wr;
};

struct ib_uverbs_post_send {
 __u64 __attribute__((aligned(8))) response;
 __u32 qp_handle;
 __u32 wr_count;
 __u32 sge_count;
 __u32 wqe_size;
 struct ib_uverbs_send_wr send_wr[];
};

struct ib_uverbs_post_send_resp {
 __u32 bad_wr;
};

struct ib_uverbs_recv_wr {
 __u64 __attribute__((aligned(8))) wr_id;
 __u32 num_sge;
 __u32 reserved;
};

struct ib_uverbs_post_recv {
 __u64 __attribute__((aligned(8))) response;
 __u32 qp_handle;
 __u32 wr_count;
 __u32 sge_count;
 __u32 wqe_size;
 struct ib_uverbs_recv_wr recv_wr[];
};

struct ib_uverbs_post_recv_resp {
 __u32 bad_wr;
};

struct ib_uverbs_post_srq_recv {
 __u64 __attribute__((aligned(8))) response;
 __u32 srq_handle;
 __u32 wr_count;
 __u32 sge_count;
 __u32 wqe_size;
 struct ib_uverbs_recv_wr recv[];
};

struct ib_uverbs_post_srq_recv_resp {
 __u32 bad_wr;
};

struct ib_uverbs_create_ah {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 reserved;
 struct ib_uverbs_ah_attr attr;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_create_ah_resp {
 __u32 ah_handle;
 __u32 driver_data[];
};

struct ib_uverbs_destroy_ah {
 __u32 ah_handle;
};

struct ib_uverbs_attach_mcast {
 __u8 gid[16];
 __u32 qp_handle;
 __u16 mlid;
 __u16 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_detach_mcast {
 __u8 gid[16];
 __u32 qp_handle;
 __u16 mlid;
 __u16 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_flow_spec_hdr {
 __u32 type;
 __u16 size;
 __u16 reserved;

 __u64 __attribute__((aligned(8))) flow_spec_data[0];
};

struct ib_uverbs_flow_eth_filter {
 __u8 dst_mac[6];
 __u8 src_mac[6];
 __be16 ether_type;
 __be16 vlan_tag;
};

struct ib_uverbs_flow_spec_eth {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_eth_filter val;
 struct ib_uverbs_flow_eth_filter mask;
};

struct ib_uverbs_flow_ipv4_filter {
 __be32 src_ip;
 __be32 dst_ip;
 __u8 proto;
 __u8 tos;
 __u8 ttl;
 __u8 flags;
};

struct ib_uverbs_flow_spec_ipv4 {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_ipv4_filter val;
 struct ib_uverbs_flow_ipv4_filter mask;
};

struct ib_uverbs_flow_tcp_udp_filter {
 __be16 dst_port;
 __be16 src_port;
};

struct ib_uverbs_flow_spec_tcp_udp {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_tcp_udp_filter val;
 struct ib_uverbs_flow_tcp_udp_filter mask;
};

struct ib_uverbs_flow_ipv6_filter {
 __u8 src_ip[16];
 __u8 dst_ip[16];
 __be32 flow_label;
 __u8 next_hdr;
 __u8 traffic_class;
 __u8 hop_limit;
 __u8 reserved;
};

struct ib_uverbs_flow_spec_ipv6 {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_ipv6_filter val;
 struct ib_uverbs_flow_ipv6_filter mask;
};

struct ib_uverbs_flow_spec_action_tag {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 __u32 tag_id;
 __u32 reserved1;
};

struct ib_uverbs_flow_spec_action_drop {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
};

struct ib_uverbs_flow_spec_action_handle {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 __u32 handle;
 __u32 reserved1;
};

struct ib_uverbs_flow_spec_action_count {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 __u32 handle;
 __u32 reserved1;
};

struct ib_uverbs_flow_tunnel_filter {
 __be32 tunnel_id;
};

struct ib_uverbs_flow_spec_tunnel {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_tunnel_filter val;
 struct ib_uverbs_flow_tunnel_filter mask;
};

struct ib_uverbs_flow_spec_esp_filter {
 __u32 spi;
 __u32 seq;
};

struct ib_uverbs_flow_spec_esp {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_spec_esp_filter val;
 struct ib_uverbs_flow_spec_esp_filter mask;
};

struct ib_uverbs_flow_gre_filter {
# 1101 "../include/uapi/rdma/ib_user_verbs.h"
 __be16 c_ks_res0_ver;
 __be16 protocol;
 __be32 key;
};

struct ib_uverbs_flow_spec_gre {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_gre_filter val;
 struct ib_uverbs_flow_gre_filter mask;
};

struct ib_uverbs_flow_mpls_filter {
 __be32 label;
};

struct ib_uverbs_flow_spec_mpls {
 union {
  struct ib_uverbs_flow_spec_hdr hdr;
  struct {
   __u32 type;
   __u16 size;
   __u16 reserved;
  };
 };
 struct ib_uverbs_flow_mpls_filter val;
 struct ib_uverbs_flow_mpls_filter mask;
};

struct ib_uverbs_flow_attr {
 __u32 type;
 __u16 size;
 __u16 priority;
 __u8 num_of_specs;
 __u8 reserved[2];
 __u8 port;
 __u32 flags;

 struct ib_uverbs_flow_spec_hdr flow_specs[];
};

struct ib_uverbs_create_flow {
 __u32 comp_mask;
 __u32 qp_handle;
 struct ib_uverbs_flow_attr flow_attr;
};

struct ib_uverbs_create_flow_resp {
 __u32 comp_mask;
 __u32 flow_handle;
};

struct ib_uverbs_destroy_flow {
 __u32 comp_mask;
 __u32 flow_handle;
};

struct ib_uverbs_create_srq {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 max_wr;
 __u32 max_sge;
 __u32 srq_limit;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_create_xsrq {
 __u64 __attribute__((aligned(8))) response;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 srq_type;
 __u32 pd_handle;
 __u32 max_wr;
 __u32 max_sge;
 __u32 srq_limit;
 __u32 max_num_tags;
 __u32 xrcd_handle;
 __u32 cq_handle;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_create_srq_resp {
 __u32 srq_handle;
 __u32 max_wr;
 __u32 max_sge;
 __u32 srqn;
 __u32 driver_data[];
};

struct ib_uverbs_modify_srq {
 __u32 srq_handle;
 __u32 attr_mask;
 __u32 max_wr;
 __u32 srq_limit;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_srq {
 __u64 __attribute__((aligned(8))) response;
 __u32 srq_handle;
 __u32 reserved;
 __u64 __attribute__((aligned(8))) driver_data[];
};

struct ib_uverbs_query_srq_resp {
 __u32 max_wr;
 __u32 max_sge;
 __u32 srq_limit;
 __u32 reserved;
};

struct ib_uverbs_destroy_srq {
 __u64 __attribute__((aligned(8))) response;
 __u32 srq_handle;
 __u32 reserved;
};

struct ib_uverbs_destroy_srq_resp {
 __u32 events_reported;
};

struct ib_uverbs_ex_create_wq {
 __u32 comp_mask;
 __u32 wq_type;
 __u64 __attribute__((aligned(8))) user_handle;
 __u32 pd_handle;
 __u32 cq_handle;
 __u32 max_wr;
 __u32 max_sge;
 __u32 create_flags;
 __u32 reserved;
};

struct ib_uverbs_ex_create_wq_resp {
 __u32 comp_mask;
 __u32 response_length;
 __u32 wq_handle;
 __u32 max_wr;
 __u32 max_sge;
 __u32 wqn;
};

struct ib_uverbs_ex_destroy_wq {
 __u32 comp_mask;
 __u32 wq_handle;
};

struct ib_uverbs_ex_destroy_wq_resp {
 __u32 comp_mask;
 __u32 response_length;
 __u32 events_reported;
 __u32 reserved;
};

struct ib_uverbs_ex_modify_wq {
 __u32 attr_mask;
 __u32 wq_handle;
 __u32 wq_state;
 __u32 curr_wq_state;
 __u32 flags;
 __u32 flags_mask;
};



struct ib_uverbs_ex_create_rwq_ind_table {
 __u32 comp_mask;
 __u32 log_ind_tbl_size;

 __u32 wq_handles[];
};

struct ib_uverbs_ex_create_rwq_ind_table_resp {
 __u32 comp_mask;
 __u32 response_length;
 __u32 ind_tbl_handle;
 __u32 ind_tbl_num;
};

struct ib_uverbs_ex_destroy_rwq_ind_table {
 __u32 comp_mask;
 __u32 ind_tbl_handle;
};

struct ib_uverbs_cq_moderation {
 __u16 cq_count;
 __u16 cq_period;
};

struct ib_uverbs_ex_modify_cq {
 __u32 cq_handle;
 __u32 attr_mask;
 struct ib_uverbs_cq_moderation attr;
 __u32 reserved;
};
enum ib_uverbs_device_cap_flags {
 IB_UVERBS_DEVICE_RESIZE_MAX_WR = 1 << 0,
 IB_UVERBS_DEVICE_BAD_PKEY_CNTR = 1 << 1,
 IB_UVERBS_DEVICE_BAD_QKEY_CNTR = 1 << 2,
 IB_UVERBS_DEVICE_RAW_MULTI = 1 << 3,
 IB_UVERBS_DEVICE_AUTO_PATH_MIG = 1 << 4,
 IB_UVERBS_DEVICE_CHANGE_PHY_PORT = 1 << 5,
 IB_UVERBS_DEVICE_UD_AV_PORT_ENFORCE = 1 << 6,
 IB_UVERBS_DEVICE_CURR_QP_STATE_MOD = 1 << 7,
 IB_UVERBS_DEVICE_SHUTDOWN_PORT = 1 << 8,

 IB_UVERBS_DEVICE_PORT_ACTIVE_EVENT = 1 << 10,
 IB_UVERBS_DEVICE_SYS_IMAGE_GUID = 1 << 11,
 IB_UVERBS_DEVICE_RC_RNR_NAK_GEN = 1 << 12,
 IB_UVERBS_DEVICE_SRQ_RESIZE = 1 << 13,
 IB_UVERBS_DEVICE_N_NOTIFY_CQ = 1 << 14,
 IB_UVERBS_DEVICE_MEM_WINDOW = 1 << 17,
 IB_UVERBS_DEVICE_UD_IP_CSUM = 1 << 18,
 IB_UVERBS_DEVICE_XRC = 1 << 20,
 IB_UVERBS_DEVICE_MEM_MGT_EXTENSIONS = 1 << 21,
 IB_UVERBS_DEVICE_MEM_WINDOW_TYPE_2A = 1 << 23,
 IB_UVERBS_DEVICE_MEM_WINDOW_TYPE_2B = 1 << 24,
 IB_UVERBS_DEVICE_RC_IP_CSUM = 1 << 25,

 IB_UVERBS_DEVICE_RAW_IP_CSUM = 1 << 26,
 IB_UVERBS_DEVICE_MANAGED_FLOW_STEERING = 1 << 29,

 IB_UVERBS_DEVICE_RAW_SCATTER_FCS = 1ULL << 34,
 IB_UVERBS_DEVICE_PCI_WRITE_END_PADDING = 1ULL << 36,

 IB_UVERBS_DEVICE_FLUSH_GLOBAL = 1ULL << 38,
 IB_UVERBS_DEVICE_FLUSH_PERSISTENT = 1ULL << 39,

 IB_UVERBS_DEVICE_ATOMIC_WRITE = 1ULL << 40,
};

enum ib_uverbs_raw_packet_caps {
 IB_UVERBS_RAW_PACKET_CAP_CVLAN_STRIPPING = 1 << 0,
 IB_UVERBS_RAW_PACKET_CAP_SCATTER_FCS = 1 << 1,
 IB_UVERBS_RAW_PACKET_CAP_IP_CSUM = 1 << 2,
 IB_UVERBS_RAW_PACKET_CAP_DELAY_DROP = 1 << 3,
};
# 40 "../include/rdma/ib_verbs.h" 2
# 1 "../include/rdma/rdma_counter.h" 1
# 10 "../include/rdma/rdma_counter.h"
# 1 "../include/linux/pid_namespace.h" 1
# 17 "../include/linux/pid_namespace.h"
struct fs_pin;
# 26 "../include/linux/pid_namespace.h"
struct pid_namespace {
 struct idr idr;
 struct callback_head rcu;
 unsigned int pid_allocated;
 struct task_struct *child_reaper;
 struct kmem_cache *pid_cachep;
 unsigned int level;
 struct pid_namespace *parent;
 struct user_namespace *user_ns;
 struct ucounts *ucounts;
 int reboot;
 struct ns_common ns;

 int memfd_noexec_scope;

};

extern struct pid_namespace init_pid_ns;

static inline struct pid_namespace *get_pid_ns(struct pid_namespace *ns)
{
 if (ns != &init_pid_ns)
  refcount_inc(&ns->ns.count);
 return ns;
}


static inline int pidns_memfd_noexec_scope(struct pid_namespace *ns)
{
 int scope = 0;

 for (; ns; ns = ns->parent)
  scope = max(scope, READ_ONCE(ns->memfd_noexec_scope));

 return scope;
}

extern struct pid_namespace *copy_pid_ns(unsigned long flags,
 struct user_namespace *user_ns, struct pid_namespace *ns);
extern void zap_pid_ns_processes(struct pid_namespace *pid_ns);
extern int reboot_pid_ns(struct pid_namespace *pid_ns, int cmd);
extern void put_pid_ns(struct pid_namespace *ns);
# 117 "../include/linux/pid_namespace.h"
extern struct pid_namespace *task_active_pid_ns(struct task_struct *tsk);
void pidhash_init(void);
void pid_idr_init(void);

static inline bool task_is_in_init_pid_ns(struct task_struct *tsk)
{
 return task_active_pid_ns(tsk) == &init_pid_ns;
}
# 11 "../include/rdma/rdma_counter.h" 2

# 1 "../include/rdma/restrack.h" 1
# 14 "../include/rdma/restrack.h"
# 1 "../include/uapi/rdma/rdma_netlink.h" 1

enum {
 RDMA_NL_IWCM = 2,
 RDMA_NL_RSVD,
 RDMA_NL_LS,
 RDMA_NL_NLDEV,
 RDMA_NL_NUM_CLIENTS
};

enum {
 RDMA_NL_GROUP_IWPM = 2,
 RDMA_NL_GROUP_LS,
 RDMA_NL_NUM_GROUPS
};
# 32 "../include/uapi/rdma/rdma_netlink.h"
enum {
 IWPM_FLAGS_NO_PORT_MAP = (1 << 0),
};


enum {
 RDMA_NL_IWPM_REG_PID = 0,
 RDMA_NL_IWPM_ADD_MAPPING,
 RDMA_NL_IWPM_QUERY_MAPPING,
 RDMA_NL_IWPM_REMOVE_MAPPING,
 RDMA_NL_IWPM_REMOTE_INFO,
 RDMA_NL_IWPM_HANDLE_ERR,
 RDMA_NL_IWPM_MAPINFO,
 RDMA_NL_IWPM_MAPINFO_NUM,
 RDMA_NL_IWPM_HELLO,
 RDMA_NL_IWPM_NUM_OPS
};

enum {
 IWPM_NLA_REG_PID_UNSPEC = 0,
 IWPM_NLA_REG_PID_SEQ,
 IWPM_NLA_REG_IF_NAME,
 IWPM_NLA_REG_IBDEV_NAME,
 IWPM_NLA_REG_ULIB_NAME,
 IWPM_NLA_REG_PID_MAX
};

enum {
 IWPM_NLA_RREG_PID_UNSPEC = 0,
 IWPM_NLA_RREG_PID_SEQ,
 IWPM_NLA_RREG_IBDEV_NAME,
 IWPM_NLA_RREG_ULIB_NAME,
 IWPM_NLA_RREG_ULIB_VER,
 IWPM_NLA_RREG_PID_ERR,
 IWPM_NLA_RREG_PID_MAX

};

enum {
 IWPM_NLA_MANAGE_MAPPING_UNSPEC = 0,
 IWPM_NLA_MANAGE_MAPPING_SEQ,
 IWPM_NLA_MANAGE_ADDR,
 IWPM_NLA_MANAGE_FLAGS,
 IWPM_NLA_MANAGE_MAPPING_MAX
};

enum {
 IWPM_NLA_RMANAGE_MAPPING_UNSPEC = 0,
 IWPM_NLA_RMANAGE_MAPPING_SEQ,
 IWPM_NLA_RMANAGE_ADDR,
 IWPM_NLA_RMANAGE_MAPPED_LOC_ADDR,

 IWPM_NLA_MANAGE_MAPPED_LOC_ADDR = IWPM_NLA_RMANAGE_MAPPED_LOC_ADDR,
 IWPM_NLA_RMANAGE_MAPPING_ERR,
 IWPM_NLA_RMANAGE_MAPPING_MAX
};


enum {
 IWPM_NLA_QUERY_MAPPING_UNSPEC = 0,
 IWPM_NLA_QUERY_MAPPING_SEQ,
 IWPM_NLA_QUERY_LOCAL_ADDR,
 IWPM_NLA_QUERY_REMOTE_ADDR,
 IWPM_NLA_QUERY_FLAGS,
 IWPM_NLA_QUERY_MAPPING_MAX,
};

enum {
 IWPM_NLA_RQUERY_MAPPING_UNSPEC = 0,
 IWPM_NLA_RQUERY_MAPPING_SEQ,
 IWPM_NLA_RQUERY_LOCAL_ADDR,
 IWPM_NLA_RQUERY_REMOTE_ADDR,
 IWPM_NLA_RQUERY_MAPPED_LOC_ADDR,
 IWPM_NLA_RQUERY_MAPPED_REM_ADDR,
 IWPM_NLA_RQUERY_MAPPING_ERR,
 IWPM_NLA_RQUERY_MAPPING_MAX
};

enum {
 IWPM_NLA_MAPINFO_REQ_UNSPEC = 0,
 IWPM_NLA_MAPINFO_ULIB_NAME,
 IWPM_NLA_MAPINFO_ULIB_VER,
 IWPM_NLA_MAPINFO_REQ_MAX
};

enum {
 IWPM_NLA_MAPINFO_UNSPEC = 0,
 IWPM_NLA_MAPINFO_LOCAL_ADDR,
 IWPM_NLA_MAPINFO_MAPPED_ADDR,
 IWPM_NLA_MAPINFO_FLAGS,
 IWPM_NLA_MAPINFO_MAX
};

enum {
 IWPM_NLA_MAPINFO_NUM_UNSPEC = 0,
 IWPM_NLA_MAPINFO_SEQ,
 IWPM_NLA_MAPINFO_SEND_NUM,
 IWPM_NLA_MAPINFO_ACK_NUM,
 IWPM_NLA_MAPINFO_NUM_MAX
};

enum {
 IWPM_NLA_ERR_UNSPEC = 0,
 IWPM_NLA_ERR_SEQ,
 IWPM_NLA_ERR_CODE,
 IWPM_NLA_ERR_MAX
};

enum {
 IWPM_NLA_HELLO_UNSPEC = 0,
 IWPM_NLA_HELLO_ABI_VERSION,
 IWPM_NLA_HELLO_MAX
};


enum {

 RDMA_NODE_IB_CA = 1,
 RDMA_NODE_IB_SWITCH,
 RDMA_NODE_IB_ROUTER,
 RDMA_NODE_RNIC,
 RDMA_NODE_USNIC,
 RDMA_NODE_USNIC_UDP,
 RDMA_NODE_UNSPECIFIED,
};

enum {
 RDMA_NL_LS_OP_RESOLVE = 0,
 RDMA_NL_LS_OP_SET_TIMEOUT,
 RDMA_NL_LS_OP_IP_RESOLVE,
 RDMA_NL_LS_NUM_OPS
};
# 194 "../include/uapi/rdma/rdma_netlink.h"
enum {
 LS_RESOLVE_PATH_USE_ALL = 0,
 LS_RESOLVE_PATH_USE_UNIDIRECTIONAL,
 LS_RESOLVE_PATH_USE_GMP,
 LS_RESOLVE_PATH_USE_MAX
};



struct rdma_ls_resolve_header {
 __u8 device_name[64];
 __u8 port_num;
 __u8 path_use;
};

struct rdma_ls_ip_resolve_header {
 __u32 ifindex;
};
# 233 "../include/uapi/rdma/rdma_netlink.h"
enum {
 LS_NLA_TYPE_UNSPEC = 0,
 LS_NLA_TYPE_PATH_RECORD,
 LS_NLA_TYPE_TIMEOUT,
 LS_NLA_TYPE_SERVICE_ID,
 LS_NLA_TYPE_DGID,
 LS_NLA_TYPE_SGID,
 LS_NLA_TYPE_TCLASS,
 LS_NLA_TYPE_PKEY,
 LS_NLA_TYPE_QOS_CLASS,
 LS_NLA_TYPE_IPV4,
 LS_NLA_TYPE_IPV6,
 LS_NLA_TYPE_MAX
};


struct rdma_nla_ls_gid {
 __u8 gid[16];
};

enum rdma_nldev_command {
 RDMA_NLDEV_CMD_UNSPEC,

 RDMA_NLDEV_CMD_GET,
 RDMA_NLDEV_CMD_SET,

 RDMA_NLDEV_CMD_NEWLINK,

 RDMA_NLDEV_CMD_DELLINK,

 RDMA_NLDEV_CMD_PORT_GET,

 RDMA_NLDEV_CMD_SYS_GET,
 RDMA_NLDEV_CMD_SYS_SET,



 RDMA_NLDEV_CMD_RES_GET = 9,

 RDMA_NLDEV_CMD_RES_QP_GET,

 RDMA_NLDEV_CMD_RES_CM_ID_GET,

 RDMA_NLDEV_CMD_RES_CQ_GET,

 RDMA_NLDEV_CMD_RES_MR_GET,

 RDMA_NLDEV_CMD_RES_PD_GET,

 RDMA_NLDEV_CMD_GET_CHARDEV,

 RDMA_NLDEV_CMD_STAT_SET,

 RDMA_NLDEV_CMD_STAT_GET,

 RDMA_NLDEV_CMD_STAT_DEL,

 RDMA_NLDEV_CMD_RES_QP_GET_RAW,

 RDMA_NLDEV_CMD_RES_CQ_GET_RAW,

 RDMA_NLDEV_CMD_RES_MR_GET_RAW,

 RDMA_NLDEV_CMD_RES_CTX_GET,

 RDMA_NLDEV_CMD_RES_SRQ_GET,

 RDMA_NLDEV_CMD_STAT_GET_STATUS,

 RDMA_NLDEV_CMD_RES_SRQ_GET_RAW,

 RDMA_NLDEV_CMD_NEWDEV,

 RDMA_NLDEV_CMD_DELDEV,

 RDMA_NLDEV_NUM_OPS
};

enum rdma_nldev_print_type {
 RDMA_NLDEV_PRINT_TYPE_UNSPEC,
 RDMA_NLDEV_PRINT_TYPE_HEX,
};

enum rdma_nldev_attr {

 RDMA_NLDEV_ATTR_UNSPEC,


 RDMA_NLDEV_ATTR_PAD = RDMA_NLDEV_ATTR_UNSPEC,


 RDMA_NLDEV_ATTR_DEV_INDEX,

 RDMA_NLDEV_ATTR_DEV_NAME,
# 336 "../include/uapi/rdma/rdma_netlink.h"
 RDMA_NLDEV_ATTR_PORT_INDEX,







 RDMA_NLDEV_ATTR_CAP_FLAGS,




 RDMA_NLDEV_ATTR_FW_VERSION,




 RDMA_NLDEV_ATTR_NODE_GUID,






 RDMA_NLDEV_ATTR_SYS_IMAGE_GUID,




 RDMA_NLDEV_ATTR_SUBNET_PREFIX,






 RDMA_NLDEV_ATTR_LID,
 RDMA_NLDEV_ATTR_SM_LID,




 RDMA_NLDEV_ATTR_LMC,

 RDMA_NLDEV_ATTR_PORT_STATE,
 RDMA_NLDEV_ATTR_PORT_PHYS_STATE,

 RDMA_NLDEV_ATTR_DEV_NODE_TYPE,

 RDMA_NLDEV_ATTR_RES_SUMMARY,
 RDMA_NLDEV_ATTR_RES_SUMMARY_ENTRY,
 RDMA_NLDEV_ATTR_RES_SUMMARY_ENTRY_NAME,
 RDMA_NLDEV_ATTR_RES_SUMMARY_ENTRY_CURR,

 RDMA_NLDEV_ATTR_RES_QP,
 RDMA_NLDEV_ATTR_RES_QP_ENTRY,



 RDMA_NLDEV_ATTR_RES_LQPN,




 RDMA_NLDEV_ATTR_RES_RQPN,




 RDMA_NLDEV_ATTR_RES_RQ_PSN,



 RDMA_NLDEV_ATTR_RES_SQ_PSN,
 RDMA_NLDEV_ATTR_RES_PATH_MIG_STATE,




 RDMA_NLDEV_ATTR_RES_TYPE,
 RDMA_NLDEV_ATTR_RES_STATE,




 RDMA_NLDEV_ATTR_RES_PID,






 RDMA_NLDEV_ATTR_RES_KERN_NAME,

 RDMA_NLDEV_ATTR_RES_CM_ID,
 RDMA_NLDEV_ATTR_RES_CM_ID_ENTRY,



 RDMA_NLDEV_ATTR_RES_PS,



 RDMA_NLDEV_ATTR_RES_SRC_ADDR,
 RDMA_NLDEV_ATTR_RES_DST_ADDR,

 RDMA_NLDEV_ATTR_RES_CQ,
 RDMA_NLDEV_ATTR_RES_CQ_ENTRY,
 RDMA_NLDEV_ATTR_RES_CQE,
 RDMA_NLDEV_ATTR_RES_USECNT,
 RDMA_NLDEV_ATTR_RES_POLL_CTX,

 RDMA_NLDEV_ATTR_RES_MR,
 RDMA_NLDEV_ATTR_RES_MR_ENTRY,
 RDMA_NLDEV_ATTR_RES_RKEY,
 RDMA_NLDEV_ATTR_RES_LKEY,
 RDMA_NLDEV_ATTR_RES_IOVA,
 RDMA_NLDEV_ATTR_RES_MRLEN,

 RDMA_NLDEV_ATTR_RES_PD,
 RDMA_NLDEV_ATTR_RES_PD_ENTRY,
 RDMA_NLDEV_ATTR_RES_LOCAL_DMA_LKEY,
 RDMA_NLDEV_ATTR_RES_UNSAFE_GLOBAL_RKEY,
# 470 "../include/uapi/rdma/rdma_netlink.h"
 RDMA_NLDEV_ATTR_NDEV_INDEX,
 RDMA_NLDEV_ATTR_NDEV_NAME,



 RDMA_NLDEV_ATTR_DRIVER,
 RDMA_NLDEV_ATTR_DRIVER_ENTRY,
 RDMA_NLDEV_ATTR_DRIVER_STRING,



 RDMA_NLDEV_ATTR_DRIVER_PRINT_TYPE,
 RDMA_NLDEV_ATTR_DRIVER_S32,
 RDMA_NLDEV_ATTR_DRIVER_U32,
 RDMA_NLDEV_ATTR_DRIVER_S64,
 RDMA_NLDEV_ATTR_DRIVER_U64,





 RDMA_NLDEV_ATTR_RES_PDN,
 RDMA_NLDEV_ATTR_RES_CQN,
 RDMA_NLDEV_ATTR_RES_MRN,
 RDMA_NLDEV_ATTR_RES_CM_IDN,
 RDMA_NLDEV_ATTR_RES_CTXN,



 RDMA_NLDEV_ATTR_LINK_TYPE,





 RDMA_NLDEV_SYS_ATTR_NETNS_MODE,



 RDMA_NLDEV_ATTR_DEV_PROTOCOL,




 RDMA_NLDEV_NET_NS_FD,







 RDMA_NLDEV_ATTR_CHARDEV_TYPE,
 RDMA_NLDEV_ATTR_CHARDEV_NAME,
 RDMA_NLDEV_ATTR_CHARDEV_ABI,
 RDMA_NLDEV_ATTR_CHARDEV,
 RDMA_NLDEV_ATTR_UVERBS_DRIVER_ID,



 RDMA_NLDEV_ATTR_STAT_MODE,
 RDMA_NLDEV_ATTR_STAT_RES,
 RDMA_NLDEV_ATTR_STAT_AUTO_MODE_MASK,
 RDMA_NLDEV_ATTR_STAT_COUNTER,
 RDMA_NLDEV_ATTR_STAT_COUNTER_ENTRY,
 RDMA_NLDEV_ATTR_STAT_COUNTER_ID,
 RDMA_NLDEV_ATTR_STAT_HWCOUNTERS,
 RDMA_NLDEV_ATTR_STAT_HWCOUNTER_ENTRY,
 RDMA_NLDEV_ATTR_STAT_HWCOUNTER_ENTRY_NAME,
 RDMA_NLDEV_ATTR_STAT_HWCOUNTER_ENTRY_VALUE,




 RDMA_NLDEV_ATTR_DEV_DIM,

 RDMA_NLDEV_ATTR_RES_RAW,

 RDMA_NLDEV_ATTR_RES_CTX,
 RDMA_NLDEV_ATTR_RES_CTX_ENTRY,

 RDMA_NLDEV_ATTR_RES_SRQ,
 RDMA_NLDEV_ATTR_RES_SRQ_ENTRY,
 RDMA_NLDEV_ATTR_RES_SRQN,

 RDMA_NLDEV_ATTR_MIN_RANGE,
 RDMA_NLDEV_ATTR_MAX_RANGE,

 RDMA_NLDEV_SYS_ATTR_COPY_ON_FORK,

 RDMA_NLDEV_ATTR_STAT_HWCOUNTER_INDEX,
 RDMA_NLDEV_ATTR_STAT_HWCOUNTER_DYNAMIC,

 RDMA_NLDEV_SYS_ATTR_PRIVILEGED_QKEY_MODE,

 RDMA_NLDEV_ATTR_DRIVER_DETAILS,



 RDMA_NLDEV_ATTR_RES_SUBTYPE,

 RDMA_NLDEV_ATTR_DEV_TYPE,

 RDMA_NLDEV_ATTR_PARENT_NAME,

 RDMA_NLDEV_ATTR_NAME_ASSIGN_TYPE,




 RDMA_NLDEV_ATTR_MAX
};




enum rdma_nl_counter_mode {
 RDMA_COUNTER_MODE_NONE,





 RDMA_COUNTER_MODE_AUTO,





 RDMA_COUNTER_MODE_MANUAL,




 RDMA_COUNTER_MODE_MAX,
};





enum rdma_nl_counter_mask {
 RDMA_COUNTER_MASK_QP_TYPE = 1,
 RDMA_COUNTER_MASK_PID = 1 << 1,
};


enum rdma_nl_dev_type {
 RDMA_DEVICE_TYPE_SMI = 1,
};


enum rdma_nl_name_assign_type {
 RDMA_NAME_ASSIGN_TYPE_UNKNOWN = 0,
 RDMA_NAME_ASSIGN_TYPE_USER = 1,
};
# 15 "../include/rdma/restrack.h" 2





struct ib_device;
struct sk_buff;




enum rdma_restrack_type {



 RDMA_RESTRACK_PD,



 RDMA_RESTRACK_CQ,



 RDMA_RESTRACK_QP,



 RDMA_RESTRACK_CM_ID,



 RDMA_RESTRACK_MR,



 RDMA_RESTRACK_CTX,



 RDMA_RESTRACK_COUNTER,



 RDMA_RESTRACK_SRQ,



 RDMA_RESTRACK_MAX
};




struct rdma_restrack_entry {
# 77 "../include/rdma/restrack.h"
 bool valid;







 u8 no_track : 1;



 struct kref kref;



 struct completion comp;
# 103 "../include/rdma/restrack.h"
 struct task_struct *task;



 const char *kern_name;



 enum rdma_restrack_type type;



 bool user;



 u32 id;
};

int rdma_restrack_count(struct ib_device *dev, enum rdma_restrack_type type,
   bool show_details);




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_is_kernel_res(const struct rdma_restrack_entry *res)
{
 return !res->user;
}





int __attribute__((__warn_unused_result__)) rdma_restrack_get(struct rdma_restrack_entry *res);





int rdma_restrack_put(struct rdma_restrack_entry *res);





int rdma_nl_put_driver_u32(struct sk_buff *msg, const char *name, u32 value);
int rdma_nl_put_driver_u32_hex(struct sk_buff *msg, const char *name,
          u32 value);
int rdma_nl_put_driver_u64(struct sk_buff *msg, const char *name, u64 value);
int rdma_nl_put_driver_u64_hex(struct sk_buff *msg, const char *name,
          u64 value);
int rdma_nl_put_driver_string(struct sk_buff *msg, const char *name,
         const char *str);
int rdma_nl_stat_hwcounter_entry(struct sk_buff *msg, const char *name,
     u64 value);

struct rdma_restrack_entry *rdma_restrack_get_byid(struct ib_device *dev,
         enum rdma_restrack_type type,
         u32 id);
# 171 "../include/rdma/restrack.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_restrack_no_track(struct rdma_restrack_entry *res)
{
 res->no_track = true;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_restrack_is_tracked(struct rdma_restrack_entry *res)
{
 return !res->no_track;
}
# 13 "../include/rdma/rdma_counter.h" 2
# 1 "../include/rdma/rdma_netlink.h" 1








enum {
 RDMA_NLDEV_ATTR_EMPTY_STRING = 1,
 RDMA_NLDEV_ATTR_ENTRY_STRLEN = 16,
 RDMA_NLDEV_ATTR_CHARDEV_TYPE_SIZE = 32,
};

struct rdma_nl_cbs {
 int (*doit)(struct sk_buff *skb, struct nlmsghdr *nlh,
      struct netlink_ext_ack *extack);
 int (*dump)(struct sk_buff *skb, struct netlink_callback *nlcb);
 u8 flags;
};

enum rdma_nl_flags {

 RDMA_NL_ADMIN_PERM = 1 << 0,
};
# 44 "../include/rdma/rdma_netlink.h"
void rdma_nl_register(unsigned int index,
        const struct rdma_nl_cbs cb_table[]);





void rdma_nl_unregister(unsigned int index);
# 63 "../include/rdma/rdma_netlink.h"
void *ibnl_put_msg(struct sk_buff *skb, struct nlmsghdr **nlh, int seq,
     int len, int client, int op, int flags);
# 74 "../include/rdma/rdma_netlink.h"
int ibnl_put_attr(struct sk_buff *skb, struct nlmsghdr *nlh,
    int len, void *data, int type);
# 84 "../include/rdma/rdma_netlink.h"
int rdma_nl_unicast(struct net *net, struct sk_buff *skb, u32 pid);
# 93 "../include/rdma/rdma_netlink.h"
int rdma_nl_unicast_wait(struct net *net, struct sk_buff *skb, __u32 pid);
# 103 "../include/rdma/rdma_netlink.h"
int rdma_nl_multicast(struct net *net, struct sk_buff *skb,
        unsigned int group, gfp_t flags);






bool rdma_nl_chk_listeners(unsigned int group);

struct rdma_link_ops {
 struct list_head list;
 const char *type;
 int (*newlink)(const char *ibdev_name, struct net_device *ndev);
};

void rdma_link_register(struct rdma_link_ops *ops);
void rdma_link_unregister(struct rdma_link_ops *ops);
# 14 "../include/rdma/rdma_counter.h" 2

struct ib_device;
struct ib_qp;

struct auto_mode_param {
 int qp_type;
};

struct rdma_counter_mode {
 enum rdma_nl_counter_mode mode;
 enum rdma_nl_counter_mask mask;
 struct auto_mode_param param;
};

struct rdma_port_counter {
 struct rdma_counter_mode mode;
 struct rdma_hw_stats *hstats;
 unsigned int num_counters;
 struct mutex lock;
};

struct rdma_counter {
 struct rdma_restrack_entry res;
 struct ib_device *device;
 uint32_t id;
 struct kref kref;
 struct rdma_counter_mode mode;
 struct mutex lock;
 struct rdma_hw_stats *stats;
 u32 port;
};

void rdma_counter_init(struct ib_device *dev);
void rdma_counter_release(struct ib_device *dev);
int rdma_counter_set_auto_mode(struct ib_device *dev, u32 port,
          enum rdma_nl_counter_mask mask,
          struct netlink_ext_ack *extack);
int rdma_counter_bind_qp_auto(struct ib_qp *qp, u32 port);
int rdma_counter_unbind_qp(struct ib_qp *qp, bool force);

int rdma_counter_query_stats(struct rdma_counter *counter);
u64 rdma_counter_get_hwstat_value(struct ib_device *dev, u32 port, u32 index);
int rdma_counter_bind_qpn(struct ib_device *dev, u32 port,
     u32 qp_num, u32 counter_id);
int rdma_counter_bind_qpn_alloc(struct ib_device *dev, u32 port,
    u32 qp_num, u32 *counter_id);
int rdma_counter_unbind_qpn(struct ib_device *dev, u32 port,
       u32 qp_num, u32 counter_id);
int rdma_counter_get_mode(struct ib_device *dev, u32 port,
     enum rdma_nl_counter_mode *mode,
     enum rdma_nl_counter_mask *mask);

int rdma_counter_modify(struct ib_device *dev, u32 port,
   unsigned int index, bool enable);
# 41 "../include/rdma/ib_verbs.h" 2

# 1 "../include/rdma/signature.h" 1
# 11 "../include/rdma/signature.h"
enum ib_signature_prot_cap {
 IB_PROT_T10DIF_TYPE_1 = 1,
 IB_PROT_T10DIF_TYPE_2 = 1 << 1,
 IB_PROT_T10DIF_TYPE_3 = 1 << 2,
};

enum ib_signature_guard_cap {
 IB_GUARD_T10DIF_CRC = 1,
 IB_GUARD_T10DIF_CSUM = 1 << 1,
};






enum ib_signature_type {
 IB_SIG_TYPE_NONE,
 IB_SIG_TYPE_T10_DIF,
};






enum ib_t10_dif_bg_type {
 IB_T10DIF_CRC,
 IB_T10DIF_CSUM,
};
# 55 "../include/rdma/signature.h"
struct ib_t10_dif_domain {
 enum ib_t10_dif_bg_type bg_type;
 u16 pi_interval;
 u16 bg;
 u16 app_tag;
 u32 ref_tag;
 bool ref_remap;
 bool app_escape;
 bool ref_escape;
 u16 apptag_check_mask;
};







struct ib_sig_domain {
 enum ib_signature_type sig_type;
 union {
  struct ib_t10_dif_domain dif;
 } sig;
};
# 87 "../include/rdma/signature.h"
struct ib_sig_attrs {
 u8 check_mask;
 struct ib_sig_domain mem;
 struct ib_sig_domain wire;
 int meta_length;
};

enum ib_sig_err_type {
 IB_SIG_BAD_GUARD,
 IB_SIG_BAD_REFTAG,
 IB_SIG_BAD_APPTAG,
};
# 107 "../include/rdma/signature.h"
enum {
 IB_SIG_CHECK_GUARD = 0xc0,
 IB_SIG_CHECK_APPTAG = 0x30,
 IB_SIG_CHECK_REFTAG = 0x0f,
};




struct ib_sig_err {
 enum ib_sig_err_type err_type;
 u32 expected;
 u32 actual;
 u64 sig_err_offset;
 u32 key;
};
# 43 "../include/rdma/ib_verbs.h" 2
# 1 "../include/uapi/rdma/rdma_user_ioctl.h" 1
# 37 "../include/uapi/rdma/rdma_user_ioctl.h"
# 1 "../include/uapi/rdma/ib_user_mad.h" 1
# 39 "../include/uapi/rdma/ib_user_mad.h"
# 1 "../include/uapi/rdma/rdma_user_ioctl.h" 1
# 40 "../include/uapi/rdma/ib_user_mad.h" 2
# 73 "../include/uapi/rdma/ib_user_mad.h"
struct ib_user_mad_hdr_old {
 __u32 id;
 __u32 status;
 __u32 timeout_ms;
 __u32 retries;
 __u32 length;
 __be32 qpn;
 __be32 qkey;
 __be16 lid;
 __u8 sl;
 __u8 path_bits;
 __u8 grh_present;
 __u8 gid_index;
 __u8 hop_limit;
 __u8 traffic_class;
 __u8 gid[16];
 __be32 flow_label;
};
# 117 "../include/uapi/rdma/ib_user_mad.h"
struct ib_user_mad_hdr {
 __u32 id;
 __u32 status;
 __u32 timeout_ms;
 __u32 retries;
 __u32 length;
 __be32 qpn;
 __be32 qkey;
 __be16 lid;
 __u8 sl;
 __u8 path_bits;
 __u8 grh_present;
 __u8 gid_index;
 __u8 hop_limit;
 __u8 traffic_class;
 __u8 gid[16];
 __be32 flow_label;
 __u16 pkey_index;
 __u8 reserved[6];
};







struct ib_user_mad {
 struct ib_user_mad_hdr hdr;
 __u64 __attribute__((aligned(8))) data[];
};
# 166 "../include/uapi/rdma/ib_user_mad.h"
typedef unsigned long __attribute__((aligned(4))) packed_ulong;
# 185 "../include/uapi/rdma/ib_user_mad.h"
struct ib_user_mad_reg_req {
 __u32 id;
 packed_ulong method_mask[(128 / (8 * sizeof (long)))];
 __u8 qpn;
 __u8 mgmt_class;
 __u8 mgmt_class_version;
 __u8 oui[3];
 __u8 rmpp_version;
};
# 217 "../include/uapi/rdma/ib_user_mad.h"
enum {
 IB_USER_MAD_USER_RMPP = (1 << 0),
};

struct ib_user_mad_reg_req2 {
 __u32 id;
 __u32 qpn;
 __u8 mgmt_class;
 __u8 mgmt_class_version;
 __u16 res;
 __u32 flags;
 __u64 __attribute__((aligned(8))) method_mask[2];
 __u32 oui;
 __u8 rmpp_version;
 __u8 reserved[3];
};
# 38 "../include/uapi/rdma/rdma_user_ioctl.h" 2
# 1 "../include/uapi/rdma/hfi/hfi1_ioctl.h" 1
# 62 "../include/uapi/rdma/hfi/hfi1_ioctl.h"
struct hfi1_user_info {




 __u32 userversion;
 __u32 pad;






 __u16 subctxt_cnt;
 __u16 subctxt_id;

 __u8 uuid[16];
};

struct hfi1_ctxt_info {
 __u64 __attribute__((aligned(8))) runtime_flags;
 __u32 rcvegr_size;
 __u16 num_active;
 __u16 unit;
 __u16 ctxt;
 __u16 subctxt;
 __u16 rcvtids;
 __u16 credits;
 __u16 numa_node;
 __u16 rec_cpu;
 __u16 send_ctxt;
 __u16 egrtids;
 __u16 rcvhdrq_cnt;
 __u16 rcvhdrq_entsize;
 __u16 sdma_ring_size;
};

struct hfi1_tid_info {

 __u64 __attribute__((aligned(8))) vaddr;

 __u64 __attribute__((aligned(8))) tidlist;

 __u32 tidcnt;

 __u32 length;
};
# 120 "../include/uapi/rdma/hfi/hfi1_ioctl.h"
struct hfi1_base_info {

 __u32 hw_version;

 __u32 sw_version;

 __u16 jkey;
 __u16 padding1;




 __u32 bthqp;

 __u64 __attribute__((aligned(8))) sc_credits_addr;




 __u64 __attribute__((aligned(8))) pio_bufbase_sop;




 __u64 __attribute__((aligned(8))) pio_bufbase;

 __u64 __attribute__((aligned(8))) rcvhdr_bufbase;

 __u64 __attribute__((aligned(8))) rcvegr_bufbase;

 __u64 __attribute__((aligned(8))) sdma_comp_bufbase;







 __u64 __attribute__((aligned(8))) user_regbase;

 __u64 __attribute__((aligned(8))) events_bufbase;

 __u64 __attribute__((aligned(8))) status_bufbase;

 __u64 __attribute__((aligned(8))) rcvhdrtail_base;





 __u64 __attribute__((aligned(8))) subctxt_uregbase;
 __u64 __attribute__((aligned(8))) subctxt_rcvegrbuf;
 __u64 __attribute__((aligned(8))) subctxt_rcvhdrbuf;
};
# 39 "../include/uapi/rdma/rdma_user_ioctl.h" 2
# 1 "../include/uapi/rdma/rdma_user_ioctl_cmds.h" 1
# 44 "../include/uapi/rdma/rdma_user_ioctl_cmds.h"
enum {

 UVERBS_ATTR_F_MANDATORY = 1U << 0,




 UVERBS_ATTR_F_VALID_OUTPUT = 1U << 1,
};

struct ib_uverbs_attr {
 __u16 attr_id;
 __u16 len;
 __u16 flags;
 union {
  struct {
   __u8 elem_id;
   __u8 reserved;
  } enum_data;
  __u16 reserved;
 } attr_data;
 union {




  __u64 __attribute__((aligned(8))) data;

  __s64 data_s64;
 };
};

struct ib_uverbs_ioctl_hdr {
 __u16 length;
 __u16 object_id;
 __u16 method_id;
 __u16 num_attrs;
 __u64 __attribute__((aligned(8))) reserved1;
 __u32 driver_id;
 __u32 reserved2;
 struct ib_uverbs_attr attrs[];
};
# 40 "../include/uapi/rdma/rdma_user_ioctl.h" 2
# 44 "../include/rdma/ib_verbs.h" 2
# 1 "../include/uapi/rdma/ib_user_ioctl_verbs.h" 1
# 47 "../include/uapi/rdma/ib_user_ioctl_verbs.h"
enum ib_uverbs_core_support {
 IB_UVERBS_CORE_SUPPORT_OPTIONAL_MR_ACCESS = 1 << 0,
};

enum ib_uverbs_access_flags {
 IB_UVERBS_ACCESS_LOCAL_WRITE = 1 << 0,
 IB_UVERBS_ACCESS_REMOTE_WRITE = 1 << 1,
 IB_UVERBS_ACCESS_REMOTE_READ = 1 << 2,
 IB_UVERBS_ACCESS_REMOTE_ATOMIC = 1 << 3,
 IB_UVERBS_ACCESS_MW_BIND = 1 << 4,
 IB_UVERBS_ACCESS_ZERO_BASED = 1 << 5,
 IB_UVERBS_ACCESS_ON_DEMAND = 1 << 6,
 IB_UVERBS_ACCESS_HUGETLB = 1 << 7,
 IB_UVERBS_ACCESS_FLUSH_GLOBAL = 1 << 8,
 IB_UVERBS_ACCESS_FLUSH_PERSISTENT = 1 << 9,

 IB_UVERBS_ACCESS_RELAXED_ORDERING = (1 << 20),
 IB_UVERBS_ACCESS_OPTIONAL_RANGE =
  (((1 << 29) << 1) - 1) &
  ~((1 << 20) - 1)
};

enum ib_uverbs_srq_type {
 IB_UVERBS_SRQT_BASIC,
 IB_UVERBS_SRQT_XRC,
 IB_UVERBS_SRQT_TM,
};

enum ib_uverbs_wq_type {
 IB_UVERBS_WQT_RQ,
};

enum ib_uverbs_wq_flags {
 IB_UVERBS_WQ_FLAGS_CVLAN_STRIPPING = 1 << 0,
 IB_UVERBS_WQ_FLAGS_SCATTER_FCS = 1 << 1,
 IB_UVERBS_WQ_FLAGS_DELAY_DROP = 1 << 2,
 IB_UVERBS_WQ_FLAGS_PCI_WRITE_END_PADDING = 1 << 3,
};

enum ib_uverbs_qp_type {
 IB_UVERBS_QPT_RC = 2,
 IB_UVERBS_QPT_UC,
 IB_UVERBS_QPT_UD,
 IB_UVERBS_QPT_RAW_PACKET = 8,
 IB_UVERBS_QPT_XRC_INI,
 IB_UVERBS_QPT_XRC_TGT,
 IB_UVERBS_QPT_DRIVER = 0xFF,
};

enum ib_uverbs_qp_create_flags {
 IB_UVERBS_QP_CREATE_BLOCK_MULTICAST_LOOPBACK = 1 << 1,
 IB_UVERBS_QP_CREATE_SCATTER_FCS = 1 << 8,
 IB_UVERBS_QP_CREATE_CVLAN_STRIPPING = 1 << 9,
 IB_UVERBS_QP_CREATE_PCI_WRITE_END_PADDING = 1 << 11,
 IB_UVERBS_QP_CREATE_SQ_SIG_ALL = 1 << 12,
};

enum ib_uverbs_query_port_cap_flags {
 IB_UVERBS_PCF_SM = 1 << 1,
 IB_UVERBS_PCF_NOTICE_SUP = 1 << 2,
 IB_UVERBS_PCF_TRAP_SUP = 1 << 3,
 IB_UVERBS_PCF_OPT_IPD_SUP = 1 << 4,
 IB_UVERBS_PCF_AUTO_MIGR_SUP = 1 << 5,
 IB_UVERBS_PCF_SL_MAP_SUP = 1 << 6,
 IB_UVERBS_PCF_MKEY_NVRAM = 1 << 7,
 IB_UVERBS_PCF_PKEY_NVRAM = 1 << 8,
 IB_UVERBS_PCF_LED_INFO_SUP = 1 << 9,
 IB_UVERBS_PCF_SM_DISABLED = 1 << 10,
 IB_UVERBS_PCF_SYS_IMAGE_GUID_SUP = 1 << 11,
 IB_UVERBS_PCF_PKEY_SW_EXT_PORT_TRAP_SUP = 1 << 12,
 IB_UVERBS_PCF_EXTENDED_SPEEDS_SUP = 1 << 14,
 IB_UVERBS_PCF_CM_SUP = 1 << 16,
 IB_UVERBS_PCF_SNMP_TUNNEL_SUP = 1 << 17,
 IB_UVERBS_PCF_REINIT_SUP = 1 << 18,
 IB_UVERBS_PCF_DEVICE_MGMT_SUP = 1 << 19,
 IB_UVERBS_PCF_VENDOR_CLASS_SUP = 1 << 20,
 IB_UVERBS_PCF_DR_NOTICE_SUP = 1 << 21,
 IB_UVERBS_PCF_CAP_MASK_NOTICE_SUP = 1 << 22,
 IB_UVERBS_PCF_BOOT_MGMT_SUP = 1 << 23,
 IB_UVERBS_PCF_LINK_LATENCY_SUP = 1 << 24,
 IB_UVERBS_PCF_CLIENT_REG_SUP = 1 << 25,




 IB_UVERBS_PCF_LINK_SPEED_WIDTH_TABLE_SUP = 1 << 27,
 IB_UVERBS_PCF_VENDOR_SPECIFIC_MADS_TABLE_SUP = 1 << 28,
 IB_UVERBS_PCF_MCAST_PKEY_TRAP_SUPPRESSION_SUP = 1 << 29,
 IB_UVERBS_PCF_MCAST_FDB_TOP_SUP = 1 << 30,
 IB_UVERBS_PCF_HIERARCHY_INFO_SUP = 1ULL << 31,


 IB_UVERBS_PCF_IP_BASED_GIDS = 1 << 26,
};

enum ib_uverbs_query_port_flags {
 IB_UVERBS_QPF_GRH_REQUIRED = 1 << 0,
};

enum ib_uverbs_flow_action_esp_keymat {
 IB_UVERBS_FLOW_ACTION_ESP_KEYMAT_AES_GCM,
};

enum ib_uverbs_flow_action_esp_keymat_aes_gcm_iv_algo {
 IB_UVERBS_FLOW_ACTION_IV_ALGO_SEQ,
};

struct ib_uverbs_flow_action_esp_keymat_aes_gcm {
 __u64 __attribute__((aligned(8))) iv;
 __u32 iv_algo;

 __u32 salt;
 __u32 icv_len;

 __u32 key_len;
 __u32 aes_key[256 / 32];
};

enum ib_uverbs_flow_action_esp_replay {
 IB_UVERBS_FLOW_ACTION_ESP_REPLAY_NONE,
 IB_UVERBS_FLOW_ACTION_ESP_REPLAY_BMP,
};

struct ib_uverbs_flow_action_esp_replay_bmp {
 __u32 size;
};

enum ib_uverbs_flow_action_esp_flags {
 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_INLINE_CRYPTO = 0UL << 0,
 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_FULL_OFFLOAD = 1UL << 0,

 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_TUNNEL = 0UL << 1,
 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_TRANSPORT = 1UL << 1,

 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_DECRYPT = 0UL << 2,
 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_ENCRYPT = 1UL << 2,

 IB_UVERBS_FLOW_ACTION_ESP_FLAGS_ESN_NEW_WINDOW = 1UL << 3,
};

struct ib_uverbs_flow_action_esp_encap {



 __u64 __attribute__((aligned(8))) val_ptr;
 __u64 __attribute__((aligned(8))) next_ptr;
 __u16 len;
 __u16 type;
};

struct ib_uverbs_flow_action_esp {
 __u32 spi;
 __u32 seq;
 __u32 tfc_pad;
 __u32 flags;
 __u64 __attribute__((aligned(8))) hard_limit_pkts;
};

enum ib_uverbs_read_counters_flags {

 IB_UVERBS_READ_COUNTERS_PREFER_CACHED = 1 << 0,
};

enum ib_uverbs_advise_mr_advice {
 IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH,
 IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE,
 IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT,
};

enum ib_uverbs_advise_mr_flag {
 IB_UVERBS_ADVISE_MR_FLAG_FLUSH = 1 << 0,
};

struct ib_uverbs_query_port_resp_ex {
 struct ib_uverbs_query_port_resp legacy_resp;
 __u16 port_cap_flags2;
 __u8 reserved[2];
 __u32 active_speed_ex;
};

struct ib_uverbs_qp_cap {
 __u32 max_send_wr;
 __u32 max_recv_wr;
 __u32 max_send_sge;
 __u32 max_recv_sge;
 __u32 max_inline_data;
};

enum rdma_driver_id {
 RDMA_DRIVER_UNKNOWN,
 RDMA_DRIVER_MLX5,
 RDMA_DRIVER_MLX4,
 RDMA_DRIVER_CXGB3,
 RDMA_DRIVER_CXGB4,
 RDMA_DRIVER_MTHCA,
 RDMA_DRIVER_BNXT_RE,
 RDMA_DRIVER_OCRDMA,
 RDMA_DRIVER_NES,
 RDMA_DRIVER_I40IW,
 RDMA_DRIVER_IRDMA = RDMA_DRIVER_I40IW,
 RDMA_DRIVER_VMW_PVRDMA,
 RDMA_DRIVER_QEDR,
 RDMA_DRIVER_HNS,
 RDMA_DRIVER_USNIC,
 RDMA_DRIVER_RXE,
 RDMA_DRIVER_HFI1,
 RDMA_DRIVER_QIB,
 RDMA_DRIVER_EFA,
 RDMA_DRIVER_SIW,
 RDMA_DRIVER_ERDMA,
 RDMA_DRIVER_MANA,
};

enum ib_uverbs_gid_type {
 IB_UVERBS_GID_TYPE_IB,
 IB_UVERBS_GID_TYPE_ROCE_V1,
 IB_UVERBS_GID_TYPE_ROCE_V2,
};

struct ib_uverbs_gid_entry {
 __u64 __attribute__((aligned(8))) gid[2];
 __u32 gid_index;
 __u32 port_num;
 __u32 gid_type;
 __u32 netdev_ifindex;
};
# 45 "../include/rdma/ib_verbs.h" 2



struct ib_umem_odp;
struct ib_uqp_object;
struct ib_usrq_object;
struct ib_uwq_object;
struct rdma_cm_id;
struct ib_port;
struct hw_stats_device_data;

extern struct workqueue_struct *ib_wq;
extern struct workqueue_struct *ib_comp_wq;
extern struct workqueue_struct *ib_comp_unbound_wq;

struct ib_ucq_object;

__attribute__((__format__(printf, 3, 4))) __attribute__((__cold__))
void ibdev_printk(const char *level, const struct ib_device *ibdev,
    const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_emerg(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_alert(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_crit(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_err(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_warn(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_notice(const struct ib_device *ibdev, const char *format, ...);
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
void ibdev_info(const struct ib_device *ibdev, const char *format, ...);






__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void ibdev_dbg(const struct ib_device *ibdev, const char *format, ...) {}
# 128 "../include/rdma/ib_verbs.h"
__attribute__((__format__(printf, 2, 3))) __attribute__((__cold__))
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
void ibdev_dbg_ratelimited(const struct ib_device *ibdev, const char *format, ...) {}


union ib_gid {
 u8 raw[16];
 struct {
  __be64 subnet_prefix;
  __be64 interface_id;
 } global;
};

extern union ib_gid zgid;

enum ib_gid_type {
 IB_GID_TYPE_IB = IB_UVERBS_GID_TYPE_IB,
 IB_GID_TYPE_ROCE = IB_UVERBS_GID_TYPE_ROCE_V1,
 IB_GID_TYPE_ROCE_UDP_ENCAP = IB_UVERBS_GID_TYPE_ROCE_V2,
 IB_GID_TYPE_SIZE
};


struct ib_gid_attr {
 struct net_device *ndev;
 struct ib_device *device;
 union ib_gid gid;
 enum ib_gid_type gid_type;
 u16 index;
 u32 port_num;
};

enum {

 IB_SA_WELL_KNOWN_GUID = ((((1ULL))) << (57)) | 2,
};

enum rdma_transport_type {
 RDMA_TRANSPORT_IB,
 RDMA_TRANSPORT_IWARP,
 RDMA_TRANSPORT_USNIC,
 RDMA_TRANSPORT_USNIC_UDP,
 RDMA_TRANSPORT_UNSPECIFIED,
};

enum rdma_protocol_type {
 RDMA_PROTOCOL_IB,
 RDMA_PROTOCOL_IBOE,
 RDMA_PROTOCOL_IWARP,
 RDMA_PROTOCOL_USNIC_UDP
};

__attribute__((__const__)) enum rdma_transport_type
rdma_node_get_transport(unsigned int node_type);

enum rdma_network_type {
 RDMA_NETWORK_IB,
 RDMA_NETWORK_ROCE_V1,
 RDMA_NETWORK_IPV4,
 RDMA_NETWORK_IPV6
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum ib_gid_type ib_network_to_gid_type(enum rdma_network_type network_type)
{
 if (network_type == RDMA_NETWORK_IPV4 ||
     network_type == RDMA_NETWORK_IPV6)
  return IB_GID_TYPE_ROCE_UDP_ENCAP;
 else if (network_type == RDMA_NETWORK_ROCE_V1)
  return IB_GID_TYPE_ROCE;
 else
  return IB_GID_TYPE_IB;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum rdma_network_type
rdma_gid_attr_network_type(const struct ib_gid_attr *attr)
{
 if (attr->gid_type == IB_GID_TYPE_IB)
  return RDMA_NETWORK_IB;

 if (attr->gid_type == IB_GID_TYPE_ROCE)
  return RDMA_NETWORK_ROCE_V1;

 if (ipv6_addr_v4mapped((struct in6_addr *)&attr->gid))
  return RDMA_NETWORK_IPV4;
 else
  return RDMA_NETWORK_IPV6;
}

enum rdma_link_layer {
 IB_LINK_LAYER_UNSPECIFIED,
 IB_LINK_LAYER_INFINIBAND,
 IB_LINK_LAYER_ETHERNET,
};

enum ib_device_cap_flags {
 IB_DEVICE_RESIZE_MAX_WR = IB_UVERBS_DEVICE_RESIZE_MAX_WR,
 IB_DEVICE_BAD_PKEY_CNTR = IB_UVERBS_DEVICE_BAD_PKEY_CNTR,
 IB_DEVICE_BAD_QKEY_CNTR = IB_UVERBS_DEVICE_BAD_QKEY_CNTR,
 IB_DEVICE_RAW_MULTI = IB_UVERBS_DEVICE_RAW_MULTI,
 IB_DEVICE_AUTO_PATH_MIG = IB_UVERBS_DEVICE_AUTO_PATH_MIG,
 IB_DEVICE_CHANGE_PHY_PORT = IB_UVERBS_DEVICE_CHANGE_PHY_PORT,
 IB_DEVICE_UD_AV_PORT_ENFORCE = IB_UVERBS_DEVICE_UD_AV_PORT_ENFORCE,
 IB_DEVICE_CURR_QP_STATE_MOD = IB_UVERBS_DEVICE_CURR_QP_STATE_MOD,
 IB_DEVICE_SHUTDOWN_PORT = IB_UVERBS_DEVICE_SHUTDOWN_PORT,

 IB_DEVICE_PORT_ACTIVE_EVENT = IB_UVERBS_DEVICE_PORT_ACTIVE_EVENT,
 IB_DEVICE_SYS_IMAGE_GUID = IB_UVERBS_DEVICE_SYS_IMAGE_GUID,
 IB_DEVICE_RC_RNR_NAK_GEN = IB_UVERBS_DEVICE_RC_RNR_NAK_GEN,
 IB_DEVICE_SRQ_RESIZE = IB_UVERBS_DEVICE_SRQ_RESIZE,
 IB_DEVICE_N_NOTIFY_CQ = IB_UVERBS_DEVICE_N_NOTIFY_CQ,


 IB_DEVICE_MEM_WINDOW = IB_UVERBS_DEVICE_MEM_WINDOW,







 IB_DEVICE_UD_IP_CSUM = IB_UVERBS_DEVICE_UD_IP_CSUM,
 IB_DEVICE_XRC = IB_UVERBS_DEVICE_XRC,
# 260 "../include/rdma/ib_verbs.h"
 IB_DEVICE_MEM_MGT_EXTENSIONS = IB_UVERBS_DEVICE_MEM_MGT_EXTENSIONS,
 IB_DEVICE_MEM_WINDOW_TYPE_2A = IB_UVERBS_DEVICE_MEM_WINDOW_TYPE_2A,
 IB_DEVICE_MEM_WINDOW_TYPE_2B = IB_UVERBS_DEVICE_MEM_WINDOW_TYPE_2B,
 IB_DEVICE_RC_IP_CSUM = IB_UVERBS_DEVICE_RC_IP_CSUM,

 IB_DEVICE_RAW_IP_CSUM = IB_UVERBS_DEVICE_RAW_IP_CSUM,
 IB_DEVICE_MANAGED_FLOW_STEERING =
  IB_UVERBS_DEVICE_MANAGED_FLOW_STEERING,

 IB_DEVICE_RAW_SCATTER_FCS = IB_UVERBS_DEVICE_RAW_SCATTER_FCS,

 IB_DEVICE_PCI_WRITE_END_PADDING =
  IB_UVERBS_DEVICE_PCI_WRITE_END_PADDING,

 IB_DEVICE_FLUSH_GLOBAL = IB_UVERBS_DEVICE_FLUSH_GLOBAL,
 IB_DEVICE_FLUSH_PERSISTENT = IB_UVERBS_DEVICE_FLUSH_PERSISTENT,
 IB_DEVICE_ATOMIC_WRITE = IB_UVERBS_DEVICE_ATOMIC_WRITE,
};

enum ib_kernel_cap_flags {







 IBK_LOCAL_DMA_LKEY = 1 << 0,

 IBK_INTEGRITY_HANDOVER = 1 << 1,

 IBK_ON_DEMAND_PAGING = 1 << 2,

 IBK_SG_GAPS_REG = 1 << 3,

 IBK_ALLOW_USER_UNREG = 1 << 4,


 IBK_BLOCK_MULTICAST_LOOPBACK = 1 << 5,

 IBK_UD_TSO = 1 << 6,







 IBK_VIRTUAL_FUNCTION = 1 << 7,

 IBK_RDMA_NETDEV_OPA = 1 << 8,
};

enum ib_atomic_cap {
 IB_ATOMIC_NONE,
 IB_ATOMIC_HCA,
 IB_ATOMIC_GLOB
};

enum ib_odp_general_cap_bits {
 IB_ODP_SUPPORT = 1 << 0,
 IB_ODP_SUPPORT_IMPLICIT = 1 << 1,
};

enum ib_odp_transport_cap_bits {
 IB_ODP_SUPPORT_SEND = 1 << 0,
 IB_ODP_SUPPORT_RECV = 1 << 1,
 IB_ODP_SUPPORT_WRITE = 1 << 2,
 IB_ODP_SUPPORT_READ = 1 << 3,
 IB_ODP_SUPPORT_ATOMIC = 1 << 4,
 IB_ODP_SUPPORT_SRQ_RECV = 1 << 5,
};

struct ib_odp_caps {
 uint64_t general_caps;
 struct {
  uint32_t rc_odp_caps;
  uint32_t uc_odp_caps;
  uint32_t ud_odp_caps;
  uint32_t xrc_odp_caps;
 } per_transport_caps;
};

struct ib_rss_caps {




 u32 supported_qpts;
 u32 max_rwq_indirection_tables;
 u32 max_rwq_indirection_table_size;
};

enum ib_tm_cap_flags {

 IB_TM_CAP_RNDV_RC = 1 << 0,
};

struct ib_tm_caps {

 u32 max_rndv_hdr_size;

 u32 max_num_tags;

 u32 flags;

 u32 max_ops;

 u32 max_sge;
};

struct ib_cq_init_attr {
 unsigned int cqe;
 u32 comp_vector;
 u32 flags;
};

enum ib_cq_attr_mask {
 IB_CQ_MODERATE = 1 << 0,
};

struct ib_cq_caps {
 u16 max_cq_moderation_count;
 u16 max_cq_moderation_period;
};

struct ib_dm_mr_attr {
 u64 length;
 u64 offset;
 u32 access_flags;
};

struct ib_dm_alloc_attr {
 u64 length;
 u32 alignment;
 u32 flags;
};

struct ib_device_attr {
 u64 fw_ver;
 __be64 sys_image_guid;
 u64 max_mr_size;
 u64 page_size_cap;
 u32 vendor_id;
 u32 vendor_part_id;
 u32 hw_ver;
 int max_qp;
 int max_qp_wr;
 u64 device_cap_flags;
 u64 kernel_cap_flags;
 int max_send_sge;
 int max_recv_sge;
 int max_sge_rd;
 int max_cq;
 int max_cqe;
 int max_mr;
 int max_pd;
 int max_qp_rd_atom;
 int max_ee_rd_atom;
 int max_res_rd_atom;
 int max_qp_init_rd_atom;
 int max_ee_init_rd_atom;
 enum ib_atomic_cap atomic_cap;
 enum ib_atomic_cap masked_atomic_cap;
 int max_ee;
 int max_rdd;
 int max_mw;
 int max_raw_ipv6_qp;
 int max_raw_ethy_qp;
 int max_mcast_grp;
 int max_mcast_qp_attach;
 int max_total_mcast_qp_attach;
 int max_ah;
 int max_srq;
 int max_srq_wr;
 int max_srq_sge;
 unsigned int max_fast_reg_page_list_len;
 unsigned int max_pi_fast_reg_page_list_len;
 u16 max_pkeys;
 u8 local_ca_ack_delay;
 int sig_prot_cap;
 int sig_guard_cap;
 struct ib_odp_caps odp_caps;
 uint64_t timestamp_mask;
 uint64_t hca_core_clock;
 struct ib_rss_caps rss_caps;
 u32 max_wq_type_rq;
 u32 raw_packet_caps;
 struct ib_tm_caps tm_caps;
 struct ib_cq_caps cq_caps;
 u64 max_dm_size;

 u32 max_sgl_rd;
};

enum ib_mtu {
 IB_MTU_256 = 1,
 IB_MTU_512 = 2,
 IB_MTU_1024 = 3,
 IB_MTU_2048 = 4,
 IB_MTU_4096 = 5
};

enum opa_mtu {
 OPA_MTU_8192 = 6,
 OPA_MTU_10240 = 7
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_mtu_enum_to_int(enum ib_mtu mtu)
{
 switch (mtu) {
 case IB_MTU_256: return 256;
 case IB_MTU_512: return 512;
 case IB_MTU_1024: return 1024;
 case IB_MTU_2048: return 2048;
 case IB_MTU_4096: return 4096;
 default: return -1;
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum ib_mtu ib_mtu_int_to_enum(int mtu)
{
 if (mtu >= 4096)
  return IB_MTU_4096;
 else if (mtu >= 2048)
  return IB_MTU_2048;
 else if (mtu >= 1024)
  return IB_MTU_1024;
 else if (mtu >= 512)
  return IB_MTU_512;
 else
  return IB_MTU_256;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int opa_mtu_enum_to_int(enum opa_mtu mtu)
{
 switch (mtu) {
 case OPA_MTU_8192:
  return 8192;
 case OPA_MTU_10240:
  return 10240;
 default:
  return(ib_mtu_enum_to_int((enum ib_mtu)mtu));
 }
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum opa_mtu opa_mtu_int_to_enum(int mtu)
{
 if (mtu >= 10240)
  return OPA_MTU_10240;
 else if (mtu >= 8192)
  return OPA_MTU_8192;
 else
  return ((enum opa_mtu)ib_mtu_int_to_enum(mtu));
}

enum ib_port_state {
 IB_PORT_NOP = 0,
 IB_PORT_DOWN = 1,
 IB_PORT_INIT = 2,
 IB_PORT_ARMED = 3,
 IB_PORT_ACTIVE = 4,
 IB_PORT_ACTIVE_DEFER = 5
};

enum ib_port_phys_state {
 IB_PORT_PHYS_STATE_SLEEP = 1,
 IB_PORT_PHYS_STATE_POLLING = 2,
 IB_PORT_PHYS_STATE_DISABLED = 3,
 IB_PORT_PHYS_STATE_PORT_CONFIGURATION_TRAINING = 4,
 IB_PORT_PHYS_STATE_LINK_UP = 5,
 IB_PORT_PHYS_STATE_LINK_ERROR_RECOVERY = 6,
 IB_PORT_PHYS_STATE_PHY_TEST = 7,
};

enum ib_port_width {
 IB_WIDTH_1X = 1,
 IB_WIDTH_2X = 16,
 IB_WIDTH_4X = 2,
 IB_WIDTH_8X = 4,
 IB_WIDTH_12X = 8
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_width_enum_to_int(enum ib_port_width width)
{
 switch (width) {
 case IB_WIDTH_1X: return 1;
 case IB_WIDTH_2X: return 2;
 case IB_WIDTH_4X: return 4;
 case IB_WIDTH_8X: return 8;
 case IB_WIDTH_12X: return 12;
 default: return -1;
 }
}

enum ib_port_speed {
 IB_SPEED_SDR = 1,
 IB_SPEED_DDR = 2,
 IB_SPEED_QDR = 4,
 IB_SPEED_FDR10 = 8,
 IB_SPEED_FDR = 16,
 IB_SPEED_EDR = 32,
 IB_SPEED_HDR = 64,
 IB_SPEED_NDR = 128,
 IB_SPEED_XDR = 256,
};

enum ib_stat_flag {
 IB_STAT_FLAG_OPTIONAL = 1 << 0,
};







struct rdma_stat_desc {
 const char *name;
 unsigned int flags;
 const void *priv;
};
# 604 "../include/rdma/ib_verbs.h"
struct rdma_hw_stats {
 struct mutex lock;
 unsigned long timestamp;
 unsigned long lifespan;
 const struct rdma_stat_desc *descs;
 unsigned long *is_disabled;
 int num_counters;
 u64 value[] __attribute__((__counted_by__(num_counters)));
};



struct rdma_hw_stats *rdma_alloc_hw_stats_struct(
 const struct rdma_stat_desc *descs, int num_counters,
 unsigned long lifespan);

void rdma_free_hw_stats_struct(struct rdma_hw_stats *stats);
# 677 "../include/rdma/ib_verbs.h"
struct ib_port_attr {
 u64 subnet_prefix;
 enum ib_port_state state;
 enum ib_mtu max_mtu;
 enum ib_mtu active_mtu;
 u32 phys_mtu;
 int gid_tbl_len;
 unsigned int ip_gids:1;

 u32 port_cap_flags;
 u32 max_msg_sz;
 u32 bad_pkey_cntr;
 u32 qkey_viol_cntr;
 u16 pkey_tbl_len;
 u32 sm_lid;
 u32 lid;
 u8 lmc;
 u8 max_vl_num;
 u8 sm_sl;
 u8 subnet_timeout;
 u8 init_type_reply;
 u8 active_width;
 u16 active_speed;
 u8 phys_state;
 u16 port_cap_flags2;
};

enum ib_device_modify_flags {
 IB_DEVICE_MODIFY_SYS_IMAGE_GUID = 1 << 0,
 IB_DEVICE_MODIFY_NODE_DESC = 1 << 1
};



struct ib_device_modify {
 u64 sys_image_guid;
 char node_desc[64];
};

enum ib_port_modify_flags {
 IB_PORT_SHUTDOWN = 1,
 IB_PORT_INIT_TYPE = (1<<2),
 IB_PORT_RESET_QKEY_CNTR = (1<<3),
 IB_PORT_OPA_MASK_CHG = (1<<4)
};

struct ib_port_modify {
 u32 set_port_cap_mask;
 u32 clr_port_cap_mask;
 u8 init_type;
};

enum ib_event_type {
 IB_EVENT_CQ_ERR,
 IB_EVENT_QP_FATAL,
 IB_EVENT_QP_REQ_ERR,
 IB_EVENT_QP_ACCESS_ERR,
 IB_EVENT_COMM_EST,
 IB_EVENT_SQ_DRAINED,
 IB_EVENT_PATH_MIG,
 IB_EVENT_PATH_MIG_ERR,
 IB_EVENT_DEVICE_FATAL,
 IB_EVENT_PORT_ACTIVE,
 IB_EVENT_PORT_ERR,
 IB_EVENT_LID_CHANGE,
 IB_EVENT_PKEY_CHANGE,
 IB_EVENT_SM_CHANGE,
 IB_EVENT_SRQ_ERR,
 IB_EVENT_SRQ_LIMIT_REACHED,
 IB_EVENT_QP_LAST_WQE_REACHED,
 IB_EVENT_CLIENT_REREGISTER,
 IB_EVENT_GID_CHANGE,
 IB_EVENT_WQ_FATAL,
};

const char *__attribute__((__const__)) ib_event_msg(enum ib_event_type event);

struct ib_event {
 struct ib_device *device;
 union {
  struct ib_cq *cq;
  struct ib_qp *qp;
  struct ib_srq *srq;
  struct ib_wq *wq;
  u32 port_num;
 } element;
 enum ib_event_type event;
};

struct ib_event_handler {
 struct ib_device *device;
 void (*handler)(struct ib_event_handler *, struct ib_event *);
 struct list_head list;
};
# 779 "../include/rdma/ib_verbs.h"
struct ib_global_route {
 const struct ib_gid_attr *sgid_attr;
 union ib_gid dgid;
 u32 flow_label;
 u8 sgid_index;
 u8 hop_limit;
 u8 traffic_class;
};

struct ib_grh {
 __be32 version_tclass_flow;
 __be16 paylen;
 u8 next_hdr;
 u8 hop_limit;
 union ib_gid sgid;
 union ib_gid dgid;
};

union rdma_network_hdr {
 struct ib_grh ibgrh;
 struct {



  u8 reserved[20];
  struct iphdr roce4grh;
 };
};



enum {
 IB_MULTICAST_QPN = 0xffffff
};




enum ib_ah_flags {
 IB_AH_GRH = 1
};

enum ib_rate {
 IB_RATE_PORT_CURRENT = 0,
 IB_RATE_2_5_GBPS = 2,
 IB_RATE_5_GBPS = 5,
 IB_RATE_10_GBPS = 3,
 IB_RATE_20_GBPS = 6,
 IB_RATE_30_GBPS = 4,
 IB_RATE_40_GBPS = 7,
 IB_RATE_60_GBPS = 8,
 IB_RATE_80_GBPS = 9,
 IB_RATE_120_GBPS = 10,
 IB_RATE_14_GBPS = 11,
 IB_RATE_56_GBPS = 12,
 IB_RATE_112_GBPS = 13,
 IB_RATE_168_GBPS = 14,
 IB_RATE_25_GBPS = 15,
 IB_RATE_100_GBPS = 16,
 IB_RATE_200_GBPS = 17,
 IB_RATE_300_GBPS = 18,
 IB_RATE_28_GBPS = 19,
 IB_RATE_50_GBPS = 20,
 IB_RATE_400_GBPS = 21,
 IB_RATE_600_GBPS = 22,
 IB_RATE_800_GBPS = 23,
};







__attribute__((__const__)) int ib_rate_to_mult(enum ib_rate rate);






__attribute__((__const__)) int ib_rate_to_mbps(enum ib_rate rate);
# 880 "../include/rdma/ib_verbs.h"
enum ib_mr_type {
 IB_MR_TYPE_MEM_REG,
 IB_MR_TYPE_SG_GAPS,
 IB_MR_TYPE_DM,
 IB_MR_TYPE_USER,
 IB_MR_TYPE_DMA,
 IB_MR_TYPE_INTEGRITY,
};

enum ib_mr_status_check {
 IB_MR_CHECK_SIG_STATUS = 1,
};
# 901 "../include/rdma/ib_verbs.h"
struct ib_mr_status {
 u32 fail_status;
 struct ib_sig_err sig_err;
};






__attribute__((__const__)) enum ib_rate mult_to_ib_rate(int mult);

struct rdma_ah_init_attr {
 struct rdma_ah_attr *ah_attr;
 u32 flags;
 struct net_device *xmit_slave;
};

enum rdma_ah_attr_type {
 RDMA_AH_ATTR_TYPE_UNDEFINED,
 RDMA_AH_ATTR_TYPE_IB,
 RDMA_AH_ATTR_TYPE_ROCE,
 RDMA_AH_ATTR_TYPE_OPA,
};

struct ib_ah_attr {
 u16 dlid;
 u8 src_path_bits;
};

struct roce_ah_attr {
 u8 dmac[6];
};

struct opa_ah_attr {
 u32 dlid;
 u8 src_path_bits;
 bool make_grd;
};

struct rdma_ah_attr {
 struct ib_global_route grh;
 u8 sl;
 u8 static_rate;
 u32 port_num;
 u8 ah_flags;
 enum rdma_ah_attr_type type;
 union {
  struct ib_ah_attr ib;
  struct roce_ah_attr roce;
  struct opa_ah_attr opa;
 };
};

enum ib_wc_status {
 IB_WC_SUCCESS,
 IB_WC_LOC_LEN_ERR,
 IB_WC_LOC_QP_OP_ERR,
 IB_WC_LOC_EEC_OP_ERR,
 IB_WC_LOC_PROT_ERR,
 IB_WC_WR_FLUSH_ERR,
 IB_WC_MW_BIND_ERR,
 IB_WC_BAD_RESP_ERR,
 IB_WC_LOC_ACCESS_ERR,
 IB_WC_REM_INV_REQ_ERR,
 IB_WC_REM_ACCESS_ERR,
 IB_WC_REM_OP_ERR,
 IB_WC_RETRY_EXC_ERR,
 IB_WC_RNR_RETRY_EXC_ERR,
 IB_WC_LOC_RDD_VIOL_ERR,
 IB_WC_REM_INV_RD_REQ_ERR,
 IB_WC_REM_ABORT_ERR,
 IB_WC_INV_EECN_ERR,
 IB_WC_INV_EEC_STATE_ERR,
 IB_WC_FATAL_ERR,
 IB_WC_RESP_TIMEOUT_ERR,
 IB_WC_GENERAL_ERR
};

const char *__attribute__((__const__)) ib_wc_status_msg(enum ib_wc_status status);

enum ib_wc_opcode {
 IB_WC_SEND = IB_UVERBS_WC_SEND,
 IB_WC_RDMA_WRITE = IB_UVERBS_WC_RDMA_WRITE,
 IB_WC_RDMA_READ = IB_UVERBS_WC_RDMA_READ,
 IB_WC_COMP_SWAP = IB_UVERBS_WC_COMP_SWAP,
 IB_WC_FETCH_ADD = IB_UVERBS_WC_FETCH_ADD,
 IB_WC_BIND_MW = IB_UVERBS_WC_BIND_MW,
 IB_WC_LOCAL_INV = IB_UVERBS_WC_LOCAL_INV,
 IB_WC_LSO = IB_UVERBS_WC_TSO,
 IB_WC_ATOMIC_WRITE = IB_UVERBS_WC_ATOMIC_WRITE,
 IB_WC_REG_MR,
 IB_WC_MASKED_COMP_SWAP,
 IB_WC_MASKED_FETCH_ADD,
 IB_WC_FLUSH = IB_UVERBS_WC_FLUSH,




 IB_WC_RECV = 1 << 7,
 IB_WC_RECV_RDMA_WITH_IMM
};

enum ib_wc_flags {
 IB_WC_GRH = 1,
 IB_WC_WITH_IMM = (1<<1),
 IB_WC_WITH_INVALIDATE = (1<<2),
 IB_WC_IP_CSUM_OK = (1<<3),
 IB_WC_WITH_SMAC = (1<<4),
 IB_WC_WITH_VLAN = (1<<5),
 IB_WC_WITH_NETWORK_HDR_TYPE = (1<<6),
};

struct ib_wc {
 union {
  u64 wr_id;
  struct ib_cqe *wr_cqe;
 };
 enum ib_wc_status status;
 enum ib_wc_opcode opcode;
 u32 vendor_err;
 u32 byte_len;
 struct ib_qp *qp;
 union {
  __be32 imm_data;
  u32 invalidate_rkey;
 } ex;
 u32 src_qp;
 u32 slid;
 int wc_flags;
 u16 pkey_index;
 u8 sl;
 u8 dlid_path_bits;
 u32 port_num;
 u8 smac[6];
 u16 vlan_id;
 u8 network_hdr_type;
};

enum ib_cq_notify_flags {
 IB_CQ_SOLICITED = 1 << 0,
 IB_CQ_NEXT_COMP = 1 << 1,
 IB_CQ_SOLICITED_MASK = IB_CQ_SOLICITED | IB_CQ_NEXT_COMP,
 IB_CQ_REPORT_MISSED_EVENTS = 1 << 2,
};

enum ib_srq_type {
 IB_SRQT_BASIC = IB_UVERBS_SRQT_BASIC,
 IB_SRQT_XRC = IB_UVERBS_SRQT_XRC,
 IB_SRQT_TM = IB_UVERBS_SRQT_TM,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_srq_has_cq(enum ib_srq_type srq_type)
{
 return srq_type == IB_SRQT_XRC ||
        srq_type == IB_SRQT_TM;
}

enum ib_srq_attr_mask {
 IB_SRQ_MAX_WR = 1 << 0,
 IB_SRQ_LIMIT = 1 << 1,
};

struct ib_srq_attr {
 u32 max_wr;
 u32 max_sge;
 u32 srq_limit;
};

struct ib_srq_init_attr {
 void (*event_handler)(struct ib_event *, void *);
 void *srq_context;
 struct ib_srq_attr attr;
 enum ib_srq_type srq_type;

 struct {
  struct ib_cq *cq;
  union {
   struct {
    struct ib_xrcd *xrcd;
   } xrc;

   struct {
    u32 max_num_tags;
   } tag_matching;
  };
 } ext;
};

struct ib_qp_cap {
 u32 max_send_wr;
 u32 max_recv_wr;
 u32 max_send_sge;
 u32 max_recv_sge;
 u32 max_inline_data;






 u32 max_rdma_ctxs;
};

enum ib_sig_type {
 IB_SIGNAL_ALL_WR,
 IB_SIGNAL_REQ_WR
};

enum ib_qp_type {





 IB_QPT_SMI,
 IB_QPT_GSI,

 IB_QPT_RC = IB_UVERBS_QPT_RC,
 IB_QPT_UC = IB_UVERBS_QPT_UC,
 IB_QPT_UD = IB_UVERBS_QPT_UD,
 IB_QPT_RAW_IPV6,
 IB_QPT_RAW_ETHERTYPE,
 IB_QPT_RAW_PACKET = IB_UVERBS_QPT_RAW_PACKET,
 IB_QPT_XRC_INI = IB_UVERBS_QPT_XRC_INI,
 IB_QPT_XRC_TGT = IB_UVERBS_QPT_XRC_TGT,
 IB_QPT_MAX,
 IB_QPT_DRIVER = IB_UVERBS_QPT_DRIVER,




 IB_QPT_RESERVED1 = 0x1000,
 IB_QPT_RESERVED2,
 IB_QPT_RESERVED3,
 IB_QPT_RESERVED4,
 IB_QPT_RESERVED5,
 IB_QPT_RESERVED6,
 IB_QPT_RESERVED7,
 IB_QPT_RESERVED8,
 IB_QPT_RESERVED9,
 IB_QPT_RESERVED10,
};

enum ib_qp_create_flags {
 IB_QP_CREATE_IPOIB_UD_LSO = 1 << 0,
 IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK =
  IB_UVERBS_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
 IB_QP_CREATE_CROSS_CHANNEL = 1 << 2,
 IB_QP_CREATE_MANAGED_SEND = 1 << 3,
 IB_QP_CREATE_MANAGED_RECV = 1 << 4,
 IB_QP_CREATE_NETIF_QP = 1 << 5,
 IB_QP_CREATE_INTEGRITY_EN = 1 << 6,
 IB_QP_CREATE_NETDEV_USE = 1 << 7,
 IB_QP_CREATE_SCATTER_FCS =
  IB_UVERBS_QP_CREATE_SCATTER_FCS,
 IB_QP_CREATE_CVLAN_STRIPPING =
  IB_UVERBS_QP_CREATE_CVLAN_STRIPPING,
 IB_QP_CREATE_SOURCE_QPN = 1 << 10,
 IB_QP_CREATE_PCI_WRITE_END_PADDING =
  IB_UVERBS_QP_CREATE_PCI_WRITE_END_PADDING,

 IB_QP_CREATE_RESERVED_START = 1 << 26,
 IB_QP_CREATE_RESERVED_END = 1 << 31,
};






struct ib_qp_init_attr {

 void (*event_handler)(struct ib_event *, void *);

 void *qp_context;
 struct ib_cq *send_cq;
 struct ib_cq *recv_cq;
 struct ib_srq *srq;
 struct ib_xrcd *xrcd;
 struct ib_qp_cap cap;
 enum ib_sig_type sq_sig_type;
 enum ib_qp_type qp_type;
 u32 create_flags;




 u32 port_num;
 struct ib_rwq_ind_table *rwq_ind_tbl;
 u32 source_qpn;
};

struct ib_qp_open_attr {
 void (*event_handler)(struct ib_event *, void *);
 void *qp_context;
 u32 qp_num;
 enum ib_qp_type qp_type;
};

enum ib_rnr_timeout {
 IB_RNR_TIMER_655_36 = 0,
 IB_RNR_TIMER_000_01 = 1,
 IB_RNR_TIMER_000_02 = 2,
 IB_RNR_TIMER_000_03 = 3,
 IB_RNR_TIMER_000_04 = 4,
 IB_RNR_TIMER_000_06 = 5,
 IB_RNR_TIMER_000_08 = 6,
 IB_RNR_TIMER_000_12 = 7,
 IB_RNR_TIMER_000_16 = 8,
 IB_RNR_TIMER_000_24 = 9,
 IB_RNR_TIMER_000_32 = 10,
 IB_RNR_TIMER_000_48 = 11,
 IB_RNR_TIMER_000_64 = 12,
 IB_RNR_TIMER_000_96 = 13,
 IB_RNR_TIMER_001_28 = 14,
 IB_RNR_TIMER_001_92 = 15,
 IB_RNR_TIMER_002_56 = 16,
 IB_RNR_TIMER_003_84 = 17,
 IB_RNR_TIMER_005_12 = 18,
 IB_RNR_TIMER_007_68 = 19,
 IB_RNR_TIMER_010_24 = 20,
 IB_RNR_TIMER_015_36 = 21,
 IB_RNR_TIMER_020_48 = 22,
 IB_RNR_TIMER_030_72 = 23,
 IB_RNR_TIMER_040_96 = 24,
 IB_RNR_TIMER_061_44 = 25,
 IB_RNR_TIMER_081_92 = 26,
 IB_RNR_TIMER_122_88 = 27,
 IB_RNR_TIMER_163_84 = 28,
 IB_RNR_TIMER_245_76 = 29,
 IB_RNR_TIMER_327_68 = 30,
 IB_RNR_TIMER_491_52 = 31
};

enum ib_qp_attr_mask {
 IB_QP_STATE = 1,
 IB_QP_CUR_STATE = (1<<1),
 IB_QP_EN_SQD_ASYNC_NOTIFY = (1<<2),
 IB_QP_ACCESS_FLAGS = (1<<3),
 IB_QP_PKEY_INDEX = (1<<4),
 IB_QP_PORT = (1<<5),
 IB_QP_QKEY = (1<<6),
 IB_QP_AV = (1<<7),
 IB_QP_PATH_MTU = (1<<8),
 IB_QP_TIMEOUT = (1<<9),
 IB_QP_RETRY_CNT = (1<<10),
 IB_QP_RNR_RETRY = (1<<11),
 IB_QP_RQ_PSN = (1<<12),
 IB_QP_MAX_QP_RD_ATOMIC = (1<<13),
 IB_QP_ALT_PATH = (1<<14),
 IB_QP_MIN_RNR_TIMER = (1<<15),
 IB_QP_SQ_PSN = (1<<16),
 IB_QP_MAX_DEST_RD_ATOMIC = (1<<17),
 IB_QP_PATH_MIG_STATE = (1<<18),
 IB_QP_CAP = (1<<19),
 IB_QP_DEST_QPN = (1<<20),
 IB_QP_RESERVED1 = (1<<21),
 IB_QP_RESERVED2 = (1<<22),
 IB_QP_RESERVED3 = (1<<23),
 IB_QP_RESERVED4 = (1<<24),
 IB_QP_RATE_LIMIT = (1<<25),

 IB_QP_ATTR_STANDARD_BITS = ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (20)) * 0l)) : (int *)8))), (0) > (20), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (20))))),
};

enum ib_qp_state {
 IB_QPS_RESET,
 IB_QPS_INIT,
 IB_QPS_RTR,
 IB_QPS_RTS,
 IB_QPS_SQD,
 IB_QPS_SQE,
 IB_QPS_ERR
};

enum ib_mig_state {
 IB_MIG_MIGRATED,
 IB_MIG_REARM,
 IB_MIG_ARMED
};

enum ib_mw_type {
 IB_MW_TYPE_1 = 1,
 IB_MW_TYPE_2 = 2
};

struct ib_qp_attr {
 enum ib_qp_state qp_state;
 enum ib_qp_state cur_qp_state;
 enum ib_mtu path_mtu;
 enum ib_mig_state path_mig_state;
 u32 qkey;
 u32 rq_psn;
 u32 sq_psn;
 u32 dest_qp_num;
 int qp_access_flags;
 struct ib_qp_cap cap;
 struct rdma_ah_attr ah_attr;
 struct rdma_ah_attr alt_ah_attr;
 u16 pkey_index;
 u16 alt_pkey_index;
 u8 en_sqd_async_notify;
 u8 sq_draining;
 u8 max_rd_atomic;
 u8 max_dest_rd_atomic;
 u8 min_rnr_timer;
 u32 port_num;
 u8 timeout;
 u8 retry_cnt;
 u8 rnr_retry;
 u32 alt_port_num;
 u8 alt_timeout;
 u32 rate_limit;
 struct net_device *xmit_slave;
};

enum ib_wr_opcode {

 IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE,
 IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM,
 IB_WR_SEND = IB_UVERBS_WR_SEND,
 IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM,
 IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
 IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
 IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
 IB_WR_BIND_MW = IB_UVERBS_WR_BIND_MW,
 IB_WR_LSO = IB_UVERBS_WR_TSO,
 IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
 IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
 IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV,
 IB_WR_MASKED_ATOMIC_CMP_AND_SWP =
  IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
 IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
  IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
 IB_WR_FLUSH = IB_UVERBS_WR_FLUSH,
 IB_WR_ATOMIC_WRITE = IB_UVERBS_WR_ATOMIC_WRITE,


 IB_WR_REG_MR = 0x20,
 IB_WR_REG_MR_INTEGRITY,




 IB_WR_RESERVED1 = 0xf0,
 IB_WR_RESERVED2,
 IB_WR_RESERVED3,
 IB_WR_RESERVED4,
 IB_WR_RESERVED5,
 IB_WR_RESERVED6,
 IB_WR_RESERVED7,
 IB_WR_RESERVED8,
 IB_WR_RESERVED9,
 IB_WR_RESERVED10,
};

enum ib_send_flags {
 IB_SEND_FENCE = 1,
 IB_SEND_SIGNALED = (1<<1),
 IB_SEND_SOLICITED = (1<<2),
 IB_SEND_INLINE = (1<<3),
 IB_SEND_IP_CSUM = (1<<4),


 IB_SEND_RESERVED_START = (1 << 26),
 IB_SEND_RESERVED_END = (1 << 31),
};

struct ib_sge {
 u64 addr;
 u32 length;
 u32 lkey;
};

struct ib_cqe {
 void (*done)(struct ib_cq *cq, struct ib_wc *wc);
};

struct ib_send_wr {
 struct ib_send_wr *next;
 union {
  u64 wr_id;
  struct ib_cqe *wr_cqe;
 };
 struct ib_sge *sg_list;
 int num_sge;
 enum ib_wr_opcode opcode;
 int send_flags;
 union {
  __be32 imm_data;
  u32 invalidate_rkey;
 } ex;
};

struct ib_rdma_wr {
 struct ib_send_wr wr;
 u64 remote_addr;
 u32 rkey;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct ib_rdma_wr *rdma_wr(const struct ib_send_wr *wr)
{
 return ({ void *__mptr = (void *)(wr); _Static_assert(__builtin_types_compatible_p(typeof(*(wr)), typeof(((struct ib_rdma_wr *)0)->wr)) || __builtin_types_compatible_p(typeof(*(wr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct ib_rdma_wr *)(__mptr - __builtin_offsetof(struct ib_rdma_wr, wr))); });
}

struct ib_atomic_wr {
 struct ib_send_wr wr;
 u64 remote_addr;
 u64 compare_add;
 u64 swap;
 u64 compare_add_mask;
 u64 swap_mask;
 u32 rkey;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct ib_atomic_wr *atomic_wr(const struct ib_send_wr *wr)
{
 return ({ void *__mptr = (void *)(wr); _Static_assert(__builtin_types_compatible_p(typeof(*(wr)), typeof(((struct ib_atomic_wr *)0)->wr)) || __builtin_types_compatible_p(typeof(*(wr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct ib_atomic_wr *)(__mptr - __builtin_offsetof(struct ib_atomic_wr, wr))); });
}

struct ib_ud_wr {
 struct ib_send_wr wr;
 struct ib_ah *ah;
 void *header;
 int hlen;
 int mss;
 u32 remote_qpn;
 u32 remote_qkey;
 u16 pkey_index;
 u32 port_num;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct ib_ud_wr *ud_wr(const struct ib_send_wr *wr)
{
 return ({ void *__mptr = (void *)(wr); _Static_assert(__builtin_types_compatible_p(typeof(*(wr)), typeof(((struct ib_ud_wr *)0)->wr)) || __builtin_types_compatible_p(typeof(*(wr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct ib_ud_wr *)(__mptr - __builtin_offsetof(struct ib_ud_wr, wr))); });
}

struct ib_reg_wr {
 struct ib_send_wr wr;
 struct ib_mr *mr;
 u32 key;
 int access;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct ib_reg_wr *reg_wr(const struct ib_send_wr *wr)
{
 return ({ void *__mptr = (void *)(wr); _Static_assert(__builtin_types_compatible_p(typeof(*(wr)), typeof(((struct ib_reg_wr *)0)->wr)) || __builtin_types_compatible_p(typeof(*(wr)), typeof(void)), "pointer type mismatch in container_of()"); ((struct ib_reg_wr *)(__mptr - __builtin_offsetof(struct ib_reg_wr, wr))); });
}

struct ib_recv_wr {
 struct ib_recv_wr *next;
 union {
  u64 wr_id;
  struct ib_cqe *wr_cqe;
 };
 struct ib_sge *sg_list;
 int num_sge;
};

enum ib_access_flags {
 IB_ACCESS_LOCAL_WRITE = IB_UVERBS_ACCESS_LOCAL_WRITE,
 IB_ACCESS_REMOTE_WRITE = IB_UVERBS_ACCESS_REMOTE_WRITE,
 IB_ACCESS_REMOTE_READ = IB_UVERBS_ACCESS_REMOTE_READ,
 IB_ACCESS_REMOTE_ATOMIC = IB_UVERBS_ACCESS_REMOTE_ATOMIC,
 IB_ACCESS_MW_BIND = IB_UVERBS_ACCESS_MW_BIND,
 IB_ZERO_BASED = IB_UVERBS_ACCESS_ZERO_BASED,
 IB_ACCESS_ON_DEMAND = IB_UVERBS_ACCESS_ON_DEMAND,
 IB_ACCESS_HUGETLB = IB_UVERBS_ACCESS_HUGETLB,
 IB_ACCESS_RELAXED_ORDERING = IB_UVERBS_ACCESS_RELAXED_ORDERING,
 IB_ACCESS_FLUSH_GLOBAL = IB_UVERBS_ACCESS_FLUSH_GLOBAL,
 IB_ACCESS_FLUSH_PERSISTENT = IB_UVERBS_ACCESS_FLUSH_PERSISTENT,

 IB_ACCESS_OPTIONAL = IB_UVERBS_ACCESS_OPTIONAL_RANGE,
 IB_ACCESS_SUPPORTED =
  ((IB_ACCESS_FLUSH_PERSISTENT << 1) - 1) | IB_ACCESS_OPTIONAL,
};





enum ib_mr_rereg_flags {
 IB_MR_REREG_TRANS = 1,
 IB_MR_REREG_PD = (1<<1),
 IB_MR_REREG_ACCESS = (1<<2),
 IB_MR_REREG_SUPPORTED = ((IB_MR_REREG_ACCESS << 1) - 1)
};

struct ib_umem;

enum rdma_remove_reason {




 RDMA_REMOVE_DESTROY,

 RDMA_REMOVE_CLOSE,

 RDMA_REMOVE_DRIVER_REMOVE,

 RDMA_REMOVE_ABORT,

 RDMA_REMOVE_DRIVER_FAILURE,
};

struct ib_rdmacg_object {



};

struct ib_ucontext {
 struct ib_device *device;
 struct ib_uverbs_file *ufile;

 struct ib_rdmacg_object cg_obj;



 struct rdma_restrack_entry res;
 struct xarray mmap_xa;
};

struct ib_uobject {
 u64 user_handle;

 struct ib_uverbs_file *ufile;

 struct ib_ucontext *context;
 void *object;
 struct list_head list;
 struct ib_rdmacg_object cg_obj;
 int id;
 struct kref ref;
 atomic_t usecnt;
 struct callback_head rcu;

 const struct uverbs_api_object *uapi_object;
};

struct ib_udata {
 const void *inbuf;
 void *outbuf;
 size_t inlen;
 size_t outlen;
};

struct ib_pd {
 u32 local_dma_lkey;
 u32 flags;
 struct ib_device *device;
 struct ib_uobject *uobject;
 atomic_t usecnt;

 u32 unsafe_global_rkey;




 struct ib_mr *__internal_mr;
 struct rdma_restrack_entry res;
};

struct ib_xrcd {
 struct ib_device *device;
 atomic_t usecnt;
 struct inode *inode;
 struct rw_semaphore tgt_qps_rwsem;
 struct xarray tgt_qps;
};

struct ib_ah {
 struct ib_device *device;
 struct ib_pd *pd;
 struct ib_uobject *uobject;
 const struct ib_gid_attr *sgid_attr;
 enum rdma_ah_attr_type type;
};

typedef void (*ib_comp_handler)(struct ib_cq *cq, void *cq_context);

enum ib_poll_context {
 IB_POLL_SOFTIRQ,
 IB_POLL_WORKQUEUE,
 IB_POLL_UNBOUND_WORKQUEUE,
 IB_POLL_LAST_POOL_TYPE = IB_POLL_UNBOUND_WORKQUEUE,

 IB_POLL_DIRECT,
};

struct ib_cq {
 struct ib_device *device;
 struct ib_ucq_object *uobject;
 ib_comp_handler comp_handler;
 void (*event_handler)(struct ib_event *, void *);
 void *cq_context;
 int cqe;
 unsigned int cqe_used;
 atomic_t usecnt;
 enum ib_poll_context poll_ctx;
 struct ib_wc *wc;
 struct list_head pool_entry;
 union {
  struct irq_poll iop;
  struct work_struct work;
 };
 struct workqueue_struct *comp_wq;
 struct dim *dim;


 ktime_t timestamp;
 u8 interrupt:1;
 u8 shared:1;
 unsigned int comp_vector;




 struct rdma_restrack_entry res;
};

struct ib_srq {
 struct ib_device *device;
 struct ib_pd *pd;
 struct ib_usrq_object *uobject;
 void (*event_handler)(struct ib_event *, void *);
 void *srq_context;
 enum ib_srq_type srq_type;
 atomic_t usecnt;

 struct {
  struct ib_cq *cq;
  union {
   struct {
    struct ib_xrcd *xrcd;
    u32 srq_num;
   } xrc;
  };
 } ext;




 struct rdma_restrack_entry res;
};

enum ib_raw_packet_caps {




 IB_RAW_PACKET_CAP_CVLAN_STRIPPING =
  IB_UVERBS_RAW_PACKET_CAP_CVLAN_STRIPPING,



 IB_RAW_PACKET_CAP_SCATTER_FCS = IB_UVERBS_RAW_PACKET_CAP_SCATTER_FCS,

 IB_RAW_PACKET_CAP_IP_CSUM = IB_UVERBS_RAW_PACKET_CAP_IP_CSUM,




 IB_RAW_PACKET_CAP_DELAY_DROP = IB_UVERBS_RAW_PACKET_CAP_DELAY_DROP,
};

enum ib_wq_type {
 IB_WQT_RQ = IB_UVERBS_WQT_RQ,
};

enum ib_wq_state {
 IB_WQS_RESET,
 IB_WQS_RDY,
 IB_WQS_ERR
};

struct ib_wq {
 struct ib_device *device;
 struct ib_uwq_object *uobject;
 void *wq_context;
 void (*event_handler)(struct ib_event *, void *);
 struct ib_pd *pd;
 struct ib_cq *cq;
 u32 wq_num;
 enum ib_wq_state state;
 enum ib_wq_type wq_type;
 atomic_t usecnt;
};

enum ib_wq_flags {
 IB_WQ_FLAGS_CVLAN_STRIPPING = IB_UVERBS_WQ_FLAGS_CVLAN_STRIPPING,
 IB_WQ_FLAGS_SCATTER_FCS = IB_UVERBS_WQ_FLAGS_SCATTER_FCS,
 IB_WQ_FLAGS_DELAY_DROP = IB_UVERBS_WQ_FLAGS_DELAY_DROP,
 IB_WQ_FLAGS_PCI_WRITE_END_PADDING =
    IB_UVERBS_WQ_FLAGS_PCI_WRITE_END_PADDING,
};

struct ib_wq_init_attr {
 void *wq_context;
 enum ib_wq_type wq_type;
 u32 max_wr;
 u32 max_sge;
 struct ib_cq *cq;
 void (*event_handler)(struct ib_event *, void *);
 u32 create_flags;
};

enum ib_wq_attr_mask {
 IB_WQ_STATE = 1 << 0,
 IB_WQ_CUR_STATE = 1 << 1,
 IB_WQ_FLAGS = 1 << 2,
};

struct ib_wq_attr {
 enum ib_wq_state wq_state;
 enum ib_wq_state curr_wq_state;
 u32 flags;
 u32 flags_mask;
};

struct ib_rwq_ind_table {
 struct ib_device *device;
 struct ib_uobject *uobject;
 atomic_t usecnt;
 u32 ind_tbl_num;
 u32 log_ind_tbl_size;
 struct ib_wq **ind_tbl;
};

struct ib_rwq_ind_table_init_attr {
 u32 log_ind_tbl_size;

 struct ib_wq **ind_tbl;
};

enum port_pkey_state {
 IB_PORT_PKEY_NOT_VALID = 0,
 IB_PORT_PKEY_VALID = 1,
 IB_PORT_PKEY_LISTED = 2,
};

struct ib_qp_security;

struct ib_port_pkey {
 enum port_pkey_state state;
 u16 pkey_index;
 u32 port_num;
 struct list_head qp_list;
 struct list_head to_error_list;
 struct ib_qp_security *sec;
};

struct ib_ports_pkeys {
 struct ib_port_pkey main;
 struct ib_port_pkey alt;
};

struct ib_qp_security {
 struct ib_qp *qp;
 struct ib_device *dev;

 struct mutex mutex;
 struct ib_ports_pkeys *ports_pkeys;



 struct list_head shared_qp_list;
 void *security;
 bool destroying;
 atomic_t error_list_count;
 struct completion error_complete;
 int error_comps_pending;
};





struct ib_qp {
 struct ib_device *device;
 struct ib_pd *pd;
 struct ib_cq *send_cq;
 struct ib_cq *recv_cq;
 spinlock_t mr_lock;
 int mrs_used;
 struct list_head rdma_mrs;
 struct list_head sig_mrs;
 struct ib_srq *srq;
 struct completion srq_completion;
 struct ib_xrcd *xrcd;
 struct list_head xrcd_list;


 atomic_t usecnt;
 struct list_head open_list;
 struct ib_qp *real_qp;
 struct ib_uqp_object *uobject;
 void (*event_handler)(struct ib_event *, void *);
 void (*registered_event_handler)(struct ib_event *, void *);
 void *qp_context;

 const struct ib_gid_attr *av_sgid_attr;
 const struct ib_gid_attr *alt_path_sgid_attr;
 u32 qp_num;
 u32 max_write_sge;
 u32 max_read_sge;
 enum ib_qp_type qp_type;
 struct ib_rwq_ind_table *rwq_ind_tbl;
 struct ib_qp_security *qp_sec;
 u32 port;

 bool integrity_en;



 struct rdma_restrack_entry res;


 struct rdma_counter *counter;
};

struct ib_dm {
 struct ib_device *device;
 u32 length;
 u32 flags;
 struct ib_uobject *uobject;
 atomic_t usecnt;
};

struct ib_mr {
 struct ib_device *device;
 struct ib_pd *pd;
 u32 lkey;
 u32 rkey;
 u64 iova;
 u64 length;
 unsigned int page_size;
 enum ib_mr_type type;
 bool need_inval;
 union {
  struct ib_uobject *uobject;
  struct list_head qp_entry;
 };

 struct ib_dm *dm;
 struct ib_sig_attrs *sig_attrs;



 struct rdma_restrack_entry res;
};

struct ib_mw {
 struct ib_device *device;
 struct ib_pd *pd;
 struct ib_uobject *uobject;
 u32 rkey;
 enum ib_mw_type type;
};


enum ib_flow_attr_type {

 IB_FLOW_ATTR_NORMAL = 0x0,



 IB_FLOW_ATTR_ALL_DEFAULT = 0x1,



 IB_FLOW_ATTR_MC_DEFAULT = 0x2,

 IB_FLOW_ATTR_SNIFFER = 0x3
};


enum ib_flow_spec_type {

 IB_FLOW_SPEC_ETH = 0x20,
 IB_FLOW_SPEC_IB = 0x22,

 IB_FLOW_SPEC_IPV4 = 0x30,
 IB_FLOW_SPEC_IPV6 = 0x31,
 IB_FLOW_SPEC_ESP = 0x34,

 IB_FLOW_SPEC_TCP = 0x40,
 IB_FLOW_SPEC_UDP = 0x41,
 IB_FLOW_SPEC_VXLAN_TUNNEL = 0x50,
 IB_FLOW_SPEC_GRE = 0x51,
 IB_FLOW_SPEC_MPLS = 0x60,
 IB_FLOW_SPEC_INNER = 0x100,

 IB_FLOW_SPEC_ACTION_TAG = 0x1000,
 IB_FLOW_SPEC_ACTION_DROP = 0x1001,
 IB_FLOW_SPEC_ACTION_HANDLE = 0x1002,
 IB_FLOW_SPEC_ACTION_COUNT = 0x1003,
};



enum ib_flow_flags {
 IB_FLOW_ATTR_FLAGS_DONT_TRAP = 1UL << 1,
 IB_FLOW_ATTR_FLAGS_EGRESS = 1UL << 2,
 IB_FLOW_ATTR_FLAGS_RESERVED = 1UL << 3
};

struct ib_flow_eth_filter {
 u8 dst_mac[6];
 u8 src_mac[6];
 __be16 ether_type;
 __be16 vlan_tag;
};

struct ib_flow_spec_eth {
 u32 type;
 u16 size;
 struct ib_flow_eth_filter val;
 struct ib_flow_eth_filter mask;
};

struct ib_flow_ib_filter {
 __be16 dlid;
 __u8 sl;
};

struct ib_flow_spec_ib {
 u32 type;
 u16 size;
 struct ib_flow_ib_filter val;
 struct ib_flow_ib_filter mask;
};


enum ib_ipv4_flags {
 IB_IPV4_DONT_FRAG = 0x2,
 IB_IPV4_MORE_FRAG = 0x4

};

struct ib_flow_ipv4_filter {
 __be32 src_ip;
 __be32 dst_ip;
 u8 proto;
 u8 tos;
 u8 ttl;
 u8 flags;
};

struct ib_flow_spec_ipv4 {
 u32 type;
 u16 size;
 struct ib_flow_ipv4_filter val;
 struct ib_flow_ipv4_filter mask;
};

struct ib_flow_ipv6_filter {
 u8 src_ip[16];
 u8 dst_ip[16];
 __be32 flow_label;
 u8 next_hdr;
 u8 traffic_class;
 u8 hop_limit;
} __attribute__((__packed__));

struct ib_flow_spec_ipv6 {
 u32 type;
 u16 size;
 struct ib_flow_ipv6_filter val;
 struct ib_flow_ipv6_filter mask;
};

struct ib_flow_tcp_udp_filter {
 __be16 dst_port;
 __be16 src_port;
};

struct ib_flow_spec_tcp_udp {
 u32 type;
 u16 size;
 struct ib_flow_tcp_udp_filter val;
 struct ib_flow_tcp_udp_filter mask;
};

struct ib_flow_tunnel_filter {
 __be32 tunnel_id;
};




struct ib_flow_spec_tunnel {
 u32 type;
 u16 size;
 struct ib_flow_tunnel_filter val;
 struct ib_flow_tunnel_filter mask;
};

struct ib_flow_esp_filter {
 __be32 spi;
 __be32 seq;
};

struct ib_flow_spec_esp {
 u32 type;
 u16 size;
 struct ib_flow_esp_filter val;
 struct ib_flow_esp_filter mask;
};

struct ib_flow_gre_filter {
 __be16 c_ks_res0_ver;
 __be16 protocol;
 __be32 key;
};

struct ib_flow_spec_gre {
 u32 type;
 u16 size;
 struct ib_flow_gre_filter val;
 struct ib_flow_gre_filter mask;
};

struct ib_flow_mpls_filter {
 __be32 tag;
};

struct ib_flow_spec_mpls {
 u32 type;
 u16 size;
 struct ib_flow_mpls_filter val;
 struct ib_flow_mpls_filter mask;
};

struct ib_flow_spec_action_tag {
 enum ib_flow_spec_type type;
 u16 size;
 u32 tag_id;
};

struct ib_flow_spec_action_drop {
 enum ib_flow_spec_type type;
 u16 size;
};

struct ib_flow_spec_action_handle {
 enum ib_flow_spec_type type;
 u16 size;
 struct ib_flow_action *act;
};

enum ib_counters_description {
 IB_COUNTER_PACKETS,
 IB_COUNTER_BYTES,
};

struct ib_flow_spec_action_count {
 enum ib_flow_spec_type type;
 u16 size;
 struct ib_counters *counters;
};

union ib_flow_spec {
 struct {
  u32 type;
  u16 size;
 };
 struct ib_flow_spec_eth eth;
 struct ib_flow_spec_ib ib;
 struct ib_flow_spec_ipv4 ipv4;
 struct ib_flow_spec_tcp_udp tcp_udp;
 struct ib_flow_spec_ipv6 ipv6;
 struct ib_flow_spec_tunnel tunnel;
 struct ib_flow_spec_esp esp;
 struct ib_flow_spec_gre gre;
 struct ib_flow_spec_mpls mpls;
 struct ib_flow_spec_action_tag flow_tag;
 struct ib_flow_spec_action_drop drop;
 struct ib_flow_spec_action_handle action;
 struct ib_flow_spec_action_count flow_count;
};

struct ib_flow_attr {
 enum ib_flow_attr_type type;
 u16 size;
 u16 priority;
 u32 flags;
 u8 num_of_specs;
 u32 port;
 union ib_flow_spec flows[];
};

struct ib_flow {
 struct ib_qp *qp;
 struct ib_device *device;
 struct ib_uobject *uobject;
};

enum ib_flow_action_type {
 IB_FLOW_ACTION_UNSPECIFIED,
 IB_FLOW_ACTION_ESP = 1,
};

struct ib_flow_action_attrs_esp_keymats {
 enum ib_uverbs_flow_action_esp_keymat protocol;
 union {
  struct ib_uverbs_flow_action_esp_keymat_aes_gcm aes_gcm;
 } keymat;
};

struct ib_flow_action_attrs_esp_replays {
 enum ib_uverbs_flow_action_esp_replay protocol;
 union {
  struct ib_uverbs_flow_action_esp_replay_bmp bmp;
 } replay;
};

enum ib_flow_action_attrs_esp_flags {






 IB_FLOW_ACTION_ESP_FLAGS_ESN_TRIGGERED = 1ULL << 32,
 IB_FLOW_ACTION_ESP_FLAGS_MOD_ESP_ATTRS = 1ULL << 33,
};

struct ib_flow_spec_list {
 struct ib_flow_spec_list *next;
 union ib_flow_spec spec;
};

struct ib_flow_action_attrs_esp {
 struct ib_flow_action_attrs_esp_keymats *keymat;
 struct ib_flow_action_attrs_esp_replays *replay;
 struct ib_flow_spec_list *encap;



 u32 esn;
 u32 spi;
 u32 seq;
 u32 tfc_pad;

 u64 flags;
 u64 hard_limit_pkts;
};

struct ib_flow_action {
 struct ib_device *device;
 struct ib_uobject *uobject;
 enum ib_flow_action_type type;
 atomic_t usecnt;
};

struct ib_mad;

enum ib_process_mad_flags {
 IB_MAD_IGNORE_MKEY = 1,
 IB_MAD_IGNORE_BKEY = 2,
 IB_MAD_IGNORE_ALL = IB_MAD_IGNORE_MKEY | IB_MAD_IGNORE_BKEY
};

enum ib_mad_result {
 IB_MAD_RESULT_FAILURE = 0,
 IB_MAD_RESULT_SUCCESS = 1 << 0,
 IB_MAD_RESULT_REPLY = 1 << 1,
 IB_MAD_RESULT_CONSUMED = 1 << 2
};

struct ib_port_cache {
 u64 subnet_prefix;
 struct ib_pkey_cache *pkey;
 struct ib_gid_table *gid;
 u8 lmc;
 enum ib_port_state port_state;
};

struct ib_port_immutable {
 int pkey_tbl_len;
 int gid_tbl_len;
 u32 core_cap_flags;
 u32 max_mad_size;
};

struct ib_port_data {
 struct ib_device *ib_dev;

 struct ib_port_immutable immutable;

 spinlock_t pkey_list_lock;

 spinlock_t netdev_lock;

 struct list_head pkey_list;

 struct ib_port_cache cache;

 struct net_device *netdev;
 netdevice_tracker netdev_tracker;
 struct hlist_node ndev_hash_link;
 struct rdma_port_counter port_counter;
 struct ib_port *sysfs;
};


enum rdma_netdev_t {
 RDMA_NETDEV_OPA_VNIC,
 RDMA_NETDEV_IPOIB,
};





struct rdma_netdev {
 void *clnt_priv;
 struct ib_device *hca;
 u32 port_num;
 int mtu;






 void (*free_rdma_netdev)(struct net_device *netdev);


 void (*set_id)(struct net_device *netdev, int id);

 int (*send)(struct net_device *dev, struct sk_buff *skb,
      struct ib_ah *address, u32 dqpn);

 int (*attach_mcast)(struct net_device *dev, struct ib_device *hca,
       union ib_gid *gid, u16 mlid,
       int set_qkey, u32 qkey);
 int (*detach_mcast)(struct net_device *dev, struct ib_device *hca,
       union ib_gid *gid, u16 mlid);

 void (*tx_timeout)(struct net_device *dev, unsigned int txqueue);
};

struct rdma_netdev_alloc_params {
 size_t sizeof_priv;
 unsigned int txqs;
 unsigned int rxqs;
 void *param;

 int (*initialize_rdma_netdev)(struct ib_device *device, u32 port_num,
          struct net_device *netdev, void *param);
};

struct ib_odp_counters {
 atomic64_t faults;
 atomic64_t invalidations;
 atomic64_t prefetch;
};

struct ib_counters {
 struct ib_device *device;
 struct ib_uobject *uobject;

 atomic_t usecnt;
};

struct ib_counters_read_attr {
 u64 *counters_buff;
 u32 ncounters;
 u32 flags;
};

struct uverbs_attr_bundle;
struct iw_cm_id;
struct iw_cm_conn_param;
# 2301 "../include/rdma/ib_verbs.h"
struct rdma_user_mmap_entry {
 struct kref ref;
 struct ib_ucontext *ucontext;
 unsigned long start_pgoff;
 size_t npages;
 bool driver_removed;
};


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64
rdma_user_mmap_get_offset(const struct rdma_user_mmap_entry *entry)
{
 return (u64)entry->start_pgoff << 14;
}






struct ib_device_ops {
 struct module *owner;
 enum rdma_driver_id driver_id;
 u32 uverbs_abi_ver;
 unsigned int uverbs_no_driver_id_binding:1;






 const struct attribute_group *device_group;
 const struct attribute_group **port_groups;

 int (*post_send)(struct ib_qp *qp, const struct ib_send_wr *send_wr,
    const struct ib_send_wr **bad_send_wr);
 int (*post_recv)(struct ib_qp *qp, const struct ib_recv_wr *recv_wr,
    const struct ib_recv_wr **bad_recv_wr);
 void (*drain_rq)(struct ib_qp *qp);
 void (*drain_sq)(struct ib_qp *qp);
 int (*poll_cq)(struct ib_cq *cq, int num_entries, struct ib_wc *wc);
 int (*peek_cq)(struct ib_cq *cq, int wc_cnt);
 int (*req_notify_cq)(struct ib_cq *cq, enum ib_cq_notify_flags flags);
 int (*post_srq_recv)(struct ib_srq *srq,
        const struct ib_recv_wr *recv_wr,
        const struct ib_recv_wr **bad_recv_wr);
 int (*process_mad)(struct ib_device *device, int process_mad_flags,
      u32 port_num, const struct ib_wc *in_wc,
      const struct ib_grh *in_grh,
      const struct ib_mad *in_mad, struct ib_mad *out_mad,
      size_t *out_mad_size, u16 *out_mad_pkey_index);
 int (*query_device)(struct ib_device *device,
       struct ib_device_attr *device_attr,
       struct ib_udata *udata);
 int (*modify_device)(struct ib_device *device, int device_modify_mask,
        struct ib_device_modify *device_modify);
 void (*get_dev_fw_str)(struct ib_device *device, char *str);
 const struct cpumask *(*get_vector_affinity)(struct ib_device *ibdev,
           int comp_vector);
 int (*query_port)(struct ib_device *device, u32 port_num,
     struct ib_port_attr *port_attr);
 int (*modify_port)(struct ib_device *device, u32 port_num,
      int port_modify_mask,
      struct ib_port_modify *port_modify);






 int (*get_port_immutable)(struct ib_device *device, u32 port_num,
      struct ib_port_immutable *immutable);
 enum rdma_link_layer (*get_link_layer)(struct ib_device *device,
            u32 port_num);
# 2383 "../include/rdma/ib_verbs.h"
 struct net_device *(*get_netdev)(struct ib_device *device,
      u32 port_num);






 struct net_device *(*alloc_rdma_netdev)(
  struct ib_device *device, u32 port_num, enum rdma_netdev_t type,
  const char *name, unsigned char name_assign_type,
  void (*setup)(struct net_device *));

 int (*rdma_netdev_get_params)(struct ib_device *device, u32 port_num,
          enum rdma_netdev_t type,
          struct rdma_netdev_alloc_params *params);





 int (*query_gid)(struct ib_device *device, u32 port_num, int index,
    union ib_gid *gid);
# 2419 "../include/rdma/ib_verbs.h"
 int (*add_gid)(const struct ib_gid_attr *attr, void **context);
# 2428 "../include/rdma/ib_verbs.h"
 int (*del_gid)(const struct ib_gid_attr *attr, void **context);
 int (*query_pkey)(struct ib_device *device, u32 port_num, u16 index,
     u16 *pkey);
 int (*alloc_ucontext)(struct ib_ucontext *context,
         struct ib_udata *udata);
 void (*dealloc_ucontext)(struct ib_ucontext *context);
 int (*mmap)(struct ib_ucontext *context, struct vm_area_struct *vma);






 void (*mmap_free)(struct rdma_user_mmap_entry *entry);
 void (*disassociate_ucontext)(struct ib_ucontext *ibcontext);
 int (*alloc_pd)(struct ib_pd *pd, struct ib_udata *udata);
 int (*dealloc_pd)(struct ib_pd *pd, struct ib_udata *udata);
 int (*create_ah)(struct ib_ah *ah, struct rdma_ah_init_attr *attr,
    struct ib_udata *udata);
 int (*create_user_ah)(struct ib_ah *ah, struct rdma_ah_init_attr *attr,
         struct ib_udata *udata);
 int (*modify_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
 int (*query_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
 int (*destroy_ah)(struct ib_ah *ah, u32 flags);
 int (*create_srq)(struct ib_srq *srq,
     struct ib_srq_init_attr *srq_init_attr,
     struct ib_udata *udata);
 int (*modify_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr,
     enum ib_srq_attr_mask srq_attr_mask,
     struct ib_udata *udata);
 int (*query_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr);
 int (*destroy_srq)(struct ib_srq *srq, struct ib_udata *udata);
 int (*create_qp)(struct ib_qp *qp, struct ib_qp_init_attr *qp_init_attr,
    struct ib_udata *udata);
 int (*modify_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
    int qp_attr_mask, struct ib_udata *udata);
 int (*query_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
   int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
 int (*destroy_qp)(struct ib_qp *qp, struct ib_udata *udata);
 int (*create_cq)(struct ib_cq *cq, const struct ib_cq_init_attr *attr,
    struct uverbs_attr_bundle *attrs);
 int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
 int (*destroy_cq)(struct ib_cq *cq, struct ib_udata *udata);
 int (*resize_cq)(struct ib_cq *cq, int cqe, struct ib_udata *udata);
 struct ib_mr *(*get_dma_mr)(struct ib_pd *pd, int mr_access_flags);
 struct ib_mr *(*reg_user_mr)(struct ib_pd *pd, u64 start, u64 length,
         u64 virt_addr, int mr_access_flags,
         struct ib_udata *udata);
 struct ib_mr *(*reg_user_mr_dmabuf)(struct ib_pd *pd, u64 offset,
         u64 length, u64 virt_addr, int fd,
         int mr_access_flags,
         struct ib_udata *udata);
 struct ib_mr *(*rereg_user_mr)(struct ib_mr *mr, int flags, u64 start,
           u64 length, u64 virt_addr,
           int mr_access_flags, struct ib_pd *pd,
           struct ib_udata *udata);
 int (*dereg_mr)(struct ib_mr *mr, struct ib_udata *udata);
 struct ib_mr *(*alloc_mr)(struct ib_pd *pd, enum ib_mr_type mr_type,
      u32 max_num_sg);
 struct ib_mr *(*alloc_mr_integrity)(struct ib_pd *pd,
         u32 max_num_data_sg,
         u32 max_num_meta_sg);
 int (*advise_mr)(struct ib_pd *pd,
    enum ib_uverbs_advise_mr_advice advice, u32 flags,
    struct ib_sge *sg_list, u32 num_sge,
    struct uverbs_attr_bundle *attrs);
# 2502 "../include/rdma/ib_verbs.h"
 int (*map_mr_sg)(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
    unsigned int *sg_offset);
 int (*check_mr_status)(struct ib_mr *mr, u32 check_mask,
          struct ib_mr_status *mr_status);
 int (*alloc_mw)(struct ib_mw *mw, struct ib_udata *udata);
 int (*dealloc_mw)(struct ib_mw *mw);
 int (*attach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
 int (*detach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
 int (*alloc_xrcd)(struct ib_xrcd *xrcd, struct ib_udata *udata);
 int (*dealloc_xrcd)(struct ib_xrcd *xrcd, struct ib_udata *udata);
 struct ib_flow *(*create_flow)(struct ib_qp *qp,
           struct ib_flow_attr *flow_attr,
           struct ib_udata *udata);
 int (*destroy_flow)(struct ib_flow *flow_id);
 int (*destroy_flow_action)(struct ib_flow_action *action);
 int (*set_vf_link_state)(struct ib_device *device, int vf, u32 port,
     int state);
 int (*get_vf_config)(struct ib_device *device, int vf, u32 port,
        struct ifla_vf_info *ivf);
 int (*get_vf_stats)(struct ib_device *device, int vf, u32 port,
       struct ifla_vf_stats *stats);
 int (*get_vf_guid)(struct ib_device *device, int vf, u32 port,
       struct ifla_vf_guid *node_guid,
       struct ifla_vf_guid *port_guid);
 int (*set_vf_guid)(struct ib_device *device, int vf, u32 port, u64 guid,
      int type);
 struct ib_wq *(*create_wq)(struct ib_pd *pd,
       struct ib_wq_init_attr *init_attr,
       struct ib_udata *udata);
 int (*destroy_wq)(struct ib_wq *wq, struct ib_udata *udata);
 int (*modify_wq)(struct ib_wq *wq, struct ib_wq_attr *attr,
    u32 wq_attr_mask, struct ib_udata *udata);
 int (*create_rwq_ind_table)(struct ib_rwq_ind_table *ib_rwq_ind_table,
        struct ib_rwq_ind_table_init_attr *init_attr,
        struct ib_udata *udata);
 int (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table);
 struct ib_dm *(*alloc_dm)(struct ib_device *device,
      struct ib_ucontext *context,
      struct ib_dm_alloc_attr *attr,
      struct uverbs_attr_bundle *attrs);
 int (*dealloc_dm)(struct ib_dm *dm, struct uverbs_attr_bundle *attrs);
 struct ib_mr *(*reg_dm_mr)(struct ib_pd *pd, struct ib_dm *dm,
       struct ib_dm_mr_attr *attr,
       struct uverbs_attr_bundle *attrs);
 int (*create_counters)(struct ib_counters *counters,
          struct uverbs_attr_bundle *attrs);
 int (*destroy_counters)(struct ib_counters *counters);
 int (*read_counters)(struct ib_counters *counters,
        struct ib_counters_read_attr *counters_read_attr,
        struct uverbs_attr_bundle *attrs);
 int (*map_mr_sg_pi)(struct ib_mr *mr, struct scatterlist *data_sg,
       int data_sg_nents, unsigned int *data_sg_offset,
       struct scatterlist *meta_sg, int meta_sg_nents,
       unsigned int *meta_sg_offset);







 struct rdma_hw_stats *(*alloc_hw_device_stats)(struct ib_device *device);
 struct rdma_hw_stats *(*alloc_hw_port_stats)(struct ib_device *device,
           u32 port_num);
# 2578 "../include/rdma/ib_verbs.h"
 int (*get_hw_stats)(struct ib_device *device,
       struct rdma_hw_stats *stats, u32 port, int index);






 int (*modify_hw_stat)(struct ib_device *device, u32 port,
         unsigned int counter_index, bool enable);



 int (*fill_res_mr_entry)(struct sk_buff *msg, struct ib_mr *ibmr);
 int (*fill_res_mr_entry_raw)(struct sk_buff *msg, struct ib_mr *ibmr);
 int (*fill_res_cq_entry)(struct sk_buff *msg, struct ib_cq *ibcq);
 int (*fill_res_cq_entry_raw)(struct sk_buff *msg, struct ib_cq *ibcq);
 int (*fill_res_qp_entry)(struct sk_buff *msg, struct ib_qp *ibqp);
 int (*fill_res_qp_entry_raw)(struct sk_buff *msg, struct ib_qp *ibqp);
 int (*fill_res_cm_id_entry)(struct sk_buff *msg, struct rdma_cm_id *id);
 int (*fill_res_srq_entry)(struct sk_buff *msg, struct ib_srq *ib_srq);
 int (*fill_res_srq_entry_raw)(struct sk_buff *msg, struct ib_srq *ib_srq);






 int (*enable_driver)(struct ib_device *dev);



 void (*dealloc_driver)(struct ib_device *dev);


 void (*iw_add_ref)(struct ib_qp *qp);
 void (*iw_rem_ref)(struct ib_qp *qp);
 struct ib_qp *(*iw_get_qp)(struct ib_device *device, int qpn);
 int (*iw_connect)(struct iw_cm_id *cm_id,
     struct iw_cm_conn_param *conn_param);
 int (*iw_accept)(struct iw_cm_id *cm_id,
    struct iw_cm_conn_param *conn_param);
 int (*iw_reject)(struct iw_cm_id *cm_id, const void *pdata,
    u8 pdata_len);
 int (*iw_create_listen)(struct iw_cm_id *cm_id, int backlog);
 int (*iw_destroy_listen)(struct iw_cm_id *cm_id);





 int (*counter_bind_qp)(struct rdma_counter *counter, struct ib_qp *qp);




 int (*counter_unbind_qp)(struct ib_qp *qp);



 int (*counter_dealloc)(struct rdma_counter *counter);




 struct rdma_hw_stats *(*counter_alloc_stats)(
  struct rdma_counter *counter);



 int (*counter_update_stats)(struct rdma_counter *counter);





 int (*fill_stat_mr_entry)(struct sk_buff *msg, struct ib_mr *ibmr);


 int (*query_ucontext)(struct ib_ucontext *context,
         struct uverbs_attr_bundle *attrs);





 int (*get_numa_node)(struct ib_device *dev);




 struct ib_device *(*add_sub_dev)(struct ib_device *parent,
      enum rdma_nl_dev_type type,
      const char *name);




 void (*del_sub_dev)(struct ib_device *sub_dev);

 size_t size_ib_ah;
 size_t size_ib_counters;
 size_t size_ib_cq;
 size_t size_ib_mw;
 size_t size_ib_pd;
 size_t size_ib_qp;
 size_t size_ib_rwq_ind_table;
 size_t size_ib_srq;
 size_t size_ib_ucontext;
 size_t size_ib_xrcd;
};

struct ib_core_device {



 struct device dev;
 possible_net_t rdma_net;
 struct kobject *ports_kobj;
 struct list_head port_list;
 struct ib_device *owner;
};

struct rdma_restrack_root;
struct ib_device {

 struct device *dma_device;
 struct ib_device_ops ops;
 char name[64];
 struct callback_head callback_head;

 struct list_head event_handler_list;

 struct rw_semaphore event_handler_rwsem;


 spinlock_t qp_open_list_lock;

 struct rw_semaphore client_data_rwsem;
 struct xarray client_data;
 struct mutex unregistration_lock;


 rwlock_t cache_lock;



 struct ib_port_data *port_data;

 int num_comp_vectors;

 union {
  struct device dev;
  struct ib_core_device coredev;
 };






 const struct attribute_group *groups[4];

 u64 uverbs_cmd_mask;

 char node_desc[64];
 __be64 node_guid;
 u32 local_dma_lkey;
 u16 is_switch:1;

 u16 kverbs_provider:1;

 u16 use_cq_dim:1;
 u8 node_type;
 u32 phys_port_cnt;
 struct ib_device_attr attrs;
 struct hw_stats_device_data *hw_stats_data;





 u32 index;

 spinlock_t cq_pools_lock;
 struct list_head cq_pools[IB_POLL_LAST_POOL_TYPE + 1];

 struct rdma_restrack_root *res;

 const struct uapi_definition *driver_def;





 refcount_t refcount;
 struct completion unreg_completion;
 struct work_struct unregistration_work;

 const struct rdma_link_ops *link_ops;


 struct mutex compat_devs_mutex;

 struct xarray compat_devs;


 char iw_ifname[16];
 u32 iw_driver_flags;
 u32 lag_flags;


 struct mutex subdev_lock;
 struct list_head subdev_list_head;


 enum rdma_nl_dev_type type;
 struct ib_device *parent;
 struct list_head subdev_list;

 enum rdma_nl_name_assign_type name_assign_type;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *rdma_zalloc_obj(struct ib_device *dev, size_t size,
        gfp_t gfp, bool is_numa_aware)
{
 if (is_numa_aware && dev->ops.get_numa_node)
  return ({ ; ({ struct alloc_tag * __attribute__((__unused__)) _old = ((void *)0); typeof(kmalloc_node_noprof(size, (gfp)|(( gfp_t)((((1UL))) << (___GFP_ZERO_BIT))), dev->ops.get_numa_node(dev))) _res = kmalloc_node_noprof(size, (gfp)|(( gfp_t)((((1UL))) << (___GFP_ZERO_BIT))), dev->ops.get_numa_node(dev)); do {} while (0); _res; }); });

 return ({ ; ({ struct alloc_tag * __attribute__((__unused__)) _old = ((void *)0); typeof(kzalloc_noprof(size, gfp)) _res = kzalloc_noprof(size, gfp); do {} while (0); _res; }); });
}

struct ib_client_nl_info;
struct ib_client {
 const char *name;
 int (*add)(struct ib_device *ibdev);
 void (*remove)(struct ib_device *, void *client_data);
 void (*rename)(struct ib_device *dev, void *client_data);
 int (*get_nl_info)(struct ib_device *ibdev, void *client_data,
      struct ib_client_nl_info *res);
 int (*get_global_nl_info)(struct ib_client_nl_info *res);
# 2835 "../include/rdma/ib_verbs.h"
 struct net_device *(*get_net_dev_by_params)(
   struct ib_device *dev,
   u32 port,
   u16 pkey,
   const union ib_gid *gid,
   const struct sockaddr *addr,
   void *client_data);

 refcount_t uses;
 struct completion uses_zero;
 u32 client_id;


 u8 no_kverbs_req:1;
};







struct ib_block_iter {

 struct scatterlist *__sg;
 dma_addr_t __dma_addr;
 size_t __sg_numblocks;
 unsigned int __sg_nents;
 unsigned int __sg_advance;
 unsigned int __pg_bit;
};

struct ib_device *_ib_alloc_device(size_t size);






void ib_dealloc_device(struct ib_device *device);

void ib_get_device_fw_str(struct ib_device *device, char *str);

int ib_register_device(struct ib_device *device, const char *name,
         struct device *dma_device);
void ib_unregister_device(struct ib_device *device);
void ib_unregister_driver(enum rdma_driver_id driver_id);
void ib_unregister_device_and_put(struct ib_device *device);
void ib_unregister_device_queued(struct ib_device *ib_dev);

int ib_register_client (struct ib_client *client);
void ib_unregister_client(struct ib_client *client);

void __rdma_block_iter_start(struct ib_block_iter *biter,
        struct scatterlist *sglist,
        unsigned int nents,
        unsigned long pgsz);
bool __rdma_block_iter_next(struct ib_block_iter *biter);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) dma_addr_t
rdma_block_iter_dma_address(struct ib_block_iter *biter)
{
 return biter->__dma_addr & ~(((((1ULL))) << (biter->__pg_bit)) - 1);
}
# 2930 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ib_get_client_data(struct ib_device *device,
           struct ib_client *client)
{
 return xa_load(&device->client_data, client->client_id);
}
void ib_set_client_data(struct ib_device *device, struct ib_client *client,
    void *data);
void ib_set_device_ops(struct ib_device *device,
         const struct ib_device_ops *ops);

int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
        unsigned long pfn, unsigned long size, pgprot_t prot,
        struct rdma_user_mmap_entry *entry);
int rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext,
    struct rdma_user_mmap_entry *entry,
    size_t length);
int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext,
          struct rdma_user_mmap_entry *entry,
          size_t length, u32 min_pgoff,
          u32 max_pgoff);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
rdma_user_mmap_entry_insert_exact(struct ib_ucontext *ucontext,
      struct rdma_user_mmap_entry *entry,
      size_t length, u32 pgoff)
{
 return rdma_user_mmap_entry_insert_range(ucontext, entry, length, pgoff,
       pgoff);
}

struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get_pgoff(struct ib_ucontext *ucontext,
          unsigned long pgoff);
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
    struct vm_area_struct *vma);
void rdma_user_mmap_entry_put(struct rdma_user_mmap_entry *entry);

void rdma_user_mmap_entry_remove(struct rdma_user_mmap_entry *entry);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_copy_from_udata(void *dest, struct ib_udata *udata, size_t len)
{
 return copy_from_user(dest, udata->inbuf, len) ? -14 : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_copy_to_udata(struct ib_udata *udata, void *src, size_t len)
{
 return copy_to_user(udata->outbuf, src, len) ? -14 : 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_is_buffer_cleared(const void *p,
     size_t len)
{
 bool ret;
 u8 *buf;

 if (len > ((unsigned short)~0U))
  return false;

 buf = memdup_user(p, len);
 if (IS_ERR(buf))
  return false;

 ret = !memchr_inv(buf, 0, len);
 kfree(buf);
 return ret;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_is_udata_cleared(struct ib_udata *udata,
           size_t offset,
           size_t len)
{
 return ib_is_buffer_cleared(udata->inbuf + offset, len);
}
# 3020 "../include/rdma/ib_verbs.h"
bool ib_modify_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state next_state,
   enum ib_qp_type type, enum ib_qp_attr_mask mask);

void ib_register_event_handler(struct ib_event_handler *event_handler);
void ib_unregister_event_handler(struct ib_event_handler *event_handler);
void ib_dispatch_event(const struct ib_event *event);

int ib_query_port(struct ib_device *device,
    u32 port_num, struct ib_port_attr *port_attr);

enum rdma_link_layer rdma_port_get_link_layer(struct ib_device *device,
            u32 port_num);
# 3042 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_switch(const struct ib_device *device)
{
 return device->is_switch;
}
# 3055 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 rdma_start_port(const struct ib_device *device)
{
 return rdma_cap_ib_switch(device) ? 0 : 1;
}
# 3079 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 rdma_end_port(const struct ib_device *device)
{
 return rdma_cap_ib_switch(device) ? 0 : device->phys_port_cnt;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rdma_is_port_valid(const struct ib_device *device,
         unsigned int port)
{
 return (port >= rdma_start_port(device) &&
  port <= rdma_end_port(device));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_is_grh_required(const struct ib_device *device,
     u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        (0x00008000 | 0x00200000 | 0x00800000);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_ib(const struct ib_device *device,
        u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00100000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_roce(const struct ib_device *device,
          u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        (0x00200000 | 0x00800000);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_roce_udp_encap(const struct ib_device *device,
      u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00800000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_roce_eth_encap(const struct ib_device *device,
      u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00200000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_iwarp(const struct ib_device *device,
           u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00400000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_ib_or_roce(const struct ib_device *device,
       u32 port_num)
{
 return rdma_protocol_ib(device, port_num) ||
  rdma_protocol_roce(device, port_num);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_raw_packet(const struct ib_device *device,
         u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x01000000;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_protocol_usnic(const struct ib_device *device,
           u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x02000000;
}
# 3166 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_mad(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00000001;
}
# 3191 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_opa_mad(struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
  0x00000020;
}
# 3217 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_smi(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00000002;
}
# 3238 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_cm(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00000004;
}
# 3256 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_iw_cm(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00000008;
}
# 3277 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_sa(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00000010;
}
# 3300 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_ib_mcast(const struct ib_device *device,
         u32 port_num)
{
 return rdma_cap_ib_sa(device, port_num);
}
# 3319 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_af_ib(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00001000;
}
# 3341 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_eth_ah(const struct ib_device *device, u32 port_num)
{
 return device->port_data[port_num].immutable.core_cap_flags &
        0x00002000;
}
# 3356 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_opa_ah(struct ib_device *device, u32 port_num)
{
 return (device->port_data[port_num].immutable.core_cap_flags &
  0x00004000) == 0x00004000;
}
# 3374 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t rdma_max_mad_size(const struct ib_device *device,
           u32 port_num)
{
 return device->port_data[port_num].immutable.max_mad_size;
}
# 3393 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_roce_gid_table(const struct ib_device *device,
        u32 port_num)
{
 return rdma_protocol_roce(device, port_num) &&
  device->ops.add_gid && device->ops.del_gid;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_cap_read_inv(struct ib_device *dev, u32 port_num)
{




 return rdma_protocol_iwarp(dev, port_num);
}
# 3419 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_core_cap_opa_port(struct ib_device *device,
       u32 port_num)
{
 return (device->port_data[port_num].immutable.core_cap_flags &
  ((0x00100000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000010 | 0x00001000) | 0x00000020)) == ((0x00100000 | 0x00000001 | 0x00000002 | 0x00000004 | 0x00000010 | 0x00001000) | 0x00000020);
}
# 3435 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rdma_mtu_enum_to_int(struct ib_device *device, u32 port,
           int mtu)
{
 if (rdma_core_cap_opa_port(device, port))
  return opa_mtu_enum_to_int((enum opa_mtu)mtu);
 else
  return ib_mtu_enum_to_int((enum ib_mtu)mtu);
}
# 3452 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int rdma_mtu_from_attr(struct ib_device *device, u32 port,
         struct ib_port_attr *attr)
{
 if (rdma_core_cap_opa_port(device, port))
  return attr->phys_mtu;
 else
  return ib_mtu_enum_to_int(attr->max_mtu);
}

int ib_set_vf_link_state(struct ib_device *device, int vf, u32 port,
    int state);
int ib_get_vf_config(struct ib_device *device, int vf, u32 port,
       struct ifla_vf_info *info);
int ib_get_vf_stats(struct ib_device *device, int vf, u32 port,
      struct ifla_vf_stats *stats);
int ib_get_vf_guid(struct ib_device *device, int vf, u32 port,
      struct ifla_vf_guid *node_guid,
      struct ifla_vf_guid *port_guid);
int ib_set_vf_guid(struct ib_device *device, int vf, u32 port, u64 guid,
     int type);

int ib_query_pkey(struct ib_device *device,
    u32 port_num, u16 index, u16 *pkey);

int ib_modify_device(struct ib_device *device,
       int device_modify_mask,
       struct ib_device_modify *device_modify);

int ib_modify_port(struct ib_device *device,
     u32 port_num, int port_modify_mask,
     struct ib_port_modify *port_modify);

int ib_find_gid(struct ib_device *device, union ib_gid *gid,
  u32 *port_num, u16 *index);

int ib_find_pkey(struct ib_device *device,
   u32 port_num, u16 pkey, u16 *index);

enum ib_pd_flags {
# 3500 "../include/rdma/ib_verbs.h"
 IB_PD_UNSAFE_GLOBAL_RKEY = 0x01,
};

struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
  const char *caller);
# 3520 "../include/rdma/ib_verbs.h"
int ib_dealloc_pd_user(struct ib_pd *pd, struct ib_udata *udata);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dealloc_pd(struct ib_pd *pd)
{
 int ret = ib_dealloc_pd_user(pd, ((void *)0));

 ({ bool __ret_do_once = !!(ret); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/rdma/ib_verbs.h", 3532, 9, "Destroy of kernel PD shouldn't fail"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
}

enum rdma_create_ah_flags {

 RDMA_CREATE_AH_SLEEPABLE = ((((1UL))) << (0)),
};
# 3549 "../include/rdma/ib_verbs.h"
struct ib_ah *rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr,
        u32 flags);
# 3564 "../include/rdma/ib_verbs.h"
struct ib_ah *rdma_create_user_ah(struct ib_pd *pd,
      struct rdma_ah_attr *ah_attr,
      struct ib_udata *udata);
# 3575 "../include/rdma/ib_verbs.h"
int ib_get_gids_from_rdma_hdr(const union rdma_network_hdr *hdr,
         enum rdma_network_type net_type,
         union ib_gid *sgid, union ib_gid *dgid);





int ib_get_rdma_header_version(const union rdma_network_hdr *hdr);
# 3603 "../include/rdma/ib_verbs.h"
int ib_init_ah_attr_from_wc(struct ib_device *device, u32 port_num,
       const struct ib_wc *wc, const struct ib_grh *grh,
       struct rdma_ah_attr *ah_attr);
# 3619 "../include/rdma/ib_verbs.h"
struct ib_ah *ib_create_ah_from_wc(struct ib_pd *pd, const struct ib_wc *wc,
       const struct ib_grh *grh, u32 port_num);
# 3629 "../include/rdma/ib_verbs.h"
int rdma_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
# 3638 "../include/rdma/ib_verbs.h"
int rdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);

enum rdma_destroy_ah_flags {

 RDMA_DESTROY_AH_SLEEPABLE = ((((1UL))) << (0)),
};







int rdma_destroy_ah_user(struct ib_ah *ah, u32 flags, struct ib_udata *udata);
# 3660 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_destroy_ah(struct ib_ah *ah, u32 flags)
{
 int ret = rdma_destroy_ah_user(ah, flags, ((void *)0));

 ({ bool __ret_do_once = !!(ret); if (({ static bool __attribute__((__section__(".data.once"))) __already_done; bool __ret_cond = !!(__ret_do_once); bool __ret_once = false; if (__builtin_expect(!!(__ret_cond && !__already_done), 0)) { __already_done = true; __ret_once = true; } __builtin_expect(!!(__ret_once), 0); })) ({ int __ret_warn_on = !!(1); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/rdma/ib_verbs.h", 3664, 9, "Destroy of kernel AH shouldn't fail"); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); }); __builtin_expect(!!(__ret_do_once), 0); });
}

struct ib_srq *ib_create_srq_user(struct ib_pd *pd,
      struct ib_srq_init_attr *srq_init_attr,
      struct ib_usrq_object *uobject,
      struct ib_udata *udata);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_srq *
ib_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *srq_init_attr)
{
 if (!pd->device->ops.create_srq)
  return ERR_PTR(-EOPNOTSUPP);

 return ib_create_srq_user(pd, srq_init_attr, NULL, NULL);
}
# 3692 "../include/rdma/ib_verbs.h"
int ib_modify_srq(struct ib_srq *srq,
    struct ib_srq_attr *srq_attr,
    enum ib_srq_attr_mask srq_attr_mask);







int ib_query_srq(struct ib_srq *srq,
   struct ib_srq_attr *srq_attr);






int ib_destroy_srq_user(struct ib_srq *srq, struct ib_udata *udata);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_destroy_srq(struct ib_srq *srq)
{
 int ret = ib_destroy_srq_user(srq, ((void *)0));

 WARN_ONCE(ret, "Destroy of kernel SRQ shouldn't fail");
}
# 3732 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_post_srq_recv(struct ib_srq *srq,
       const struct ib_recv_wr *recv_wr,
       const struct ib_recv_wr **bad_recv_wr)
{
 const struct ib_recv_wr *dummy;

 return srq->device->ops.post_srq_recv(srq, recv_wr,
           bad_recv_wr ? : &dummy);
}

struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd,
      struct ib_qp_init_attr *qp_init_attr,
      const char *caller);
# 3753 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_qp *ib_create_qp(struct ib_pd *pd,
      struct ib_qp_init_attr *init_attr)
{
 return ib_create_qp_kernel(pd, init_attr, "ib_core");
}
# 3770 "../include/rdma/ib_verbs.h"
int ib_modify_qp_with_udata(struct ib_qp *qp,
       struct ib_qp_attr *attr,
       int attr_mask,
       struct ib_udata *udata);
# 3784 "../include/rdma/ib_verbs.h"
int ib_modify_qp(struct ib_qp *qp,
   struct ib_qp_attr *qp_attr,
   int qp_attr_mask);
# 3799 "../include/rdma/ib_verbs.h"
int ib_query_qp(struct ib_qp *qp,
  struct ib_qp_attr *qp_attr,
  int qp_attr_mask,
  struct ib_qp_init_attr *qp_init_attr);






int ib_destroy_qp_user(struct ib_qp *qp, struct ib_udata *udata);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_destroy_qp(struct ib_qp *qp)
{
 return ib_destroy_qp_user(qp, ((void *)0));
}
# 3829 "../include/rdma/ib_verbs.h"
struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
    struct ib_qp_open_attr *qp_open_attr);
# 3839 "../include/rdma/ib_verbs.h"
int ib_close_qp(struct ib_qp *qp);
# 3854 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_post_send(struct ib_qp *qp,
          const struct ib_send_wr *send_wr,
          const struct ib_send_wr **bad_send_wr)
{
 const struct ib_send_wr *dummy;

 return qp->device->ops.post_send(qp, send_wr, bad_send_wr ? : &dummy);
}
# 3871 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_post_recv(struct ib_qp *qp,
          const struct ib_recv_wr *recv_wr,
          const struct ib_recv_wr **bad_recv_wr)
{
 const struct ib_recv_wr *dummy;

 return qp->device->ops.post_recv(qp, recv_wr, bad_recv_wr ? : &dummy);
}

struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, int nr_cqe,
       int comp_vector, enum ib_poll_context poll_ctx,
       const char *caller);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_cq *ib_alloc_cq(struct ib_device *dev, void *private,
     int nr_cqe, int comp_vector,
     enum ib_poll_context poll_ctx)
{
 return __ib_alloc_cq(dev, private, nr_cqe, comp_vector, poll_ctx,
        "ib_core");
}

struct ib_cq *__ib_alloc_cq_any(struct ib_device *dev, void *private,
    int nr_cqe, enum ib_poll_context poll_ctx,
    const char *caller);
# 3902 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_cq *ib_alloc_cq_any(struct ib_device *dev,
         void *private, int nr_cqe,
         enum ib_poll_context poll_ctx)
{
 return __ib_alloc_cq_any(dev, private, nr_cqe, poll_ctx,
     "ib_core");
}

void ib_free_cq(struct ib_cq *cq);
int ib_process_cq_direct(struct ib_cq *cq, int budget);
# 3926 "../include/rdma/ib_verbs.h"
struct ib_cq *__ib_create_cq(struct ib_device *device,
        ib_comp_handler comp_handler,
        void (*event_handler)(struct ib_event *, void *),
        void *cq_context,
        const struct ib_cq_init_attr *cq_attr,
        const char *caller);
# 3942 "../include/rdma/ib_verbs.h"
int ib_resize_cq(struct ib_cq *cq, int cqe);
# 3951 "../include/rdma/ib_verbs.h"
int rdma_set_cq_moderation(struct ib_cq *cq, u16 cq_count, u16 cq_period);






int ib_destroy_cq_user(struct ib_cq *cq, struct ib_udata *udata);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_destroy_cq(struct ib_cq *cq)
{
 int ret = ib_destroy_cq_user(cq, ((void *)0));

 WARN_ONCE(ret, "Destroy of kernel CQ shouldn't fail");
}
# 3985 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_poll_cq(struct ib_cq *cq, int num_entries,
        struct ib_wc *wc)
{
 return cq->device->ops.poll_cq(cq, num_entries, wc);
}
# 4018 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_req_notify_cq(struct ib_cq *cq,
       enum ib_cq_notify_flags flags)
{
 return cq->device->ops.req_notify_cq(cq, flags);
}

struct ib_cq *ib_cq_pool_get(struct ib_device *dev, unsigned int nr_cqe,
        int comp_vector_hint,
        enum ib_poll_context poll_ctx);

void ib_cq_pool_put(struct ib_cq *cq, unsigned int nr_cqe);






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_uses_virt_dma(struct ib_device *dev)
{
 return IS_ENABLED(CONFIG_INFINIBAND_VIRT_DMA) && !dev->dma_device;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_dma_pci_p2p_dma_supported(struct ib_device *dev)
{
 if (ib_uses_virt_dma(dev))
  return false;

 return dma_pci_p2pdma_supported(dev->dma_device);
}
# 4058 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *ib_virt_dma_to_ptr(u64 dma_addr)
{

 return (void *)(uintptr_t)dma_addr;
}
# 4071 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct page *ib_virt_dma_to_page(u64 dma_addr)
{
 return virt_to_page(ib_virt_dma_to_ptr(dma_addr));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
{
 if (ib_uses_virt_dma(dev))
  return 0;
 return dma_mapping_error(dev->dma_device, dma_addr);
}
# 4095 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ib_dma_map_single(struct ib_device *dev,
        void *cpu_addr, size_t size,
        enum dma_data_direction direction)
{
 if (ib_uses_virt_dma(dev))
  return (uintptr_t)cpu_addr;
 return dma_map_single_attrs(dev->dma_device, cpu_addr, size, direction, 0);
}
# 4111 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_unmap_single(struct ib_device *dev,
           u64 addr, size_t size,
           enum dma_data_direction direction)
{
 if (!ib_uses_virt_dma(dev))
  dma_unmap_single_attrs(dev->dma_device, addr, size, direction, 0);
}
# 4127 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u64 ib_dma_map_page(struct ib_device *dev,
      struct page *page,
      unsigned long offset,
      size_t size,
      enum dma_data_direction direction)
{
 if (ib_uses_virt_dma(dev))
  return (uintptr_t)(lowmem_page_address(page) + offset);
 return dma_map_page_attrs(dev->dma_device, page, offset, size, direction, 0);
}
# 4145 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_unmap_page(struct ib_device *dev,
         u64 addr, size_t size,
         enum dma_data_direction direction)
{
 if (!ib_uses_virt_dma(dev))
  dma_unmap_page_attrs(dev->dma_device, addr, size, direction, 0);
}

int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents);
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_dma_map_sg_attrs(struct ib_device *dev,
          struct scatterlist *sg, int nents,
          enum dma_data_direction direction,
          unsigned long dma_attrs)
{
 if (ib_uses_virt_dma(dev))
  return ib_dma_virt_map_sg(dev, sg, nents);
 return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
    dma_attrs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_unmap_sg_attrs(struct ib_device *dev,
      struct scatterlist *sg, int nents,
      enum dma_data_direction direction,
      unsigned long dma_attrs)
{
 if (!ib_uses_virt_dma(dev))
  dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction,
       dma_attrs);
}
# 4182 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_dma_map_sgtable_attrs(struct ib_device *dev,
        struct sg_table *sgt,
        enum dma_data_direction direction,
        unsigned long dma_attrs)
{
 int nents;

 if (ib_uses_virt_dma(dev)) {
  nents = ib_dma_virt_map_sg(dev, sgt->sgl, sgt->orig_nents);
  if (!nents)
   return -EIO;
  sgt->nents = nents;
  return 0;
 }
 return dma_map_sgtable(dev->dma_device, sgt, direction, dma_attrs);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_unmap_sgtable_attrs(struct ib_device *dev,
           struct sg_table *sgt,
           enum dma_data_direction direction,
           unsigned long dma_attrs)
{
 if (!ib_uses_virt_dma(dev))
  dma_unmap_sgtable(dev->dma_device, sgt, direction, dma_attrs);
}
# 4215 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_dma_map_sg(struct ib_device *dev,
    struct scatterlist *sg, int nents,
    enum dma_data_direction direction)
{
 return ib_dma_map_sg_attrs(dev, sg, nents, direction, 0);
}
# 4229 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_unmap_sg(struct ib_device *dev,
       struct scatterlist *sg, int nents,
       enum dma_data_direction direction)
{
 ib_dma_unmap_sg_attrs(dev, sg, nents, direction, 0);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned int ib_dma_max_seg_size(struct ib_device *dev)
{
 if (ib_uses_virt_dma(dev))
  return (~0U);
 return dma_get_max_seg_size(dev->dma_device);
}
# 4256 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_sync_single_for_cpu(struct ib_device *dev,
           u64 addr,
           size_t size,
           enum dma_data_direction dir)
{
 if (!ib_uses_virt_dma(dev))
  dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
}
# 4272 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_dma_sync_single_for_device(struct ib_device *dev,
       u64 addr,
       size_t size,
       enum dma_data_direction dir)
{
 if (!ib_uses_virt_dma(dev))
  dma_sync_single_for_device(dev->dma_device, addr, size, dir);
}




struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
        u64 virt_addr, int mr_access_flags);


int ib_advise_mr(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice,
   u32 flags, struct ib_sge *sg_list, u32 num_sge);
# 4298 "../include/rdma/ib_verbs.h"
int ib_dereg_mr_user(struct ib_mr *mr, struct ib_udata *udata);
# 4309 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_dereg_mr(struct ib_mr *mr)
{
 return ib_dereg_mr_user(mr, ((void *)0));
}

struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
     u32 max_num_sg);

struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd,
        u32 max_num_data_sg,
        u32 max_num_meta_sg);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_update_fast_reg_key(struct ib_mr *mr, u8 newkey)
{
 mr->lkey = (mr->lkey & 0xffffff00) | newkey;
 mr->rkey = (mr->rkey & 0xffffff00) | newkey;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ib_inc_rkey(u32 rkey)
{
 const u32 mask = 0x000000ff;
 return ((rkey + 1) & mask) | (rkey & ~mask);
}
# 4356 "../include/rdma/ib_verbs.h"
int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);







int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);

struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
       struct inode *inode, struct ib_udata *udata);
int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_check_mr_access(struct ib_device *ib_dev,
         unsigned int flags)
{
 u64 device_cap = ib_dev->attrs.device_cap_flags;





 if (flags & (IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_REMOTE_WRITE) &&
     !(flags & IB_ACCESS_LOCAL_WRITE))
  return -EINVAL;

 if (flags & ~IB_ACCESS_SUPPORTED)
  return -EINVAL;

 if (flags & IB_ACCESS_ON_DEMAND &&
     !(ib_dev->attrs.kernel_cap_flags & IBK_ON_DEMAND_PAGING))
  return -EOPNOTSUPP;

 if ((flags & IB_ACCESS_FLUSH_GLOBAL &&
     !(device_cap & IB_DEVICE_FLUSH_GLOBAL)) ||
     (flags & IB_ACCESS_FLUSH_PERSISTENT &&
     !(device_cap & IB_DEVICE_FLUSH_PERSISTENT)))
  return -EOPNOTSUPP;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_access_writable(int access_flags)
{







 return access_flags &
  (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
   IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND);
}
# 4425 "../include/rdma/ib_verbs.h"
int ib_check_mr_status(struct ib_mr *mr, u32 check_mask,
         struct ib_mr_status *mr_status);
# 4441 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool ib_device_try_get(struct ib_device *dev)
{
 return refcount_inc_not_zero(&dev->refcount);
}

void ib_device_put(struct ib_device *device);
struct ib_device *ib_device_get_by_netdev(struct net_device *ndev,
       enum rdma_driver_id driver_id);
struct ib_device *ib_device_get_by_name(const char *name,
     enum rdma_driver_id driver_id);
struct net_device *ib_get_net_dev_by_params(struct ib_device *dev, u32 port,
         u16 pkey, const union ib_gid *gid,
         const struct sockaddr *addr);
int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
    unsigned int port);
struct ib_wq *ib_create_wq(struct ib_pd *pd,
      struct ib_wq_init_attr *init_attr);
int ib_destroy_wq_user(struct ib_wq *wq, struct ib_udata *udata);

int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
   unsigned int *sg_offset, unsigned int page_size);
int ib_map_mr_sg_pi(struct ib_mr *mr, struct scatterlist *data_sg,
      int data_sg_nents, unsigned int *data_sg_offset,
      struct scatterlist *meta_sg, int meta_sg_nents,
      unsigned int *meta_sg_offset, unsigned int page_size);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
ib_map_mr_sg_zbva(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
    unsigned int *sg_offset, unsigned int page_size)
{
 int n;

 n = ib_map_mr_sg(mr, sg, sg_nents, sg_offset, page_size);
 mr->iova = 0;

 return n;
}

int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
  unsigned int *sg_offset, int (*set_page)(struct ib_mr *, u64));

void ib_drain_rq(struct ib_qp *qp);
void ib_drain_sq(struct ib_qp *qp);
void ib_drain_qp(struct ib_qp *qp);

int ib_get_eth_speed(struct ib_device *dev, u32 port_num, u16 *speed,
       u8 *width);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 *rdma_ah_retrieve_dmac(struct rdma_ah_attr *attr)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_ROCE)
  return attr->roce.dmac;
 return ((void *)0);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_dlid(struct rdma_ah_attr *attr, u32 dlid)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_IB)
  attr->ib.dlid = (u16)dlid;
 else if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  attr->opa.dlid = dlid;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 rdma_ah_get_dlid(const struct rdma_ah_attr *attr)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_IB)
  return attr->ib.dlid;
 else if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  return attr->opa.dlid;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_sl(struct rdma_ah_attr *attr, u8 sl)
{
 attr->sl = sl;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 rdma_ah_get_sl(const struct rdma_ah_attr *attr)
{
 return attr->sl;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_path_bits(struct rdma_ah_attr *attr,
      u8 src_path_bits)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_IB)
  attr->ib.src_path_bits = src_path_bits;
 else if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  attr->opa.src_path_bits = src_path_bits;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 rdma_ah_get_path_bits(const struct rdma_ah_attr *attr)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_IB)
  return attr->ib.src_path_bits;
 else if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  return attr->opa.src_path_bits;
 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_make_grd(struct rdma_ah_attr *attr,
     bool make_grd)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  attr->opa.make_grd = make_grd;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool rdma_ah_get_make_grd(const struct rdma_ah_attr *attr)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_OPA)
  return attr->opa.make_grd;
 return false;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_port_num(struct rdma_ah_attr *attr, u32 port_num)
{
 attr->port_num = port_num;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 rdma_ah_get_port_num(const struct rdma_ah_attr *attr)
{
 return attr->port_num;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_static_rate(struct rdma_ah_attr *attr,
        u8 static_rate)
{
 attr->static_rate = static_rate;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 rdma_ah_get_static_rate(const struct rdma_ah_attr *attr)
{
 return attr->static_rate;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_ah_flags(struct rdma_ah_attr *attr,
     enum ib_ah_flags flag)
{
 attr->ah_flags = flag;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum ib_ah_flags
  rdma_ah_get_ah_flags(const struct rdma_ah_attr *attr)
{
 return attr->ah_flags;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct ib_global_route
  *rdma_ah_read_grh(const struct rdma_ah_attr *attr)
{
 return &attr->grh;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_global_route
  *rdma_ah_retrieve_grh(struct rdma_ah_attr *attr)
{
 return &attr->grh;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_dgid_raw(struct rdma_ah_attr *attr, void *dgid)
{
 struct ib_global_route *grh = rdma_ah_retrieve_grh(attr);

 memcpy(grh->dgid.raw, dgid, sizeof(grh->dgid));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_subnet_prefix(struct rdma_ah_attr *attr,
          __be64 prefix)
{
 struct ib_global_route *grh = rdma_ah_retrieve_grh(attr);

 grh->dgid.global.subnet_prefix = prefix;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_interface_id(struct rdma_ah_attr *attr,
         __be64 if_id)
{
 struct ib_global_route *grh = rdma_ah_retrieve_grh(attr);

 grh->dgid.global.interface_id = if_id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void rdma_ah_set_grh(struct rdma_ah_attr *attr,
       union ib_gid *dgid, u32 flow_label,
       u8 sgid_index, u8 hop_limit,
       u8 traffic_class)
{
 struct ib_global_route *grh = rdma_ah_retrieve_grh(attr);

 attr->ah_flags = IB_AH_GRH;
 if (dgid)
  grh->dgid = *dgid;
 grh->flow_label = flow_label;
 grh->sgid_index = sgid_index;
 grh->hop_limit = hop_limit;
 grh->traffic_class = traffic_class;
 grh->sgid_attr = ((void *)0);
}

void rdma_destroy_ah_attr(struct rdma_ah_attr *ah_attr);
void rdma_move_grh_sgid_attr(struct rdma_ah_attr *attr, union ib_gid *dgid,
        u32 flow_label, u8 hop_limit, u8 traffic_class,
        const struct ib_gid_attr *sgid_attr);
void rdma_copy_ah_attr(struct rdma_ah_attr *dest,
         const struct rdma_ah_attr *src);
void rdma_replace_ah_attr(struct rdma_ah_attr *old,
     const struct rdma_ah_attr *new);
void rdma_move_ah_attr(struct rdma_ah_attr *dest, struct rdma_ah_attr *src);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) enum rdma_ah_attr_type rdma_ah_find_type(struct ib_device *dev,
             u32 port_num)
{
 if (rdma_protocol_roce(dev, port_num))
  return RDMA_AH_ATTR_TYPE_ROCE;
 if (rdma_protocol_ib(dev, port_num)) {
  if (rdma_cap_opa_ah(dev, port_num))
   return RDMA_AH_ATTR_TYPE_OPA;
  return RDMA_AH_ATTR_TYPE_IB;
 }
 if (dev->type == RDMA_DEVICE_TYPE_SMI)
  return RDMA_AH_ATTR_TYPE_IB;

 return RDMA_AH_ATTR_TYPE_UNDEFINED;
}
# 4682 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 ib_lid_cpu16(u32 lid)
{
 WARN_ON_ONCE(lid & 0xFFFF0000);
 return (u16)lid;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __be16 ib_lid_be16(u32 lid)
{
 WARN_ON_ONCE(lid & 0xFFFF0000);
 return cpu_to_be16((u16)lid);
}
# 4709 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct cpumask *
ib_get_vector_affinity(struct ib_device *device, int comp_vector)
{
 if (comp_vector < 0 || comp_vector >= device->num_comp_vectors ||
     !device->ops.get_vector_affinity)
  return ((void *)0);

 return device->ops.get_vector_affinity(device, comp_vector);

}







void rdma_roce_rescan_device(struct ib_device *ibdev);

struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile);

int uverbs_destroy_def_handler(struct uverbs_attr_bundle *attrs);

struct net_device *rdma_alloc_netdev(struct ib_device *device, u32 port_num,
         enum rdma_netdev_t type, const char *name,
         unsigned char name_assign_type,
         void (*setup)(struct net_device *));

int rdma_init_netdev(struct ib_device *device, u32 port_num,
       enum rdma_netdev_t type, const char *name,
       unsigned char name_assign_type,
       void (*setup)(struct net_device *),
       struct net_device *netdev);
# 4751 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_device *rdma_device_to_ibdev(struct device *device)
{
 struct ib_core_device *coredev =
  container_of(device, struct ib_core_device, dev);

 return coredev->owner;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ibdev_to_node(struct ib_device *ibdev)
{
 struct device *parent = ibdev->dev.parent;

 if (!parent)
  return NUMA_NO_NODE;
 return dev_to_node(parent);
}
# 4783 "../include/rdma/ib_verbs.h"
bool rdma_dev_access_netns(const struct ib_device *device,
      const struct net *net);
# 4798 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 rdma_flow_label_to_udp_sport(u32 fl)
{
 u32 fl_low = fl & 0x03fff, fl_high = fl & 0xFC000;

 fl_low ^= fl_high >> 14;
 return (u16)(fl_low | (0xC000));
}
# 4821 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 rdma_calc_flow_label(u32 lqpn, u32 rqpn)
{
 u64 v = (u64)lqpn * rqpn;

 v ^= v >> 20;
 v ^= v >> 40;

 return (u32)(v & (0x000FFFFF));
}
# 4840 "../include/rdma/ib_verbs.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u16 rdma_get_udp_sport(u32 fl, u32 lqpn, u32 rqpn)
{
 if (!fl)
  fl = rdma_calc_flow_label(lqpn, rqpn);

 return rdma_flow_label_to_udp_sport(fl);
}

const struct ib_port_immutable*
ib_port_immutable_read(struct ib_device *dev, unsigned int port);
# 4860 "../include/rdma/ib_verbs.h"
int ib_add_sub_device(struct ib_device *parent,
        enum rdma_nl_dev_type type,
        const char *name);
# 4871 "../include/rdma/ib_verbs.h"
int ib_del_sub_device_and_put(struct ib_device *sub);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_mark_name_assigned_by_user(struct ib_device *ibdev)
{
 ibdev->name_assign_type = RDMA_NAME_ASSIGN_TYPE_USER;
}
# 47 "../drivers/infiniband/core/uverbs.h" 2
# 1 "../include/rdma/ib_umem.h" 1
# 15 "../include/rdma/ib_umem.h"
struct ib_ucontext;
struct ib_umem_odp;
struct dma_buf_attach_ops;

struct ib_umem {
 struct ib_device *ibdev;
 struct mm_struct *owning_mm;
 u64 iova;
 size_t length;
 unsigned long address;
 u32 writable : 1;
 u32 is_odp : 1;
 u32 is_dmabuf : 1;
 struct sg_append_table sgt_append;
};

struct ib_umem_dmabuf {
 struct ib_umem umem;
 struct dma_buf_attachment *attach;
 struct sg_table *sgt;
 struct scatterlist *first_sg;
 struct scatterlist *last_sg;
 unsigned long first_sg_offset;
 unsigned long last_sg_trim;
 void *private;
 u8 pinned : 1;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_umem_dmabuf *to_ib_umem_dmabuf(struct ib_umem *umem)
{
 return container_of(umem, struct ib_umem_dmabuf, umem);
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_umem_offset(struct ib_umem *umem)
{
 return umem->address & ~(~((1 << 14) - 1));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long ib_umem_dma_offset(struct ib_umem *umem,
            unsigned long pgsz)
{
 return (((umem->sgt_append.sgt.sgl)->dma_address) + ib_umem_offset(umem)) &
        (pgsz - 1);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t ib_umem_num_dma_blocks(struct ib_umem *umem,
         unsigned long pgsz)
{
 return (size_t)((((((umem->iova + umem->length)) + ((__typeof__((umem->iova + umem->length)))((pgsz)) - 1)) & ~((__typeof__((umem->iova + umem->length)))((pgsz)) - 1)) -
    ((((umem->iova) - ((pgsz) - 1)) + ((__typeof__((umem->iova) - ((pgsz) - 1)))((pgsz)) - 1)) & ~((__typeof__((umem->iova) - ((pgsz) - 1)))((pgsz)) - 1)))) /
        pgsz;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) size_t ib_umem_num_pages(struct ib_umem *umem)
{
 return ib_umem_num_dma_blocks(umem, (1UL << 14));
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
      struct ib_umem *umem,
      unsigned long pgsz)
{
 __rdma_block_iter_start(biter, umem->sgt_append.sgt.sgl,
    umem->sgt_append.sgt.nents, pgsz);
 biter->__sg_advance = ib_umem_offset(umem) & ~(pgsz - 1);
 biter->__sg_numblocks = ib_umem_num_dma_blocks(umem, pgsz);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool __rdma_umem_block_iter_next(struct ib_block_iter *biter)
{
 return __rdma_block_iter_next(biter) && biter->__sg_numblocks--;
}
# 161 "../include/rdma/ib_umem.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_umem *ib_umem_get(struct ib_device *device,
       unsigned long addr, size_t size,
       int access)
{
 return ERR_PTR(-95);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_umem_release(struct ib_umem *umem) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
              size_t length) {
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
         unsigned long pgsz_bitmap,
         unsigned long virt)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) unsigned long ib_umem_find_best_pgoff(struct ib_umem *umem,
          unsigned long pgsz_bitmap,
          u64 pgoff_bitmask)
{
 return 0;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__))
struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
       unsigned long offset,
       size_t size, int fd,
       int access,
       struct dma_buf_attach_ops *ops)
{
 return ERR_PTR(-95);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_umem_dmabuf *
ib_umem_dmabuf_get_pinned(struct ib_device *device, unsigned long offset,
     size_t size, int fd, int access)
{
 return ERR_PTR(-95);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
{
 return -95;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf) { }
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf) { }
# 48 "../drivers/infiniband/core/uverbs.h" 2

# 1 "../include/rdma/uverbs_std_types.h" 1








# 1 "../include/rdma/uverbs_types.h" 1
# 12 "../include/rdma/uverbs_types.h"
struct uverbs_obj_type;
struct uverbs_api_object;

enum rdma_lookup_mode {
 UVERBS_LOOKUP_READ,
 UVERBS_LOOKUP_WRITE,





 UVERBS_LOOKUP_DESTROY,
};
# 57 "../include/rdma/uverbs_types.h"
struct uverbs_obj_type_class {
 struct ib_uobject *(*alloc_begin)(const struct uverbs_api_object *obj,
       struct uverbs_attr_bundle *attrs);

 void (*alloc_commit)(struct ib_uobject *uobj);

 void (*alloc_abort)(struct ib_uobject *uobj);

 struct ib_uobject *(*lookup_get)(const struct uverbs_api_object *obj,
      struct ib_uverbs_file *ufile, s64 id,
      enum rdma_lookup_mode mode);
 void (*lookup_put)(struct ib_uobject *uobj, enum rdma_lookup_mode mode);

 int __attribute__((__warn_unused_result__)) (*destroy_hw)(struct ib_uobject *uobj,
           enum rdma_remove_reason why,
           struct uverbs_attr_bundle *attrs);
 void (*remove_handle)(struct ib_uobject *uobj);
 void (*swap_uobjects)(struct ib_uobject *obj_old,
         struct ib_uobject *obj_new);
};

struct uverbs_obj_type {
 const struct uverbs_obj_type_class * const type_class;
 size_t obj_size;
};
# 90 "../include/rdma/uverbs_types.h"
struct uverbs_obj_idr_type {





 struct uverbs_obj_type type;






 int __attribute__((__warn_unused_result__)) (*destroy_object)(struct ib_uobject *uobj,
        enum rdma_remove_reason why,
        struct uverbs_attr_bundle *attrs);
};

struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_api_object *obj,
        struct ib_uverbs_file *ufile, s64 id,
        enum rdma_lookup_mode mode,
        struct uverbs_attr_bundle *attrs);
void rdma_lookup_put_uobject(struct ib_uobject *uobj,
        enum rdma_lookup_mode mode);
struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_api_object *obj,
         struct uverbs_attr_bundle *attrs);
void rdma_alloc_abort_uobject(struct ib_uobject *uobj,
         struct uverbs_attr_bundle *attrs,
         bool hw_obj_valid);
void rdma_alloc_commit_uobject(struct ib_uobject *uobj,
          struct uverbs_attr_bundle *attrs);
void rdma_assign_uobject(struct ib_uobject *to_uobj,
    struct ib_uobject *new_uobj,
    struct uverbs_attr_bundle *attrs);







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uverbs_uobject_get(struct ib_uobject *uobject)
{
 kref_get(&uobject->ref);
}
void uverbs_uobject_put(struct ib_uobject *uobject);

struct uverbs_obj_fd_type {







 struct uverbs_obj_type type;
 void (*destroy_object)(struct ib_uobject *uobj,
          enum rdma_remove_reason why);
 const struct file_operations *fops;
 const char *name;
 int flags;
};

extern const struct uverbs_obj_type_class uverbs_idr_class;
extern const struct uverbs_obj_type_class uverbs_fd_class;
int uverbs_uobject_fd_release(struct inode *inode, struct file *filp);
# 10 "../include/rdma/uverbs_std_types.h" 2
# 1 "../include/rdma/uverbs_ioctl.h" 1
# 13 "../include/rdma/uverbs_ioctl.h"
# 1 "../include/uapi/rdma/ib_user_ioctl_cmds.h" 1
# 40 "../include/uapi/rdma/ib_user_ioctl_cmds.h"
enum uverbs_default_objects {
 UVERBS_OBJECT_DEVICE,
 UVERBS_OBJECT_PD,
 UVERBS_OBJECT_COMP_CHANNEL,
 UVERBS_OBJECT_CQ,
 UVERBS_OBJECT_QP,
 UVERBS_OBJECT_SRQ,
 UVERBS_OBJECT_AH,
 UVERBS_OBJECT_MR,
 UVERBS_OBJECT_MW,
 UVERBS_OBJECT_FLOW,
 UVERBS_OBJECT_XRCD,
 UVERBS_OBJECT_RWQ_IND_TBL,
 UVERBS_OBJECT_WQ,
 UVERBS_OBJECT_FLOW_ACTION,
 UVERBS_OBJECT_DM,
 UVERBS_OBJECT_COUNTERS,
 UVERBS_OBJECT_ASYNC_EVENT,
};

enum {
 UVERBS_ID_DRIVER_NS = 1UL << 12,
 UVERBS_ATTR_UHW_IN = UVERBS_ID_DRIVER_NS,
 UVERBS_ATTR_UHW_OUT,
 UVERBS_ID_DRIVER_NS_WITH_UHW,
};

enum uverbs_methods_device {
 UVERBS_METHOD_INVOKE_WRITE,
 UVERBS_METHOD_INFO_HANDLES,
 UVERBS_METHOD_QUERY_PORT,
 UVERBS_METHOD_GET_CONTEXT,
 UVERBS_METHOD_QUERY_CONTEXT,
 UVERBS_METHOD_QUERY_GID_TABLE,
 UVERBS_METHOD_QUERY_GID_ENTRY,
};

enum uverbs_attrs_invoke_write_cmd_attr_ids {
 UVERBS_ATTR_CORE_IN,
 UVERBS_ATTR_CORE_OUT,
 UVERBS_ATTR_WRITE_CMD,
};

enum uverbs_attrs_query_port_cmd_attr_ids {
 UVERBS_ATTR_QUERY_PORT_PORT_NUM,
 UVERBS_ATTR_QUERY_PORT_RESP,
};

enum uverbs_attrs_get_context_attr_ids {
 UVERBS_ATTR_GET_CONTEXT_NUM_COMP_VECTORS,
 UVERBS_ATTR_GET_CONTEXT_CORE_SUPPORT,
};

enum uverbs_attrs_query_context_attr_ids {
 UVERBS_ATTR_QUERY_CONTEXT_NUM_COMP_VECTORS,
 UVERBS_ATTR_QUERY_CONTEXT_CORE_SUPPORT,
};

enum uverbs_attrs_create_cq_cmd_attr_ids {
 UVERBS_ATTR_CREATE_CQ_HANDLE,
 UVERBS_ATTR_CREATE_CQ_CQE,
 UVERBS_ATTR_CREATE_CQ_USER_HANDLE,
 UVERBS_ATTR_CREATE_CQ_COMP_CHANNEL,
 UVERBS_ATTR_CREATE_CQ_COMP_VECTOR,
 UVERBS_ATTR_CREATE_CQ_FLAGS,
 UVERBS_ATTR_CREATE_CQ_RESP_CQE,
 UVERBS_ATTR_CREATE_CQ_EVENT_FD,
};

enum uverbs_attrs_destroy_cq_cmd_attr_ids {
 UVERBS_ATTR_DESTROY_CQ_HANDLE,
 UVERBS_ATTR_DESTROY_CQ_RESP,
};

enum uverbs_attrs_create_flow_action_esp {
 UVERBS_ATTR_CREATE_FLOW_ACTION_ESP_HANDLE,
 UVERBS_ATTR_FLOW_ACTION_ESP_ATTRS,
 UVERBS_ATTR_FLOW_ACTION_ESP_ESN,
 UVERBS_ATTR_FLOW_ACTION_ESP_KEYMAT,
 UVERBS_ATTR_FLOW_ACTION_ESP_REPLAY,
 UVERBS_ATTR_FLOW_ACTION_ESP_ENCAP,
};

enum uverbs_attrs_modify_flow_action_esp {
 UVERBS_ATTR_MODIFY_FLOW_ACTION_ESP_HANDLE =
  UVERBS_ATTR_CREATE_FLOW_ACTION_ESP_HANDLE,
};

enum uverbs_attrs_destroy_flow_action_esp {
 UVERBS_ATTR_DESTROY_FLOW_ACTION_HANDLE,
};

enum uverbs_attrs_create_qp_cmd_attr_ids {
 UVERBS_ATTR_CREATE_QP_HANDLE,
 UVERBS_ATTR_CREATE_QP_XRCD_HANDLE,
 UVERBS_ATTR_CREATE_QP_PD_HANDLE,
 UVERBS_ATTR_CREATE_QP_SRQ_HANDLE,
 UVERBS_ATTR_CREATE_QP_SEND_CQ_HANDLE,
 UVERBS_ATTR_CREATE_QP_RECV_CQ_HANDLE,
 UVERBS_ATTR_CREATE_QP_IND_TABLE_HANDLE,
 UVERBS_ATTR_CREATE_QP_USER_HANDLE,
 UVERBS_ATTR_CREATE_QP_CAP,
 UVERBS_ATTR_CREATE_QP_TYPE,
 UVERBS_ATTR_CREATE_QP_FLAGS,
 UVERBS_ATTR_CREATE_QP_SOURCE_QPN,
 UVERBS_ATTR_CREATE_QP_EVENT_FD,
 UVERBS_ATTR_CREATE_QP_RESP_CAP,
 UVERBS_ATTR_CREATE_QP_RESP_QP_NUM,
};

enum uverbs_attrs_destroy_qp_cmd_attr_ids {
 UVERBS_ATTR_DESTROY_QP_HANDLE,
 UVERBS_ATTR_DESTROY_QP_RESP,
};

enum uverbs_methods_qp {
 UVERBS_METHOD_QP_CREATE,
 UVERBS_METHOD_QP_DESTROY,
};

enum uverbs_attrs_create_srq_cmd_attr_ids {
 UVERBS_ATTR_CREATE_SRQ_HANDLE,
 UVERBS_ATTR_CREATE_SRQ_PD_HANDLE,
 UVERBS_ATTR_CREATE_SRQ_XRCD_HANDLE,
 UVERBS_ATTR_CREATE_SRQ_CQ_HANDLE,
 UVERBS_ATTR_CREATE_SRQ_USER_HANDLE,
 UVERBS_ATTR_CREATE_SRQ_MAX_WR,
 UVERBS_ATTR_CREATE_SRQ_MAX_SGE,
 UVERBS_ATTR_CREATE_SRQ_LIMIT,
 UVERBS_ATTR_CREATE_SRQ_MAX_NUM_TAGS,
 UVERBS_ATTR_CREATE_SRQ_TYPE,
 UVERBS_ATTR_CREATE_SRQ_EVENT_FD,
 UVERBS_ATTR_CREATE_SRQ_RESP_MAX_WR,
 UVERBS_ATTR_CREATE_SRQ_RESP_MAX_SGE,
 UVERBS_ATTR_CREATE_SRQ_RESP_SRQ_NUM,
};

enum uverbs_attrs_destroy_srq_cmd_attr_ids {
 UVERBS_ATTR_DESTROY_SRQ_HANDLE,
 UVERBS_ATTR_DESTROY_SRQ_RESP,
};

enum uverbs_methods_srq {
 UVERBS_METHOD_SRQ_CREATE,
 UVERBS_METHOD_SRQ_DESTROY,
};

enum uverbs_methods_cq {
 UVERBS_METHOD_CQ_CREATE,
 UVERBS_METHOD_CQ_DESTROY,
};

enum uverbs_attrs_create_wq_cmd_attr_ids {
 UVERBS_ATTR_CREATE_WQ_HANDLE,
 UVERBS_ATTR_CREATE_WQ_PD_HANDLE,
 UVERBS_ATTR_CREATE_WQ_CQ_HANDLE,
 UVERBS_ATTR_CREATE_WQ_USER_HANDLE,
 UVERBS_ATTR_CREATE_WQ_TYPE,
 UVERBS_ATTR_CREATE_WQ_EVENT_FD,
 UVERBS_ATTR_CREATE_WQ_MAX_WR,
 UVERBS_ATTR_CREATE_WQ_MAX_SGE,
 UVERBS_ATTR_CREATE_WQ_FLAGS,
 UVERBS_ATTR_CREATE_WQ_RESP_MAX_WR,
 UVERBS_ATTR_CREATE_WQ_RESP_MAX_SGE,
 UVERBS_ATTR_CREATE_WQ_RESP_WQ_NUM,
};

enum uverbs_attrs_destroy_wq_cmd_attr_ids {
 UVERBS_ATTR_DESTROY_WQ_HANDLE,
 UVERBS_ATTR_DESTROY_WQ_RESP,
};

enum uverbs_methods_wq {
 UVERBS_METHOD_WQ_CREATE,
 UVERBS_METHOD_WQ_DESTROY,
};

enum uverbs_methods_actions_flow_action_ops {
 UVERBS_METHOD_FLOW_ACTION_ESP_CREATE,
 UVERBS_METHOD_FLOW_ACTION_DESTROY,
 UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY,
};

enum uverbs_attrs_alloc_dm_cmd_attr_ids {
 UVERBS_ATTR_ALLOC_DM_HANDLE,
 UVERBS_ATTR_ALLOC_DM_LENGTH,
 UVERBS_ATTR_ALLOC_DM_ALIGNMENT,
};

enum uverbs_attrs_free_dm_cmd_attr_ids {
 UVERBS_ATTR_FREE_DM_HANDLE,
};

enum uverbs_methods_dm {
 UVERBS_METHOD_DM_ALLOC,
 UVERBS_METHOD_DM_FREE,
};

enum uverbs_attrs_reg_dm_mr_cmd_attr_ids {
 UVERBS_ATTR_REG_DM_MR_HANDLE,
 UVERBS_ATTR_REG_DM_MR_OFFSET,
 UVERBS_ATTR_REG_DM_MR_LENGTH,
 UVERBS_ATTR_REG_DM_MR_PD_HANDLE,
 UVERBS_ATTR_REG_DM_MR_ACCESS_FLAGS,
 UVERBS_ATTR_REG_DM_MR_DM_HANDLE,
 UVERBS_ATTR_REG_DM_MR_RESP_LKEY,
 UVERBS_ATTR_REG_DM_MR_RESP_RKEY,
};

enum uverbs_methods_mr {
 UVERBS_METHOD_DM_MR_REG,
 UVERBS_METHOD_MR_DESTROY,
 UVERBS_METHOD_ADVISE_MR,
 UVERBS_METHOD_QUERY_MR,
 UVERBS_METHOD_REG_DMABUF_MR,
};

enum uverbs_attrs_mr_destroy_ids {
 UVERBS_ATTR_DESTROY_MR_HANDLE,
};

enum uverbs_attrs_advise_mr_cmd_attr_ids {
 UVERBS_ATTR_ADVISE_MR_PD_HANDLE,
 UVERBS_ATTR_ADVISE_MR_ADVICE,
 UVERBS_ATTR_ADVISE_MR_FLAGS,
 UVERBS_ATTR_ADVISE_MR_SGE_LIST,
};

enum uverbs_attrs_query_mr_cmd_attr_ids {
 UVERBS_ATTR_QUERY_MR_HANDLE,
 UVERBS_ATTR_QUERY_MR_RESP_LKEY,
 UVERBS_ATTR_QUERY_MR_RESP_RKEY,
 UVERBS_ATTR_QUERY_MR_RESP_LENGTH,
 UVERBS_ATTR_QUERY_MR_RESP_IOVA,
};

enum uverbs_attrs_reg_dmabuf_mr_cmd_attr_ids {
 UVERBS_ATTR_REG_DMABUF_MR_HANDLE,
 UVERBS_ATTR_REG_DMABUF_MR_PD_HANDLE,
 UVERBS_ATTR_REG_DMABUF_MR_OFFSET,
 UVERBS_ATTR_REG_DMABUF_MR_LENGTH,
 UVERBS_ATTR_REG_DMABUF_MR_IOVA,
 UVERBS_ATTR_REG_DMABUF_MR_FD,
 UVERBS_ATTR_REG_DMABUF_MR_ACCESS_FLAGS,
 UVERBS_ATTR_REG_DMABUF_MR_RESP_LKEY,
 UVERBS_ATTR_REG_DMABUF_MR_RESP_RKEY,
};

enum uverbs_attrs_create_counters_cmd_attr_ids {
 UVERBS_ATTR_CREATE_COUNTERS_HANDLE,
};

enum uverbs_attrs_destroy_counters_cmd_attr_ids {
 UVERBS_ATTR_DESTROY_COUNTERS_HANDLE,
};

enum uverbs_attrs_read_counters_cmd_attr_ids {
 UVERBS_ATTR_READ_COUNTERS_HANDLE,
 UVERBS_ATTR_READ_COUNTERS_BUFF,
 UVERBS_ATTR_READ_COUNTERS_FLAGS,
};

enum uverbs_methods_actions_counters_ops {
 UVERBS_METHOD_COUNTERS_CREATE,
 UVERBS_METHOD_COUNTERS_DESTROY,
 UVERBS_METHOD_COUNTERS_READ,
};

enum uverbs_attrs_info_handles_id {
 UVERBS_ATTR_INFO_OBJECT_ID,
 UVERBS_ATTR_INFO_TOTAL_HANDLES,
 UVERBS_ATTR_INFO_HANDLES_LIST,
};

enum uverbs_methods_pd {
 UVERBS_METHOD_PD_DESTROY,
};

enum uverbs_attrs_pd_destroy_ids {
 UVERBS_ATTR_DESTROY_PD_HANDLE,
};

enum uverbs_methods_mw {
 UVERBS_METHOD_MW_DESTROY,
};

enum uverbs_attrs_mw_destroy_ids {
 UVERBS_ATTR_DESTROY_MW_HANDLE,
};

enum uverbs_methods_xrcd {
 UVERBS_METHOD_XRCD_DESTROY,
};

enum uverbs_attrs_xrcd_destroy_ids {
 UVERBS_ATTR_DESTROY_XRCD_HANDLE,
};

enum uverbs_methods_ah {
 UVERBS_METHOD_AH_DESTROY,
};

enum uverbs_attrs_ah_destroy_ids {
 UVERBS_ATTR_DESTROY_AH_HANDLE,
};

enum uverbs_methods_rwq_ind_tbl {
 UVERBS_METHOD_RWQ_IND_TBL_DESTROY,
};

enum uverbs_attrs_rwq_ind_tbl_destroy_ids {
 UVERBS_ATTR_DESTROY_RWQ_IND_TBL_HANDLE,
};

enum uverbs_methods_flow {
 UVERBS_METHOD_FLOW_DESTROY,
};

enum uverbs_attrs_flow_destroy_ids {
 UVERBS_ATTR_DESTROY_FLOW_HANDLE,
};

enum uverbs_method_async_event {
 UVERBS_METHOD_ASYNC_EVENT_ALLOC,
};

enum uverbs_attrs_async_event_create {
 UVERBS_ATTR_ASYNC_EVENT_ALLOC_FD_HANDLE,
};

enum uverbs_attrs_query_gid_table_cmd_attr_ids {
 UVERBS_ATTR_QUERY_GID_TABLE_ENTRY_SIZE,
 UVERBS_ATTR_QUERY_GID_TABLE_FLAGS,
 UVERBS_ATTR_QUERY_GID_TABLE_RESP_ENTRIES,
 UVERBS_ATTR_QUERY_GID_TABLE_RESP_NUM_ENTRIES,
};

enum uverbs_attrs_query_gid_entry_cmd_attr_ids {
 UVERBS_ATTR_QUERY_GID_ENTRY_PORT,
 UVERBS_ATTR_QUERY_GID_ENTRY_GID_INDEX,
 UVERBS_ATTR_QUERY_GID_ENTRY_FLAGS,
 UVERBS_ATTR_QUERY_GID_ENTRY_RESP_ENTRY,
};
# 14 "../include/rdma/uverbs_ioctl.h" 2







enum uverbs_attr_type {
 UVERBS_ATTR_TYPE_NA,
 UVERBS_ATTR_TYPE_PTR_IN,
 UVERBS_ATTR_TYPE_PTR_OUT,
 UVERBS_ATTR_TYPE_IDR,
 UVERBS_ATTR_TYPE_FD,
 UVERBS_ATTR_TYPE_RAW_FD,
 UVERBS_ATTR_TYPE_ENUM_IN,
 UVERBS_ATTR_TYPE_IDRS_ARRAY,
};

enum uverbs_obj_access {
 UVERBS_ACCESS_READ,
 UVERBS_ACCESS_WRITE,
 UVERBS_ACCESS_NEW,
 UVERBS_ACCESS_DESTROY
};



struct uverbs_attr_spec {
 u8 type;






 u8 zero_trailing:1;




 u8 alloc_and_copy:1;
 u8 mandatory:1;

 u8 is_udata:1;

 union {
  struct {

   u16 len;

   u16 min_len;
  } ptr;

  struct {




   u16 obj_type;
   u8 access;
  } obj;

  struct {
   u8 num_elems;
  } enum_def;
 } u;


 union {
  struct {





   const struct uverbs_attr_spec *ids;
  } enum_def;

  struct {




   u16 obj_type;
   u16 min_len;
   u16 max_len;
   u8 access;
  } objs_arr;
 } u2;
};
# 127 "../include/rdma/uverbs_ioctl.h"
enum uapi_radix_data {
 UVERBS_API_NS_FLAG = 1U << 12,

 UVERBS_API_ATTR_KEY_BITS = 6,
 UVERBS_API_ATTR_KEY_MASK = ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((0) > (UVERBS_API_ATTR_KEY_BITS - 1)) * 0l)) : (int *)8))), (0) > (UVERBS_API_ATTR_KEY_BITS - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (0)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (UVERBS_API_ATTR_KEY_BITS - 1))))),
 UVERBS_API_ATTR_BKEY_LEN = (1 << UVERBS_API_ATTR_KEY_BITS) - 1,
 UVERBS_API_WRITE_KEY_NUM = 1 << UVERBS_API_ATTR_KEY_BITS,

 UVERBS_API_METHOD_KEY_BITS = 5,
 UVERBS_API_METHOD_KEY_SHIFT = UVERBS_API_ATTR_KEY_BITS,
 UVERBS_API_METHOD_KEY_NUM_CORE = 22,
 UVERBS_API_METHOD_IS_WRITE = 30 << UVERBS_API_METHOD_KEY_SHIFT,
 UVERBS_API_METHOD_IS_WRITE_EX = 31 << UVERBS_API_METHOD_KEY_SHIFT,
 UVERBS_API_METHOD_KEY_NUM_DRIVER =
  (UVERBS_API_METHOD_IS_WRITE >> UVERBS_API_METHOD_KEY_SHIFT) -
  UVERBS_API_METHOD_KEY_NUM_CORE,
 UVERBS_API_METHOD_KEY_MASK = ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((UVERBS_API_METHOD_KEY_SHIFT) > (UVERBS_API_METHOD_KEY_BITS + UVERBS_API_METHOD_KEY_SHIFT - 1)) * 0l)) : (int *)8))), (UVERBS_API_METHOD_KEY_SHIFT) > (UVERBS_API_METHOD_KEY_BITS + UVERBS_API_METHOD_KEY_SHIFT - 1), 0))); })))) + (((~((0UL))) - (((1UL)) << (UVERBS_API_METHOD_KEY_SHIFT)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (UVERBS_API_METHOD_KEY_BITS + UVERBS_API_METHOD_KEY_SHIFT - 1))))),



 UVERBS_API_OBJ_KEY_BITS = 5,
 UVERBS_API_OBJ_KEY_SHIFT =
  UVERBS_API_METHOD_KEY_BITS + UVERBS_API_METHOD_KEY_SHIFT,
 UVERBS_API_OBJ_KEY_NUM_CORE = 20,
 UVERBS_API_OBJ_KEY_NUM_DRIVER =
  (1 << UVERBS_API_OBJ_KEY_BITS) - UVERBS_API_OBJ_KEY_NUM_CORE,
 UVERBS_API_OBJ_KEY_MASK = ((((int)(sizeof(struct { int:(-!!(__builtin_choose_expr( (sizeof(int) == sizeof(*(8 ? ((void *)((long)((UVERBS_API_OBJ_KEY_SHIFT) > (31)) * 0l)) : (int *)8))), (UVERBS_API_OBJ_KEY_SHIFT) > (31), 0))); })))) + (((~((0UL))) - (((1UL)) << (UVERBS_API_OBJ_KEY_SHIFT)) + 1) & (~((0UL)) >> ((8 * 4) - 1 - (31))))),


 UVERBS_API_KEY_ERR = 0xFFFFFFFF,
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_obj(u32 id)
{
 if (id & UVERBS_API_NS_FLAG) {
  id &= ~UVERBS_API_NS_FLAG;
  if (id >= UVERBS_API_OBJ_KEY_NUM_DRIVER)
   return UVERBS_API_KEY_ERR;
  id = id + UVERBS_API_OBJ_KEY_NUM_CORE;
 } else {
  if (id >= UVERBS_API_OBJ_KEY_NUM_CORE)
   return UVERBS_API_KEY_ERR;
 }

 return id << UVERBS_API_OBJ_KEY_SHIFT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) bool uapi_key_is_object(u32 key)
{
 return (key & ~UVERBS_API_OBJ_KEY_MASK) == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_ioctl_method(u32 id)
{
 if (id & UVERBS_API_NS_FLAG) {
  id &= ~UVERBS_API_NS_FLAG;
  if (id >= UVERBS_API_METHOD_KEY_NUM_DRIVER)
   return UVERBS_API_KEY_ERR;
  id = id + UVERBS_API_METHOD_KEY_NUM_CORE;
 } else {
  id++;
  if (id >= UVERBS_API_METHOD_KEY_NUM_CORE)
   return UVERBS_API_KEY_ERR;
 }

 return id << UVERBS_API_METHOD_KEY_SHIFT;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_write_method(u32 id)
{
 if (id >= UVERBS_API_WRITE_KEY_NUM)
  return UVERBS_API_KEY_ERR;
 return UVERBS_API_METHOD_IS_WRITE | id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_write_ex_method(u32 id)
{
 if (id >= UVERBS_API_WRITE_KEY_NUM)
  return UVERBS_API_KEY_ERR;
 return UVERBS_API_METHOD_IS_WRITE_EX | id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32
uapi_key_attr_to_ioctl_method(u32 attr_key)
{
 return attr_key &
        (UVERBS_API_OBJ_KEY_MASK | UVERBS_API_METHOD_KEY_MASK);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) bool uapi_key_is_ioctl_method(u32 key)
{
 unsigned int method = key & UVERBS_API_METHOD_KEY_MASK;

 return method != 0 && method < UVERBS_API_METHOD_IS_WRITE &&
        (key & UVERBS_API_ATTR_KEY_MASK) == 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) bool uapi_key_is_write_method(u32 key)
{
 return (key & UVERBS_API_METHOD_KEY_MASK) == UVERBS_API_METHOD_IS_WRITE;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) bool uapi_key_is_write_ex_method(u32 key)
{
 return (key & UVERBS_API_METHOD_KEY_MASK) ==
        UVERBS_API_METHOD_IS_WRITE_EX;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_attrs_start(u32 ioctl_method_key)
{

 return ioctl_method_key + 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_key_attr(u32 id)
{






 if (id & UVERBS_API_NS_FLAG) {
  id &= ~UVERBS_API_NS_FLAG;
  id++;
  if (id >= 1 << (UVERBS_API_ATTR_KEY_BITS - 1))
   return UVERBS_API_KEY_ERR;
  id = (id << 1) | 0;
 } else {
  if (id >= 1 << (UVERBS_API_ATTR_KEY_BITS - 1))
   return UVERBS_API_KEY_ERR;
  id = (id << 1) | 1;
 }

 return id;
}


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) bool uapi_key_is_attr(u32 key)
{
 unsigned int method = key & UVERBS_API_METHOD_KEY_MASK;

 return method != 0 && method < UVERBS_API_METHOD_IS_WRITE &&
        (key & UVERBS_API_ATTR_KEY_MASK) != 0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_bkey_attr(u32 attr_key)
{
 return attr_key - 1;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__const__)) u32 uapi_bkey_to_key_attr(u32 attr_bkey)
{
 return attr_bkey + 1;
}







struct uverbs_attr_def {
 u16 id;
 struct uverbs_attr_spec attr;
};

struct uverbs_method_def {
 u16 id;

 u32 flags;
 size_t num_attrs;
 const struct uverbs_attr_def * const (*attrs)[];
 int (*handler)(struct uverbs_attr_bundle *attrs);
};

struct uverbs_object_def {
 u16 id;
 const struct uverbs_obj_type *type_attrs;
 size_t num_methods;
 const struct uverbs_method_def * const (*methods)[];
};

enum uapi_definition_kind {
 UAPI_DEF_END = 0,
 UAPI_DEF_OBJECT_START,
 UAPI_DEF_WRITE,
 UAPI_DEF_CHAIN_OBJ_TREE,
 UAPI_DEF_CHAIN,
 UAPI_DEF_IS_SUPPORTED_FUNC,
 UAPI_DEF_IS_SUPPORTED_DEV_FN,
};

enum uapi_definition_scope {
 UAPI_SCOPE_OBJECT = 1,
 UAPI_SCOPE_METHOD = 2,
};

struct uapi_definition {
 u8 kind;
 u8 scope;
 union {
  struct {
   u16 object_id;
  } object_start;
  struct {
   u16 command_num;
   u8 is_ex:1;
   u8 has_udata:1;
   u8 has_resp:1;
   u8 req_size;
   u8 resp_size;
  } write;
 };

 union {
  bool (*func_is_supported)(struct ib_device *device);
  int (*func_write)(struct uverbs_attr_bundle *attrs);
  const struct uapi_definition *chain;
  const struct uverbs_object_def *chain_obj_tree;
  size_t needs_fn_offset;
 };
};
# 599 "../include/rdma/uverbs_ioctl.h"
struct uverbs_ptr_attr {




 union {
  void *ptr;
  u64 data;
 };
 u16 len;
 u16 uattr_idx;
 u8 enum_id;
};

struct uverbs_obj_attr {
 struct ib_uobject *uobject;
 const struct uverbs_api_attr *attr_elm;
};

struct uverbs_objs_arr_attr {
 struct ib_uobject **uobjects;
 u16 len;
};

struct uverbs_attr {
 union {
  struct uverbs_ptr_attr ptr_attr;
  struct uverbs_obj_attr obj_attr;
  struct uverbs_objs_arr_attr objs_arr_attr;
 };
};

struct uverbs_attr_bundle {

 union { struct { struct ib_udata driver_udata; struct ib_udata ucore; struct ib_uverbs_file *ufile; struct ib_ucontext *context; struct ib_uobject *uobject; unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))]; } ; struct uverbs_attr_bundle_hdr { struct ib_udata driver_udata; struct ib_udata ucore; struct ib_uverbs_file *ufile; struct ib_ucontext *context; struct ib_uobject *uobject; unsigned long attr_present[(((UVERBS_API_ATTR_BKEY_LEN) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))]; } hdr; } ;







 struct uverbs_attr attrs[];
};
_Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr), "struct member likely outside of struct_group_tagged()");


static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uverbs_attr_is_valid(const struct uverbs_attr_bundle *attrs_bundle,
     unsigned int idx)
{
 return ((__builtin_constant_p(uapi_bkey_attr(uapi_key_attr(idx))) && __builtin_constant_p((uintptr_t)(attrs_bundle->attr_present) != (uintptr_t)((void *)0)) && (uintptr_t)(attrs_bundle->attr_present) != (uintptr_t)((void *)0) && __builtin_constant_p(*(const unsigned long *)(attrs_bundle->attr_present))) ? const_test_bit(uapi_bkey_attr(uapi_key_attr(idx)), attrs_bundle->attr_present) : arch_test_bit(uapi_bkey_attr(uapi_key_attr(idx)), attrs_bundle->attr_present));

}
# 663 "../include/rdma/uverbs_ioctl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct uverbs_attr_bundle *
rdma_udata_to_uverbs_attr_bundle(struct ib_udata *udata)
{
 return ({ void *__mptr = (void *)(udata); _Static_assert(__builtin_types_compatible_p(typeof(*(udata)), typeof(((struct uverbs_attr_bundle *)0)->driver_udata)) || __builtin_types_compatible_p(typeof(*(udata)), typeof(void)), "pointer type mismatch in container_of()"); ((struct uverbs_attr_bundle *)(__mptr - __builtin_offsetof(struct uverbs_attr_bundle, driver_udata))); });
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) const struct uverbs_attr *uverbs_attr_get(const struct uverbs_attr_bundle *attrs_bundle,
       u16 idx)
{
 if (!uverbs_attr_is_valid(attrs_bundle, idx))
  return ERR_PTR(-2);

 return &attrs_bundle->attrs[uapi_bkey_attr(uapi_key_attr(idx))];
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int uverbs_attr_get_enum_id(const struct uverbs_attr_bundle *attrs_bundle,
       u16 idx)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

 if (IS_ERR(attr))
  return PTR_ERR(attr);

 return attr->ptr_attr.enum_id;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *uverbs_attr_get_obj(const struct uverbs_attr_bundle *attrs_bundle,
     u16 idx)
{
 const struct uverbs_attr *attr;

 attr = uverbs_attr_get(attrs_bundle, idx);
 if (IS_ERR(attr))
  return ERR_CAST(attr);

 return attr->obj_attr.uobject->object;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_uobject *uverbs_attr_get_uobject(const struct uverbs_attr_bundle *attrs_bundle,
        u16 idx)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

 if (IS_ERR(attr))
  return ERR_CAST(attr);

 return attr->obj_attr.uobject;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_attr_get_len(const struct uverbs_attr_bundle *attrs_bundle, u16 idx)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

 if (IS_ERR(attr))
  return PTR_ERR(attr);

 return attr->ptr_attr.len;
}

void uverbs_finalize_uobj_create(const struct uverbs_attr_bundle *attrs_bundle,
     u16 idx);
# 739 "../include/rdma/uverbs_ioctl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_attr_ptr_get_array_size(struct uverbs_attr_bundle *attrs, u16 idx,
          size_t elem_size)
{
 int size = uverbs_attr_get_len(attrs, idx);

 if (size < 0)
  return size;

 if (size % elem_size)
  return -22;

 return size / elem_size;
}
# 762 "../include/rdma/uverbs_ioctl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int uverbs_attr_get_uobjs_arr(
 const struct uverbs_attr_bundle *attrs_bundle, u16 attr_idx,
 struct ib_uobject ***arr)
{
 const struct uverbs_attr *attr =
   uverbs_attr_get(attrs_bundle, attr_idx);

 if (IS_ERR(attr)) {
  *arr = ((void *)0);
  return 0;
 }

 *arr = attr->objs_arr_attr.uobjects;

 return attr->objs_arr_attr.len;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) bool uverbs_attr_ptr_is_inline(const struct uverbs_attr *attr)
{
 return attr->ptr_attr.len <= sizeof(attr->ptr_attr.data);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *uverbs_attr_get_alloced_ptr(
 const struct uverbs_attr_bundle *attrs_bundle, u16 idx)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

 if (IS_ERR(attr))
  return (void *)attr;

 return uverbs_attr_ptr_is_inline(attr) ? (void *)&attr->ptr_attr.data :
       attr->ptr_attr.ptr;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int _uverbs_copy_from(void *to,
        const struct uverbs_attr_bundle *attrs_bundle,
        size_t idx,
        size_t size)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

 if (IS_ERR(attr))
  return PTR_ERR(attr);






 if (__builtin_expect(!!(size < attr->ptr_attr.len), 0))
  return -22;

 if (uverbs_attr_ptr_is_inline(attr))
  memcpy(to, &attr->ptr_attr.data, attr->ptr_attr.len);
 else if (copy_from_user(to, ( { ({ u64 __dummy; typeof((attr->ptr_attr.data)) __dummy2; (void)(&__dummy == &__dummy2); 1; }); (void *)(uintptr_t)(attr->ptr_attr.data); } ),
    attr->ptr_attr.len))
  return -14;

 return 0;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int _uverbs_copy_from_or_zero(void *to,
         const struct uverbs_attr_bundle *attrs_bundle,
         size_t idx,
         size_t size)
{
 const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);
 size_t min_size;

 if (IS_ERR(attr))
  return PTR_ERR(attr);

 min_size = ({ size_t __UNIQUE_ID_x_557 = (size); size_t __UNIQUE_ID_y_558 = (attr->ptr_attr.len); ((__UNIQUE_ID_x_557) < (__UNIQUE_ID_y_558) ? (__UNIQUE_ID_x_557) : (__UNIQUE_ID_y_558)); });

 if (uverbs_attr_ptr_is_inline(attr))
  memcpy(to, &attr->ptr_attr.data, min_size);
 else if (copy_from_user(to, ( { ({ u64 __dummy; typeof((attr->ptr_attr.data)) __dummy2; (void)(&__dummy == &__dummy2); 1; }); (void *)(uintptr_t)(attr->ptr_attr.data); } ),
    min_size))
  return -14;

 if (size > min_size)
  memset(to + min_size, 0, size - min_size);

 return 0;
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_ucontext *
ib_uverbs_get_ucontext(const struct uverbs_attr_bundle *attrs)
{
 return ib_uverbs_get_ucontext_file(attrs->ufile);
}
# 902 "../include/rdma/uverbs_ioctl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_get_flags64(u64 *to, const struct uverbs_attr_bundle *attrs_bundle,
     size_t idx, u64 allowed_bits)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_get_flags32(u32 *to, const struct uverbs_attr_bundle *attrs_bundle,
     size_t idx, u64 allowed_bits)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int uverbs_copy_to(const struct uverbs_attr_bundle *attrs_bundle,
     size_t idx, const void *from, size_t size)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__malloc__)) void *uverbs_alloc(struct uverbs_attr_bundle *bundle,
       size_t size)
{
 return ERR_PTR(-22);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__malloc__)) void *uverbs_zalloc(struct uverbs_attr_bundle *bundle,
        size_t size)
{
 return ERR_PTR(-22);
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
_uverbs_get_const(s64 *to, const struct uverbs_attr_bundle *attrs_bundle,
    size_t idx, s64 lower_bound, u64 upper_bound,
    s64 *def_val)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_copy_to_struct_or_zero(const struct uverbs_attr_bundle *bundle,
         size_t idx, const void *from, size_t size)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
_uverbs_get_const_signed(s64 *to,
    const struct uverbs_attr_bundle *attrs_bundle,
    size_t idx, s64 lower_bound, u64 upper_bound,
    s64 *def_val)
{
 return -22;
}
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
_uverbs_get_const_unsigned(u64 *to,
      const struct uverbs_attr_bundle *attrs_bundle,
      size_t idx, u64 upper_bound, u64 *def_val)
{
 return -22;
}
# 1015 "../include/rdma/uverbs_ioctl.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) int
uverbs_get_raw_fd(int *to, const struct uverbs_attr_bundle *attrs_bundle,
    size_t idx)
{
 return ({ s64 _val; int _ret = _uverbs_get_const_signed(&_val, attrs_bundle, idx, ((typeof(typeof(*(to))))((typeof(typeof(*(to))))-((typeof(typeof(typeof(*(to)))))((((typeof(typeof(typeof(*(to)))))1 << (8*sizeof(typeof(typeof(typeof(*(to))))) - 1 - (((typeof(typeof(typeof(*(to)))))(-1)) < ( typeof(typeof(typeof(*(to)))))1))) - 1) + ((typeof(typeof(typeof(*(to)))))1 << (8*sizeof(typeof(typeof(typeof(*(to))))) - 1 - (((typeof(typeof(typeof(*(to)))))(-1)) < ( typeof(typeof(typeof(*(to)))))1)))))-(typeof(typeof(*(to))))1)), ((typeof(typeof(*(to))))((((typeof(typeof(*(to))))1 << (8*sizeof(typeof(typeof(*(to)))) - 1 - (((typeof(typeof(*(to))))(-1)) < ( typeof(typeof(*(to))))1))) - 1) + ((typeof(typeof(*(to))))1 << (8*sizeof(typeof(typeof(*(to)))) - 1 - (((typeof(typeof(*(to))))(-1)) < ( typeof(typeof(*(to))))1))))), ((void *)0)); (*(to)) = _val; _ret; });
}
# 11 "../include/rdma/uverbs_std_types.h" 2
# 34 "../include/rdma/uverbs_std_types.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *_uobj_get_obj_read(struct ib_uobject *uobj)
{
 if (IS_ERR(uobj))
  return ((void *)0);
 return uobj->object;
}
# 49 "../include/rdma/uverbs_std_types.h"
int __uobj_perform_destroy(const struct uverbs_api_object *obj, u32 id,
      struct uverbs_attr_bundle *attrs);




struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj,
          u32 id, struct uverbs_attr_bundle *attrs);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uobj_put_destroy(struct ib_uobject *uobj)
{
 rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_DESTROY);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uobj_put_read(struct ib_uobject *uobj)
{
 rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ);
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uobj_put_write(struct ib_uobject *uobj)
{
 rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uobj_alloc_abort(struct ib_uobject *uobj,
        struct uverbs_attr_bundle *attrs)
{
 rdma_alloc_abort_uobject(uobj, attrs, false);
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uobj_finalize_uobj_create(struct ib_uobject *uobj,
          struct uverbs_attr_bundle *attrs)
{







 ({ int __ret_warn_on = !!(attrs->uobject); if (__builtin_expect(!!(__ret_warn_on), 0)) do { do { } while(0); warn_slowpath_fmt("include/rdma/uverbs_std_types.h", 96, 9, ((void *)0)); do { } while(0); } while (0); __builtin_expect(!!(__ret_warn_on), 0); });
 attrs->uobject = uobj;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_uobject *
__uobj_alloc(const struct uverbs_api_object *obj,
      struct uverbs_attr_bundle *attrs, struct ib_device **ib_dev)
{
 struct ib_uobject *uobj = rdma_alloc_begin_uobject(obj, attrs);

 if (!IS_ERR(uobj))
  *ib_dev = attrs->context->device;
 return uobj;
}




static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void uverbs_flow_action_fill_action(struct ib_flow_action *action,
        struct ib_uobject *uobj,
        struct ib_device *ib_dev,
        enum ib_flow_action_type type)
{
 atomic_set(&action->usecnt, 0);
 action->device = ib_dev;
 action->type = type;
 action->uobject = uobj;
 uobj->object = action;
}

struct ib_uflow_resources {
 size_t max;
 size_t num;
 size_t collection_num;
 size_t counters_num;
 struct ib_counters **counters;
 struct ib_flow_action **collection;
};

struct ib_uflow_object {
 struct ib_uobject uobject;
 struct ib_uflow_resources *resources;
};

struct ib_uflow_resources *flow_resources_alloc(size_t num_specs);
void flow_resources_add(struct ib_uflow_resources *uflow_res,
   enum ib_flow_spec_type type,
   void *ibobj);
void ib_uverbs_flow_resources_free(struct ib_uflow_resources *uflow_res);

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_set_flow(struct ib_uobject *uobj, struct ib_flow *ibflow,
          struct ib_qp *qp, struct ib_device *device,
          struct ib_uflow_resources *uflow_res)
{
 struct ib_uflow_object *uflow;

 uobj->object = ibflow;
 ibflow->uobject = uobj;

 if (qp) {
  atomic_inc(&qp->usecnt);
  ibflow->qp = qp;
 }

 ibflow->device = device;
 uflow = ({ void *__mptr = (void *)(uobj); _Static_assert(__builtin_types_compatible_p(typeof(*(uobj)), typeof(((typeof(*uflow) *)0)->uobject)) || __builtin_types_compatible_p(typeof(*(uobj)), typeof(void)), "pointer type mismatch in container_of()"); ((typeof(*uflow) *)(__mptr - __builtin_offsetof(typeof(*uflow), uobject))); });
 uflow->resources = uflow_res;
}

struct uverbs_api_object {
 const struct uverbs_obj_type *type_attrs;
 const struct uverbs_obj_type_class *type_class;
 u8 disabled:1;
 u32 id;
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 uobj_get_object_id(struct ib_uobject *uobj)
{
 return uobj->uapi_object->id;
}
# 50 "../drivers/infiniband/core/uverbs.h" 2


# 1 "../include/rdma/uverbs_named_ioctl.h" 1
# 53 "../drivers/infiniband/core/uverbs.h" 2

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
ib_uverbs_init_udata(struct ib_udata *udata,
       const void *ibuf,
       void *obuf,
       size_t ilen, size_t olen)
{
 udata->inbuf = ibuf;
 udata->outbuf = obuf;
 udata->inlen = ilen;
 udata->outlen = olen;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void
ib_uverbs_init_udata_buf_or_null(struct ib_udata *udata,
     const void *ibuf,
     void *obuf,
     size_t ilen, size_t olen)
{
 ib_uverbs_init_udata(udata,
        ilen ? ibuf : ((void *)0), olen ? obuf : ((void *)0),
        ilen, olen);
}
# 99 "../drivers/infiniband/core/uverbs.h"
struct ib_uverbs_device {
 refcount_t refcount;
 u32 num_comp_vectors;
 struct completion comp;
 struct device dev;

 const struct attribute_group *groups[2];
 struct ib_device *ib_dev;
 int devnum;
 struct cdev cdev;
 struct rb_root xrcd_tree;
 struct mutex xrcd_tree_mutex;
 struct srcu_struct disassociate_srcu;
 struct mutex lists_mutex;
 struct list_head uverbs_file_list;
 struct uverbs_api *uapi;
};

struct ib_uverbs_event_queue {
 spinlock_t lock;
 int is_closed;
 wait_queue_head_t poll_wait;
 struct fasync_struct *async_queue;
 struct list_head event_list;
};

struct ib_uverbs_async_event_file {
 struct ib_uobject uobj;
 struct ib_uverbs_event_queue ev_queue;
 struct ib_event_handler event_handler;
};

struct ib_uverbs_completion_event_file {
 struct ib_uobject uobj;
 struct ib_uverbs_event_queue ev_queue;
};

struct ib_uverbs_file {
 struct kref ref;
 struct ib_uverbs_device *device;
 struct mutex ucontext_lock;




 struct ib_ucontext *ucontext;
 struct ib_uverbs_async_event_file *default_async_file;
 struct list_head list;







 struct rw_semaphore hw_destroy_rwsem;
 spinlock_t uobjects_lock;
 struct list_head uobjects;

 struct mutex umap_lock;
 struct list_head umaps;
 struct page *disassociate_page;

 struct xarray idr;
};

struct ib_uverbs_event {
 union {
  struct ib_uverbs_async_event_desc async;
  struct ib_uverbs_comp_event_desc comp;
 } desc;
 struct list_head list;
 struct list_head obj_list;
 u32 *counter;
};

struct ib_uverbs_mcast_entry {
 struct list_head list;
 union ib_gid gid;
 u16 lid;
};

struct ib_uevent_object {
 struct ib_uobject uobject;
 struct ib_uverbs_async_event_file *event_file;

 struct list_head event_list;
 u32 events_reported;
};

struct ib_uxrcd_object {
 struct ib_uobject uobject;
 atomic_t refcnt;
};

struct ib_usrq_object {
 struct ib_uevent_object uevent;
 struct ib_uxrcd_object *uxrcd;
};

struct ib_uqp_object {
 struct ib_uevent_object uevent;

 struct mutex mcast_lock;
 struct list_head mcast_list;
 struct ib_uxrcd_object *uxrcd;
};

struct ib_uwq_object {
 struct ib_uevent_object uevent;
};

struct ib_ucq_object {
 struct ib_uevent_object uevent;
 struct list_head comp_list;
 u32 comp_events_reported;
};

extern const struct file_operations uverbs_event_fops;
extern const struct file_operations uverbs_async_event_fops;
void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue);
void ib_uverbs_init_async_event_file(struct ib_uverbs_async_event_file *ev_file);
void ib_uverbs_free_event_queue(struct ib_uverbs_event_queue *event_queue);
void ib_uverbs_flow_resources_free(struct ib_uflow_resources *uflow_res);
int uverbs_async_event_release(struct inode *inode, struct file *filp);

int ib_alloc_ucontext(struct uverbs_attr_bundle *attrs);
int ib_init_ucontext(struct uverbs_attr_bundle *attrs);

void ib_uverbs_release_ucq(struct ib_uverbs_completion_event_file *ev_file,
      struct ib_ucq_object *uobj);
void ib_uverbs_release_uevent(struct ib_uevent_object *uobj);
void ib_uverbs_release_file(struct kref *ref);
void ib_uverbs_async_handler(struct ib_uverbs_async_event_file *async_file,
        __u64 element, __u64 event,
        struct list_head *obj_list, u32 *counter);

void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_wq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr);
int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, struct ib_xrcd *xrcd,
      enum rdma_remove_reason why,
      struct uverbs_attr_bundle *attrs);

int uverbs_dealloc_mw(struct ib_mw *mw);
void ib_uverbs_detach_umcast(struct ib_qp *qp,
        struct ib_uqp_object *uobj);

long ib_uverbs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);

struct ib_uverbs_flow_spec {
 union {
  union {
   struct ib_uverbs_flow_spec_hdr hdr;
   struct {
    __u32 type;
    __u16 size;
    __u16 reserved;
   };
  };
  struct ib_uverbs_flow_spec_eth eth;
  struct ib_uverbs_flow_spec_ipv4 ipv4;
  struct ib_uverbs_flow_spec_esp esp;
  struct ib_uverbs_flow_spec_tcp_udp tcp_udp;
  struct ib_uverbs_flow_spec_ipv6 ipv6;
  struct ib_uverbs_flow_spec_action_tag flow_tag;
  struct ib_uverbs_flow_spec_action_drop drop;
  struct ib_uverbs_flow_spec_action_handle action;
  struct ib_uverbs_flow_spec_action_count flow_count;
 };
};

int ib_uverbs_kern_spec_to_ib_spec_filter(enum ib_flow_spec_type type,
       const void *kern_spec_mask,
       const void *kern_spec_val,
       size_t kern_filter_sz,
       union ib_flow_spec *ib_spec);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 make_port_cap_flags(const struct ib_port_attr *attr)
{
 u32 res;






 res = attr->port_cap_flags & ~(u32)IB_UVERBS_PCF_IP_BASED_GIDS;

 if (attr->ip_gids)
  res |= IB_UVERBS_PCF_IP_BASED_GIDS;

 return res;
}

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) struct ib_uverbs_async_event_file *
ib_uverbs_get_async_event(struct uverbs_attr_bundle *attrs,
     u16 id)
{
 struct ib_uobject *async_ev_file_uobj;
 struct ib_uverbs_async_event_file *async_ev_file;

 async_ev_file_uobj = uverbs_attr_get_uobject(attrs, id);
 if (IS_ERR(async_ev_file_uobj))
  async_ev_file = ({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_559(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof(attrs->ufile->default_async_file) == sizeof(char) || sizeof(attrs->ufile->default_async_file) == sizeof(short) || sizeof(attrs->ufile->default_async_file) == sizeof(int) || sizeof(attrs->ufile->default_async_file) == sizeof(long)) || sizeof(attrs->ufile->default_async_file) == sizeof(long long))) __compiletime_assert_559(); } while (0); (*(const volatile typeof( _Generic((attrs->ufile->default_async_file), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: (attrs->ufile->default_async_file))) *)&(attrs->ufile->default_async_file)); });
 else
  async_ev_file = ({ void *__mptr = (void *)(async_ev_file_uobj); _Static_assert(__builtin_types_compatible_p(typeof(*(async_ev_file_uobj)), typeof(((struct ib_uverbs_async_event_file *)0)->uobj)) || __builtin_types_compatible_p(typeof(*(async_ev_file_uobj)), typeof(void)), "pointer type mismatch in container_of()"); ((struct ib_uverbs_async_event_file *)(__mptr - __builtin_offsetof(struct ib_uverbs_async_event_file, uobj))); });


 if (async_ev_file)
  uverbs_uobject_get(&async_ev_file->uobj);
 return async_ev_file;
}

void copy_port_attr_to_resp(struct ib_port_attr *attr,
       struct ib_uverbs_query_port_resp *resp,
       struct ib_device *ib_dev, u8 port_num);
# 9 "../drivers/infiniband/core/ib_core_uverbs.c" 2
# 1 "../drivers/infiniband/core/core_priv.h" 1
# 40 "../drivers/infiniband/core/core_priv.h"
# 1 "../include/net/netns/generic.h" 1
# 29 "../include/net/netns/generic.h"
struct net_generic {
 union {
  struct {
   unsigned int len;
   struct callback_head rcu;
  } s;

  struct { struct { } __empty_ptr; void * ptr[]; };
 };
};

static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void *net_generic(const struct net *net, unsigned int id)
{
 struct net_generic *ng;
 void *ptr;

 rcu_read_lock();
 ng = ({ typeof(*(net->gen)) *__UNIQUE_ID_rcu560 = (typeof(*(net->gen)) *)({ do { __attribute__((__noreturn__)) extern void __compiletime_assert_561(void) __attribute__((__error__("Unsupported access size for {READ,WRITE}_ONCE()."))); if (!((sizeof((net->gen)) == sizeof(char) || sizeof((net->gen)) == sizeof(short) || sizeof((net->gen)) == sizeof(int) || sizeof((net->gen)) == sizeof(long)) || sizeof((net->gen)) == sizeof(long long))) __compiletime_assert_561(); } while (0); (*(const volatile typeof( _Generic(((net->gen)), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, signed short: (signed short)0, unsigned int: (unsigned int)0, signed int: (signed int)0, unsigned long: (unsigned long)0, signed long: (signed long)0, unsigned long long: (unsigned long long)0, signed long long: (signed long long)0, default: ((net->gen)))) *)&((net->gen))); }); do { static bool __attribute__((__section__(".data.unlikely"))) __warned; if (debug_lockdep_rcu_enabled() && (!((0) || rcu_read_lock_held())) && debug_lockdep_rcu_enabled() && !__warned) { __warned = true; lockdep_rcu_suspicious("include/net/netns/generic.h", 46, "suspicious rcu_dereference_check() usage"); } } while (0); ; ((typeof(*(net->gen)) *)(__UNIQUE_ID_rcu560)); });
 ptr = ng->ptr[id];
 rcu_read_unlock();

 return ptr;
}
# 41 "../drivers/infiniband/core/core_priv.h" 2


# 1 "../include/rdma/opa_addr.h" 1








# 1 "../include/rdma/opa_smi.h" 1








# 1 "../include/rdma/ib_mad.h" 1
# 120 "../include/rdma/ib_mad.h"
enum {
 IB_MGMT_MAD_HDR = 24,
 IB_MGMT_MAD_DATA = 232,
 IB_MGMT_RMPP_HDR = 36,
 IB_MGMT_RMPP_DATA = 220,
 IB_MGMT_VENDOR_HDR = 40,
 IB_MGMT_VENDOR_DATA = 216,
 IB_MGMT_SA_HDR = 56,
 IB_MGMT_SA_DATA = 200,
 IB_MGMT_DEVICE_HDR = 64,
 IB_MGMT_DEVICE_DATA = 192,
 IB_MGMT_MAD_SIZE = IB_MGMT_MAD_HDR + IB_MGMT_MAD_DATA,
 OPA_MGMT_MAD_DATA = 2024,
 OPA_MGMT_RMPP_DATA = 2012,
 OPA_MGMT_MAD_SIZE = IB_MGMT_MAD_HDR + OPA_MGMT_MAD_DATA,
};

struct ib_mad_hdr {
 u8 base_version;
 u8 mgmt_class;
 u8 class_version;
 u8 method;
 __be16 status;
 __be16 class_specific;
 __be64 tid;
 __be16 attr_id;
 __be16 resv;
 __be32 attr_mod;
};

struct ib_rmpp_hdr {
 u8 rmpp_version;
 u8 rmpp_type;
 u8 rmpp_rtime_flags;
 u8 rmpp_status;
 __be32 seg_num;
 __be32 paylen_newwin;
};

typedef u64 ib_sa_comp_mask;
# 169 "../include/rdma/ib_mad.h"
struct ib_sa_hdr {
 __be64 sm_key;
 __be16 attr_offset;
 __be16 reserved;
 ib_sa_comp_mask comp_mask;
} __attribute__((__packed__));

struct ib_mad {
 struct ib_mad_hdr mad_hdr;
 u8 data[IB_MGMT_MAD_DATA];
};

struct opa_mad {
 struct ib_mad_hdr mad_hdr;
 u8 data[OPA_MGMT_MAD_DATA];
};

struct ib_rmpp_mad {
 struct ib_mad_hdr mad_hdr;
 struct ib_rmpp_hdr rmpp_hdr;
 u8 data[IB_MGMT_RMPP_DATA];
};

struct opa_rmpp_mad {
 struct ib_mad_hdr mad_hdr;
 struct ib_rmpp_hdr rmpp_hdr;
 u8 data[OPA_MGMT_RMPP_DATA];
};

struct ib_sa_mad {
 struct ib_mad_hdr mad_hdr;
 struct ib_rmpp_hdr rmpp_hdr;
 struct ib_sa_hdr sa_hdr;
 u8 data[IB_MGMT_SA_DATA];
} __attribute__((__packed__));

struct ib_vendor_mad {
 struct ib_mad_hdr mad_hdr;
 struct ib_rmpp_hdr rmpp_hdr;
 u8 reserved;
 u8 oui[3];
 u8 data[IB_MGMT_VENDOR_DATA];
};






struct ib_class_port_info {
 u8 base_version;
 u8 class_version;
 __be16 capability_mask;

 __be32 cap_mask2_resp_time;
 u8 redirect_gid[16];
 __be32 redirect_tcslfl;
 __be16 redirect_lid;
 __be16 redirect_pkey;
 __be32 redirect_qp;
 __be32 redirect_qkey;
 u8 trap_gid[16];
 __be32 trap_tcslfl;
 __be16 trap_lid;
 __be16 trap_pkey;
 __be32 trap_hlqp;
 __be32 trap_qkey;
};


enum ib_port_capability_mask_bits {
 IB_PORT_SM = 1 << 1,
 IB_PORT_NOTICE_SUP = 1 << 2,
 IB_PORT_TRAP_SUP = 1 << 3,
 IB_PORT_OPT_IPD_SUP = 1 << 4,
 IB_PORT_AUTO_MIGR_SUP = 1 << 5,
 IB_PORT_SL_MAP_SUP = 1 << 6,
 IB_PORT_MKEY_NVRAM = 1 << 7,
 IB_PORT_PKEY_NVRAM = 1 << 8,
 IB_PORT_LED_INFO_SUP = 1 << 9,
 IB_PORT_SM_DISABLED = 1 << 10,
 IB_PORT_SYS_IMAGE_GUID_SUP = 1 << 11,
 IB_PORT_PKEY_SW_EXT_PORT_TRAP_SUP = 1 << 12,
 IB_PORT_EXTENDED_SPEEDS_SUP = 1 << 14,
 IB_PORT_CAP_MASK2_SUP = 1 << 15,
 IB_PORT_CM_SUP = 1 << 16,
 IB_PORT_SNMP_TUNNEL_SUP = 1 << 17,
 IB_PORT_REINIT_SUP = 1 << 18,
 IB_PORT_DEVICE_MGMT_SUP = 1 << 19,
 IB_PORT_VENDOR_CLASS_SUP = 1 << 20,
 IB_PORT_DR_NOTICE_SUP = 1 << 21,
 IB_PORT_CAP_MASK_NOTICE_SUP = 1 << 22,
 IB_PORT_BOOT_MGMT_SUP = 1 << 23,
 IB_PORT_LINK_LATENCY_SUP = 1 << 24,
 IB_PORT_CLIENT_REG_SUP = 1 << 25,
 IB_PORT_OTHER_LOCAL_CHANGES_SUP = 1 << 26,
 IB_PORT_LINK_SPEED_WIDTH_TABLE_SUP = 1 << 27,
 IB_PORT_VENDOR_SPECIFIC_MADS_TABLE_SUP = 1 << 28,
 IB_PORT_MCAST_PKEY_TRAP_SUPPRESSION_SUP = 1 << 29,
 IB_PORT_MCAST_FDB_TOP_SUP = 1 << 30,
 IB_PORT_HIERARCHY_INFO_SUP = 1ULL << 31,
};

enum ib_port_capability_mask2_bits {
 IB_PORT_SET_NODE_DESC_SUP = 1 << 0,
 IB_PORT_EX_PORT_INFO_EX_SUP = 1 << 1,
 IB_PORT_VIRT_SUP = 1 << 2,
 IB_PORT_SWITCH_PORT_STATE_TABLE_SUP = 1 << 3,
 IB_PORT_LINK_WIDTH_2X_SUP = 1 << 4,
 IB_PORT_LINK_SPEED_HDR_SUP = 1 << 5,
 IB_PORT_LINK_SPEED_NDR_SUP = 1 << 10,
 IB_PORT_EXTENDED_SPEEDS2_SUP = 1 << 11,
 IB_PORT_LINK_SPEED_XDR_SUP = 1 << 12,
};



struct opa_class_port_info {
 u8 base_version;
 u8 class_version;
 __be16 cap_mask;
 __be32 cap_mask2_resp_time;

 u8 redirect_gid[16];
 __be32 redirect_tc_fl;
 __be32 redirect_lid;
 __be32 redirect_sl_qp;
 __be32 redirect_qkey;

 u8 trap_gid[16];
 __be32 trap_tc_fl;
 __be32 trap_lid;
 __be32 trap_hl_qp;
 __be32 trap_qkey;

 __be16 trap_pkey;
 __be16 redirect_pkey;

 u8 trap_sl_rsvd;
 u8 reserved[3];
} __attribute__((__packed__));






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 ib_get_cpi_resp_time(struct ib_class_port_info *cpi)
{
 return (u8)((__u32)(__builtin_constant_p(( __u32)(__be32)(cpi->cap_mask2_resp_time)) ? ((__u32)( (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(cpi->cap_mask2_resp_time))) &
      0x1F);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_set_cpi_resp_time(struct ib_class_port_info *cpi,
     u8 rtime)
{
 cpi->cap_mask2_resp_time =
  (cpi->cap_mask2_resp_time &
   (( __be32)(__u32)(__builtin_constant_p((~0x1F)) ? ((__u32)( (((__u32)((~0x1F)) & (__u32)0x000000ffUL) << 24) | (((__u32)((~0x1F)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((~0x1F)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((~0x1F)) & (__u32)0xff000000UL) >> 24))) : __fswab32((~0x1F))))) |
  (( __be32)(__u32)(__builtin_constant_p((rtime & 0x1F)) ? ((__u32)( (((__u32)((rtime & 0x1F)) & (__u32)0x000000ffUL) << 24) | (((__u32)((rtime & 0x1F)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((rtime & 0x1F)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((rtime & 0x1F)) & (__u32)0xff000000UL) >> 24))) : __fswab32((rtime & 0x1F))));
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 ib_get_cpi_capmask2(struct ib_class_port_info *cpi)
{
 return ((__u32)(__builtin_constant_p(( __u32)(__be32)(cpi->cap_mask2_resp_time)) ? ((__u32)( (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(cpi->cap_mask2_resp_time))) >>
  5);
}







static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_set_cpi_capmask2(struct ib_class_port_info *cpi,
           u32 capmask2)
{
 cpi->cap_mask2_resp_time =
  (cpi->cap_mask2_resp_time &
   (( __be32)(__u32)(__builtin_constant_p((0x1F)) ? ((__u32)( (((__u32)((0x1F)) & (__u32)0x000000ffUL) << 24) | (((__u32)((0x1F)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((0x1F)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((0x1F)) & (__u32)0xff000000UL) >> 24))) : __fswab32((0x1F))))) |
  (( __be32)(__u32)(__builtin_constant_p((capmask2 << 5)) ? ((__u32)( (((__u32)((capmask2 << 5)) & (__u32)0x000000ffUL) << 24) | (((__u32)((capmask2 << 5)) & (__u32)0x0000ff00UL) << 8) | (((__u32)((capmask2 << 5)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)((capmask2 << 5)) & (__u32)0xff000000UL) >> 24))) : __fswab32((capmask2 << 5))));

}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u32 opa_get_cpi_capmask2(struct opa_class_port_info *cpi)
{
 return ((__u32)(__builtin_constant_p(( __u32)(__be32)(cpi->cap_mask2_resp_time)) ? ((__u32)( (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x000000ffUL) << 24) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x0000ff00UL) << 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0x00ff0000UL) >> 8) | (((__u32)(( __u32)(__be32)(cpi->cap_mask2_resp_time)) & (__u32)0xff000000UL) >> 24))) : __fswab32(( __u32)(__be32)(cpi->cap_mask2_resp_time))) >>
  5);
}

struct ib_mad_notice_attr {
 u8 generic_type;
 u8 prod_type_msb;
 __be16 prod_type_lsb;
 __be16 trap_num;
 __be16 issuer_lid;
 __be16 toggle_count;

 union {
  struct {
   u8 details[54];
  } raw_data;

  struct {
   __be16 reserved;
   __be16 lid;
   u8 port_num;
  } __attribute__((__packed__)) ntc_129_131;

  struct {
   __be16 reserved;
   __be16 lid;
   u8 reserved2;
   u8 local_changes;
   __be32 new_cap_mask;
   u8 reserved3;
   u8 change_flags;
  } __attribute__((__packed__)) ntc_144;

  struct {
   __be16 reserved;
   __be16 lid;
   __be16 reserved2;
   __be64 new_sys_guid;
  } __attribute__((__packed__)) ntc_145;

  struct {
   __be16 reserved;
   __be16 lid;
   __be16 dr_slid;
   u8 method;
   u8 reserved2;
   __be16 attr_id;
   __be32 attr_mod;
   __be64 mkey;
   u8 reserved3;
   u8 dr_trunc_hop;
   u8 dr_rtn_path[30];
  } __attribute__((__packed__)) ntc_256;

  struct {
   __be16 reserved;
   __be16 lid1;
   __be16 lid2;
   __be32 key;
   __be32 sl_qp1;
   __be32 qp2;
   union ib_gid gid1;
   union ib_gid gid2;
  } __attribute__((__packed__)) ntc_257_258;

 } details;
};
# 465 "../include/rdma/ib_mad.h"
struct ib_mad_send_buf {
 struct ib_mad_send_buf *next;
 void *mad;
 struct ib_mad_agent *mad_agent;
 struct ib_ah *ah;
 void *context[2];
 int hdr_len;
 int data_len;
 int seg_count;
 int seg_size;
 int seg_rmpp_size;
 int timeout_ms;
 int retries;
};





int ib_response_mad(const struct ib_mad_hdr *hdr);





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 ib_get_rmpp_resptime(struct ib_rmpp_hdr *rmpp_hdr)
{
 return rmpp_hdr->rmpp_rtime_flags >> 3;
}





static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) u8 ib_get_rmpp_flags(struct ib_rmpp_hdr *rmpp_hdr)
{
 return rmpp_hdr->rmpp_rtime_flags & 0x7;
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_set_rmpp_resptime(struct ib_rmpp_hdr *rmpp_hdr, u8 rtime)
{
 rmpp_hdr->rmpp_rtime_flags = ib_get_rmpp_flags(rmpp_hdr) | (rtime << 3);
}






static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_set_rmpp_flags(struct ib_rmpp_hdr *rmpp_hdr, u8 flags)
{
 rmpp_hdr->rmpp_rtime_flags = (rmpp_hdr->rmpp_rtime_flags & 0xF8) |
         (flags & 0x7);
}

struct ib_mad_agent;
struct ib_mad_send_wc;
struct ib_mad_recv_wc;






typedef void (*ib_mad_send_handler)(struct ib_mad_agent *mad_agent,
        struct ib_mad_send_wc *mad_send_wc);
# 548 "../include/rdma/ib_mad.h"
typedef void (*ib_mad_recv_handler)(struct ib_mad_agent *mad_agent,
        struct ib_mad_send_buf *send_buf,
        struct ib_mad_recv_wc *mad_recv_wc);
# 567 "../include/rdma/ib_mad.h"
enum {
 IB_MAD_USER_RMPP = IB_USER_MAD_USER_RMPP,
};
struct ib_mad_agent {
 struct ib_device *device;
 struct ib_qp *qp;
 ib_mad_recv_handler recv_handler;
 ib_mad_send_handler send_handler;
 void *context;
 u32 hi_tid;
 u32 flags;
 void *security;
 struct list_head mad_agent_sec_list;
 u8 port_num;
 u8 rmpp_version;
 bool smp_allowed;
};
# 592 "../include/rdma/ib_mad.h"
struct ib_mad_send_wc {
 struct ib_mad_send_buf *send_buf;
 enum ib_wc_status status;
 u32 vendor_err;
};
# 606 "../include/rdma/ib_mad.h"
struct ib_mad_recv_buf {
 struct list_head list;
 struct ib_grh *grh;
 union {
  struct ib_mad *mad;
  struct opa_mad *opa_mad;
 };
};
# 626 "../include/rdma/ib_mad.h"
struct ib_mad_recv_wc {
 struct ib_wc *wc;
 struct ib_mad_recv_buf recv_buf;
 struct list_head rmpp_list;
 int mad_len;
 size_t mad_seg_size;
};
# 647 "../include/rdma/ib_mad.h"
struct ib_mad_reg_req {
 u8 mgmt_class;
 u8 mgmt_class_version;
 u8 oui[3];
 unsigned long method_mask[(((128) + ((sizeof(long) * 8)) - 1) / ((sizeof(long) * 8)))];
};
# 673 "../include/rdma/ib_mad.h"
struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
        u32 port_num,
        enum ib_qp_type qp_type,
        struct ib_mad_reg_req *mad_reg_req,
        u8 rmpp_version,
        ib_mad_send_handler send_handler,
        ib_mad_recv_handler recv_handler,
        void *context,
        u32 registration_flags);







void ib_unregister_mad_agent(struct ib_mad_agent *mad_agent);
# 710 "../include/rdma/ib_mad.h"
int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
       struct ib_mad_send_buf **bad_send_buf);
# 721 "../include/rdma/ib_mad.h"
void ib_free_recv_mad(struct ib_mad_recv_wc *mad_recv_wc);
# 731 "../include/rdma/ib_mad.h"
int ib_modify_mad(struct ib_mad_send_buf *send_buf, u32 timeout_ms);
# 740 "../include/rdma/ib_mad.h"
static inline __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) void ib_cancel_mad(struct ib_mad_send_buf *send_buf)
{
 ib_modify_mad(send_buf, 0);
}
# 772 "../include/rdma/ib_mad.h"
struct ib_mad_send_buf *ib_create_send_mad(struct ib_mad_agent *mad_agent,
        u32 remote_qpn, u16 pkey_index,
        int rmpp_active,
        int hdr_len, int data_len,
        gfp_t gfp_mask,
        u8 base_version);
# 786 "../include/rdma/ib_mad.h"
int ib_is_mad_class_rmpp(u8 mgmt_class);
# 796 "../include/rdma/ib_mad.h"
int ib_get_mad_data_offset(u8 mgmt_class);
# 806 "../include/rdma/ib_mad.h"
void *ib_get_rmpp_segment(struct ib_mad_send_buf *send_buf, int seg_num);





void ib_free_send_mad(struct ib_mad_send_buf *send_buf);






int ib_mad_kernel_rmpp_agent(const struct ib_mad_agent *agent);
# 10 "../include/rdma/opa_smi.h" 2
# 1 "../include/rdma/ib_smi.h" 1
# 18 "../include/rdma/ib_smi.h"
struct ib_smp {
 u8 base_version;
 u8 mgmt_class;
 u8 class_version;
 u8 method;
 __be16 status;
 u8 hop_ptr;
 u8 hop_cnt;
 __be64 tid;
 __be16 attr_id;
 __be16 resv;
 __be32 attr_mod;
 __be64 mkey;
 __be16 dr_slid;
 __be16 dr_dlid;
 u8 reserved[28];
 u8 data[64];
 u8 initial_path[64];
 u8 return_path[64];
} __attribute__((__packed__));
# 59 "../include/rdma/ib_smi.h"
struct ib_port_info {
 __be64 mkey;
 __be64 gid_prefix;
 __be16 lid;
 __be16 sm_lid;
 __be32 cap_mask;
 __be16 diag_code;
 __be16 mkey_lease_period;
 u8 local_port_num;
 u8 link_width_enabled;
 u8 link_width_supported;
 u8 link_width_active;
 u8 linkspeed_portstate;
 u8 portphysstate_linkdown;
 u8 mkeyprot_resv_lmc;
 u8 linkspeedactive_enabled;
 u8 neighbormtu_mastersmsl;
 u8 vlcap_inittype;
 u8 vl_high_limit;
 u8 vl_arb_high_cap;
 u8 vl_arb_low_cap;
 u8 inittypereply_mtucap;
 u8 vlstallcnt_hoqlife;
 u8 operationalvl_pei_peo_fpi_fpo;
 __be16 mkey_violations;
 __be16 pkey_violations;
 __be16 qkey_violations;
 u8 guid_cap;
 u8 clientrereg_resv_subnetto;
 u8 resv_resptimevalue;
 u8 localphyerrors_overrunerrors;
 __be16 max_credit_hint;
 u8 resv;
 u8 link_roundtrip_latency[3];
};

struct ib_node_info {
 u8 base_version;
 u8 class_version;
 u8 node_type;
 u8 num_ports;
 __be64 sys_guid;
 __be64 node_guid;
 __be64 port_guid;
 __be16 partition_cap;
 __be16 device_id;
 __be32 revision;
 u8 local_port_num;
 u8 vendor_id[3];
} __attribute__((__packed__));

struct ib_vl_weight_elem {
 u8 vl;

 u8 weight;
};

static inline u8
ib_get_smp_direction(struct ib_smp *smp)
{
 return ((smp->status & cpu_to_be16(0x8000)) == cpu_to_be16(0x8000));
}
# 151 "../include/rdma/ib_smi.h"
static inline void ib_init_query_mad(struct ib_smp *mad)
{
 mad->base_version = 1;
 mad->mgmt_class = 0x01;
 mad->class_version = 1;
 mad->method = 0x01;
}
# 11 "../include/rdma/opa_smi.h" 2
# 22 "../include/rdma/opa_smi.h"
struct opa_smp {
 u8 base_version;
 u8 mgmt_class;
 u8 class_version;
 u8 method;
 __be16 status;
 u8 hop_ptr;
 u8 hop_cnt;
 __be64 tid;
 __be16 attr_id;
 __be16 resv;
 __be32 attr_mod;
 __be64 mkey;
 union {
  struct {
   uint8_t data[2016];
  } lid;
  struct {
   __be32 dr_slid;
   __be32 dr_dlid;
   u8 initial_path[64];
   u8 return_path[64];
   u8 reserved[8];
   u8 data[1872];
  } dr;
 } route;
} __attribute__((__packed__));
# 72 "../include/rdma/opa_smi.h"
struct opa_node_description {
 u8 data[64];
} __attribute__((__packed__));

struct opa_node_info {
 u8 base_version;
 u8 class_version;
 u8 node_type;
 u8 num_ports;
 __be32 reserved;
 __be64 system_image_guid;
 __be64 node_guid;
 __be64 port_guid;
 __be16 partition_cap;
 __be16 device_id;
 __be32 revision;
 u8 local_port_num;
 u8 vendor_id[3];
} __attribute__((__packed__));



static inline u8
opa_get_smp_direction(struct opa_smp *smp)
{
 return ib_get_smp_direction((struct ib_smp *)smp);
}

static inline u8 *opa_get_smp_data(struct opa_smp *smp)
{
 if (smp->mgmt_class == 0x81)
  return smp->route.dr.data;

 return smp->route.lid.data;
}

static inline size_t opa_get_smp_data_size(struct opa_smp *smp)
{
 if (smp->mgmt_class == 0x81)
  return sizeof(smp->route.dr.data);

 return sizeof(smp->route.lid.data);
}

static inline size_t opa_get_smp_header_size(struct opa_smp *smp)
{
 if (smp->mgmt_class == 0x81)
  return sizeof(*smp) - sizeof(smp->route.dr.data);

 return sizeof(*smp) - sizeof(smp->route.lid.data);
}
# 10 "../include/rdma/opa_addr.h" 2
# 33 "../include/rdma/opa_addr.h"
static inline bool ib_is_opa_gid(const union ib_gid *gid)
{
 return ((be64_to_cpu(gid->global.interface_id) >> 40) ==
  (0x00066AULL));
}
# 46 "../include/rdma/opa_addr.h"
static inline u32 opa_get_lid_from_gid(const union ib_gid *gid)
{
 return be64_to_cpu(gid->global.interface_id) & 0xFFFFFFFF;
}
# 58 "../include/rdma/opa_addr.h"
static inline bool opa_is_extended_lid(__be32 dlid, __be32 slid)
{
 if ((be32_to_cpu(dlid) >= be16_to_cpu(cpu_to_be16(0xC000))) ||
     (be32_to_cpu(slid) >= be16_to_cpu(cpu_to_be16(0xC000))))
  return true;

 return false;
}


static inline u32 opa_get_mcast_base(u32 nr_top_bits)
{
 return (be32_to_cpu(cpu_to_be32(0xFFFFFFFF)) << (32 - nr_top_bits));
}


static inline bool rdma_is_valid_unicast_lid(struct rdma_ah_attr *attr)
{
 if (attr->type == RDMA_AH_ATTR_TYPE_IB) {
  if (!rdma_ah_get_dlid(attr) ||
      rdma_ah_get_dlid(attr) >=
      be16_to_cpu(cpu_to_be16(0xC000)))
   return false;
 } else if (attr->type == RDMA_AH_ATTR_TYPE_OPA) {
  if (!rdma_ah_get_dlid(attr) ||
      rdma_ah_get_dlid(attr) >=
      opa_get_mcast_base(0x4))
   return false;
 }
 return true;
}
# 44 "../drivers/infiniband/core/core_priv.h" 2


# 1 "../drivers/infiniband/core/mad_priv.h" 1
# 65 "../drivers/infiniband/core/mad_priv.h"
struct ib_mad_list_head {
 struct list_head list;
 struct ib_cqe cqe;
 struct ib_mad_queue *mad_queue;
};

struct ib_mad_private_header {
 struct ib_mad_list_head mad_list;
 struct ib_mad_recv_wc recv_wc;
 struct ib_wc wc;
 u64 mapping;
} __attribute__((__packed__));

struct ib_mad_private {
 struct ib_mad_private_header header;
 size_t mad_size;
 struct ib_grh grh;
 u8 mad[];
} __attribute__((__packed__));

struct ib_rmpp_segment {
 struct list_head list;
 u32 num;
 u8 data[];
};

struct ib_mad_agent_private {
 struct ib_mad_agent agent;
 struct ib_mad_reg_req *reg_req;
 struct ib_mad_qp_info *qp_info;

 spinlock_t lock;
 struct list_head send_list;
 struct list_head wait_list;
 struct list_head done_list;
 struct delayed_work timed_work;
 unsigned long timeout;
 struct list_head local_list;
 struct work_struct local_work;
 struct list_head rmpp_list;

 refcount_t refcount;
 union {
  struct completion comp;
  struct callback_head rcu;
 };
};

struct ib_mad_snoop_private {
 struct ib_mad_agent agent;
 struct ib_mad_qp_info *qp_info;
 int snoop_index;
 int mad_snoop_flags;
 struct completion comp;
};

struct ib_mad_send_wr_private {
 struct ib_mad_list_head mad_list;
 struct list_head agent_list;
 struct ib_mad_agent_private *mad_agent_priv;
 struct ib_mad_send_buf send_buf;
 u64 header_mapping;
 u64 payload_mapping;
 struct ib_ud_wr send_wr;
 struct ib_sge sg_list[2];
 __be64 tid;
 unsigned long timeout;
 int max_retries;
 int retries_left;
 int retry;
 int refcount;
 enum ib_wc_status status;


 struct list_head rmpp_list;
 struct ib_rmpp_segment *last_ack_seg;
 struct ib_rmpp_segment *cur_seg;
 int last_ack;
 int seg_num;
 int newwin;
 int pad;
};

struct ib_mad_local_private {
 struct list_head completion_list;
 struct ib_mad_private *mad_priv;
 struct ib_mad_agent_private *recv_mad_agent;
 struct ib_mad_send_wr_private *mad_send_wr;
 size_t return_wc_byte_len;
};

struct ib_mad_mgmt_method_table {
 struct ib_mad_agent_private *agent[128];
};

struct ib_mad_mgmt_class_table {
 struct ib_mad_mgmt_method_table *method_table[80];
};

struct ib_mad_mgmt_vendor_class {
 u8 oui[8][3];
 struct ib_mad_mgmt_method_table *method_table[8];
};

struct ib_mad_mgmt_vendor_class_table {
 struct ib_mad_mgmt_vendor_class *vendor_class[(0x4F - 0x30 + 1)];
};

struct ib_mad_mgmt_version_table {
 struct ib_mad_mgmt_class_table *class;
 struct ib_mad_mgmt_vendor_class_table *vendor;
};

struct ib_mad_queue {
 spinlock_t lock;
 struct list_head list;
 int count;
 int max_active;
 struct ib_mad_qp_info *qp_info;
};

struct ib_mad_qp_info {
 struct ib_mad_port_private *port_priv;
 struct ib_qp *qp;
 struct ib_mad_queue send_queue;
 struct ib_mad_queue recv_queue;
 struct list_head overflow_list;
 spinlock_t snoop_lock;
 struct ib_mad_snoop_private **snoop_table;
 int snoop_table_size;
 atomic_t snoop_count;
};

struct ib_mad_port_private {
 struct list_head port_list;
 struct ib_device *device;
 int port_num;
 struct ib_cq *cq;
 struct ib_pd *pd;

 spinlock_t reg_lock;
 struct ib_mad_mgmt_version_table version[0x83];
 struct workqueue_struct *wq;
 struct ib_mad_qp_info qp_info[2];
};

int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr);

struct ib_mad_send_wr_private *
ib_find_send_mad(const struct ib_mad_agent_private *mad_agent_priv,
   const struct ib_mad_recv_wc *mad_recv_wc);

void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
        struct ib_mad_send_wc *mad_send_wc);

void ib_mark_mad_done(struct ib_mad_send_wr_private *mad_send_wr);

void ib_reset_mad_timeout(struct ib_mad_send_wr_private *mad_send_wr,
     unsigned long timeout_ms);
# 47 "../drivers/infiniband/core/core_priv.h" 2
# 1 "../drivers/infiniband/core/restrack.h" 1
# 15 "../drivers/infiniband/core/restrack.h"
struct rdma_restrack_root {



 struct xarray xa;



 u32 next_id;
};

int rdma_restrack_init(struct ib_device *dev);
void rdma_restrack_clean(struct ib_device *dev);
void rdma_restrack_add(struct rdma_restrack_entry *res);
void rdma_restrack_del(struct rdma_restrack_entry *res);
void rdma_restrack_new(struct rdma_restrack_entry *res,
         enum rdma_restrack_type type);
void rdma_restrack_set_name(struct rdma_restrack_entry *res,
       const char *caller);
void rdma_restrack_parent_name(struct rdma_restrack_entry *dst,
          const struct rdma_restrack_entry *parent);
# 48 "../drivers/infiniband/core/core_priv.h" 2




struct pkey_index_qp_list {
 struct list_head pkey_index_list;
 u16 pkey_index;

 spinlock_t qp_list_lock;
 struct list_head qp_list;
};







struct rdma_dev_net {
 struct sock *nl_sock;
 possible_net_t net;
 u32 id;
};

extern const struct attribute_group ib_dev_attr_group;
extern bool ib_devices_shared_netns;
extern unsigned int rdma_dev_net_id;

static inline struct rdma_dev_net *rdma_net_to_dev_net(struct net *net)
{
 return net_generic(net, rdma_dev_net_id);
}

int ib_device_rename(struct ib_device *ibdev, const char *name);
int ib_device_set_dim(struct ib_device *ibdev, u8 use_dim);

typedef void (*roce_netdev_callback)(struct ib_device *device, u32 port,
       struct net_device *idev, void *cookie);

typedef bool (*roce_netdev_filter)(struct ib_device *device, u32 port,
       struct net_device *idev, void *cookie);

struct net_device *ib_device_get_netdev(struct ib_device *ib_dev,
     u32 port);

void ib_enum_roce_netdev(struct ib_device *ib_dev,
    roce_netdev_filter filter,
    void *filter_cookie,
    roce_netdev_callback cb,
    void *cookie);
void ib_enum_all_roce_netdevs(roce_netdev_filter filter,
         void *filter_cookie,
         roce_netdev_callback cb,
         void *cookie);

typedef int (*nldev_callback)(struct ib_device *device,
         struct sk_buff *skb,
         struct netlink_callback *cb,
         unsigned int idx);

int ib_enum_all_devs(nldev_callback nldev_cb, struct sk_buff *skb,
       struct netlink_callback *cb);

struct ib_client_nl_info {
 struct sk_buff *nl_msg;
 struct device *cdev;
 u32 port;
 u64 abi;
};
int ib_get_client_nl_info(struct ib_device *ibdev, const char *client_name,
     struct ib_client_nl_info *res);

enum ib_cache_gid_default_mode {
 IB_CACHE_GID_DEFAULT_MODE_SET,
 IB_CACHE_GID_DEFAULT_MODE_DELETE
};

int ib_cache_gid_parse_type_str(const char *buf);

const char *ib_cache_gid_type_str(enum ib_gid_type gid_type);

void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u32 port,
      struct net_device *ndev,
      unsigned long gid_type_mask,
      enum ib_cache_gid_default_mode mode);

int ib_cache_gid_add(struct ib_device *ib_dev, u32 port,
       union ib_gid *gid, struct ib_gid_attr *attr);

int ib_cache_gid_del(struct ib_device *ib_dev, u32 port,
       union ib_gid *gid, struct ib_gid_attr *attr);

int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u32 port,
         struct net_device *ndev);

int roce_gid_mgmt_init(void);
void roce_gid_mgmt_cleanup(void);

unsigned long roce_gid_type_mask_support(struct ib_device *ib_dev, u32 port);

int ib_cache_setup_one(struct ib_device *device);
void ib_cache_cleanup_one(struct ib_device *device);
void ib_cache_release_one(struct ib_device *device);
void ib_dispatch_event_clients(struct ib_event *event);
# 165 "../drivers/infiniband/core/core_priv.h"
static inline void ib_device_register_rdmacg(struct ib_device *device)
{
}

static inline void ib_device_unregister_rdmacg(struct ib_device *device)
{
}

static inline int ib_rdmacg_try_charge(struct ib_rdmacg_object *cg_obj,
           struct ib_device *device,
           enum rdmacg_resource_type resource_index)
{
 return 0;
}

static inline void ib_rdmacg_uncharge(struct ib_rdmacg_object *cg_obj,
          struct ib_device *device,
          enum rdmacg_resource_type resource_index)
{
}


static inline bool rdma_is_upper_dev_rcu(struct net_device *dev,
      struct net_device *upper)
{
 return netdev_has_upper_dev_all_rcu(dev, upper);
}

int addr_init(void);
void addr_cleanup(void);

int ib_mad_init(void);
void ib_mad_cleanup(void);

int ib_sa_init(void);
void ib_sa_cleanup(void);

void rdma_nl_init(void);
void rdma_nl_exit(void);

int ib_nl_handle_resolve_resp(struct sk_buff *skb,
         struct nlmsghdr *nlh,
         struct netlink_ext_ack *extack);
int ib_nl_handle_set_timeout(struct sk_buff *skb,
        struct nlmsghdr *nlh,
        struct netlink_ext_ack *extack);
int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
        struct nlmsghdr *nlh,
        struct netlink_ext_ack *extack);

void ib_get_cached_subnet_prefix(struct ib_device *device,
    u32 port_num,
    u64 *sn_pfx);
# 243 "../drivers/infiniband/core/core_priv.h"
static inline void ib_security_release_port_pkey_list(struct ib_device *device)
{
}

static inline void ib_security_cache_change(struct ib_device *device,
         u32 port_num,
         u64 subnet_prefix)
{
}

static inline int ib_security_modify_qp(struct ib_qp *qp,
     struct ib_qp_attr *qp_attr,
     int qp_attr_mask,
     struct ib_udata *udata)
{
 return qp->device->ops.modify_qp(qp->real_qp,
      qp_attr,
      qp_attr_mask,
      udata);
}

static inline int ib_create_qp_security(struct ib_qp *qp,
     struct ib_device *dev)
{
 return 0;
}

static inline void ib_destroy_qp_security_begin(struct ib_qp_security *sec)
{
}

static inline void ib_destroy_qp_security_abort(struct ib_qp_security *sec)
{
}

static inline void ib_destroy_qp_security_end(struct ib_qp_security *sec)
{
}

static inline int ib_open_shared_qp_security(struct ib_qp *qp,
          struct ib_device *dev)
{
 return 0;
}

static inline void ib_close_shared_qp_security(struct ib_qp_security *sec)
{
}

static inline int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
           enum ib_qp_type qp_type)
{
 return 0;
}

static inline void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
{
}

static inline int ib_mad_enforce_security(struct ib_mad_agent_private *map,
       u16 pkey_index)
{
 return 0;
}

static inline void ib_mad_agent_security_change(void)
{
}


struct ib_device *ib_device_get_by_index(const struct net *net, u32 index);


void nldev_init(void);
void nldev_exit(void);

struct ib_qp *ib_create_qp_user(struct ib_device *dev, struct ib_pd *pd,
    struct ib_qp_init_attr *attr,
    struct ib_udata *udata,
    struct ib_uqp_object *uobj, const char *caller);

void ib_qp_usecnt_inc(struct ib_qp *qp);
void ib_qp_usecnt_dec(struct ib_qp *qp);

struct rdma_dev_addr;
int rdma_resolve_ip_route(struct sockaddr *src_addr,
     const struct sockaddr *dst_addr,
     struct rdma_dev_addr *addr);

int rdma_addr_find_l2_eth_by_grh(const union ib_gid *sgid,
     const union ib_gid *dgid,
     u8 *dmac, const struct ib_gid_attr *sgid_attr,
     int *hoplimit);
void rdma_copy_src_l2_addr(struct rdma_dev_addr *dev_addr,
      const struct net_device *dev);

struct sa_path_rec;
int roce_resolve_route_from_path(struct sa_path_rec *rec,
     const struct ib_gid_attr *attr);

struct net_device *rdma_read_gid_attr_ndev_rcu(const struct ib_gid_attr *attr);

void ib_free_port_attrs(struct ib_core_device *coredev);
int ib_setup_port_attrs(struct ib_core_device *coredev);
struct rdma_hw_stats *ib_get_hw_stats_port(struct ib_device *ibdev, u32 port_num);
void ib_device_release_hw_stats(struct hw_stats_device_data *data);
int ib_setup_device_attrs(struct ib_device *ibdev);

int rdma_compatdev_set(u8 enable);

int ib_port_register_client_groups(struct ib_device *ibdev, u32 port_num,
       const struct attribute_group **groups);
void ib_port_unregister_client_groups(struct ib_device *ibdev, u32 port_num,
         const struct attribute_group **groups);

int ib_device_set_netns_put(struct sk_buff *skb,
       struct ib_device *dev, u32 ns_fd);

int rdma_nl_net_init(struct rdma_dev_net *rnet);
void rdma_nl_net_exit(struct rdma_dev_net *rnet);

struct rdma_umap_priv {
 struct vm_area_struct *vma;
 struct list_head list;
 struct rdma_user_mmap_entry *entry;
};

void rdma_umap_priv_init(struct rdma_umap_priv *priv,
    struct vm_area_struct *vma,
    struct rdma_user_mmap_entry *entry);

void ib_cq_pool_cleanup(struct ib_device *dev);

bool rdma_nl_get_privileged_qkey(void);
# 10 "../drivers/infiniband/core/ib_core_uverbs.c" 2
# 30 "../drivers/infiniband/core/ib_core_uverbs.c"
void rdma_umap_priv_init(struct rdma_umap_priv *priv,
    struct vm_area_struct *vma,
    struct rdma_user_mmap_entry *entry)
{
 struct ib_uverbs_file *ufile = vma->vm_file->private_data;

 priv->vma = vma;
 if (entry) {
  kref_get(&entry->ref);
  priv->entry = entry;
 }
 vma->vm_private_data = priv;


 mutex_lock_nested(&ufile->umap_lock, 0);
 list_add(&priv->list, &ufile->umaps);
 mutex_unlock(&ufile->umap_lock);
}
EXPORT_SYMBOL(rdma_umap_priv_init);
# 67 "../drivers/infiniband/core/ib_core_uverbs.c"
int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
        unsigned long pfn, unsigned long size, pgprot_t prot,
        struct rdma_user_mmap_entry *entry)
{
 struct ib_uverbs_file *ufile = ucontext->ufile;
 struct rdma_umap_priv *priv;

 if (!(vma->vm_flags & 0x00000008))
  return -22;

 if (vma->vm_end - vma->vm_start != size)
  return -22;


 if (WARN_ON(!vma->vm_file || vma->vm_file->private_data != ufile))
  return -22;
 lockdep_assert_held(&ufile->device->disassociate_srcu);

 priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 if (!priv)
  return -12;

 vma->vm_page_prot = prot;
 if (io_remap_pfn_range(vma, vma->vm_start, pfn, size, prot)) {
  kfree(priv);
  return -11;
 }

 rdma_umap_priv_init(priv, vma, entry);
 return 0;
}
EXPORT_SYMBOL(rdma_user_mmap_io);
# 116 "../drivers/infiniband/core/ib_core_uverbs.c"
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get_pgoff(struct ib_ucontext *ucontext,
          unsigned long pgoff)
{
 struct rdma_user_mmap_entry *entry;

 if (pgoff > ((u32)~0U))
  return ((void *)0);

 spin_lock(&(&ucontext->mmap_xa)->xa_lock);

 entry = xa_load(&ucontext->mmap_xa, pgoff);






 if (!entry || entry->start_pgoff != pgoff || entry->driver_removed ||
     !kref_get_unless_zero(&entry->ref))
  goto err;

 spin_unlock(&(&ucontext->mmap_xa)->xa_lock);

 ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#zx] returned\n",
    pgoff, entry->npages);

 return entry;

err:
 spin_unlock(&(&ucontext->mmap_xa)->xa_lock);
 return ((void *)0);
}
EXPORT_SYMBOL(rdma_user_mmap_entry_get_pgoff);
# 160 "../drivers/infiniband/core/ib_core_uverbs.c"
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
    struct vm_area_struct *vma)
{
 struct rdma_user_mmap_entry *entry;

 if (!(vma->vm_flags & 0x00000008))
  return ((void *)0);
 entry = rdma_user_mmap_entry_get_pgoff(ucontext, vma->vm_pgoff);
 if (!entry)
  return ((void *)0);
 if (entry->npages * (1UL << 14) != vma->vm_end - vma->vm_start) {
  rdma_user_mmap_entry_put(entry);
  return ((void *)0);
 }
 return entry;
}
extern typeof(rdma_user_mmap_entry_get) rdma_user_mmap_entry_get; static void * __attribute__((__used__)) __attribute__((__section__(".discard.addressable"))) __UNIQUE_ID___addressable_rdma_user_mmap_entry_get565 = (void *)(uintptr_t)&rdma_user_mmap_entry_get; asm(".section \".export_symbol\",\"a\" ; __export_symbol_rdma_user_mmap_entry_get: ; .asciz \"\" ; .asciz \"\" ; .balign 4 ; .long rdma_user_mmap_entry_get ; .previous");

static void rdma_user_mmap_entry_free(struct kref *kref)
{
 struct rdma_user_mmap_entry *entry =
  ({ void *__mptr = (void *)(kref); _Static_assert(__builtin_types_compatible_p(typeof(*(kref)), typeof(((struct rdma_user_mmap_entry *)0)->ref)) || __builtin_types_compatible_p(typeof(*(kref)), typeof(void)), "pointer type mismatch in container_of()"); ((struct rdma_user_mmap_entry *)(__mptr - __builtin_offsetof(struct rdma_user_mmap_entry, ref))); });
 struct ib_ucontext *ucontext = entry->ucontext;
 unsigned long i;





 spin_lock(&(&ucontext->mmap_xa)->xa_lock);
 for (i = 0; i < entry->npages; i++)
  __xa_erase(&ucontext->mmap_xa, entry->start_pgoff + i);
 spin_unlock(&(&ucontext->mmap_xa)->xa_lock);

 ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#zx] removed\n",
    entry->start_pgoff, entry->npages);

 if (ucontext->device->ops.mmap_free)
  ucontext->device->ops.mmap_free(entry);
}
# 214 "../drivers/infiniband/core/ib_core_uverbs.c"
void rdma_user_mmap_entry_put(struct rdma_user_mmap_entry *entry)
{
 kref_put(&entry->ref, rdma_user_mmap_entry_free);
}
extern typeof(rdma_user_mmap_entry_put) rdma_user_mmap_entry_put; static void * __attribute__((__used__)) __attribute__((__section__(".discard.addressable"))) __UNIQUE_ID___addressable_rdma_user_mmap_entry_put566 = (void *)(uintptr_t)&rdma_user_mmap_entry_put; asm(".section \".export_symbol\",\"a\" ; __export_symbol_rdma_user_mmap_entry_put: ; .asciz \"\" ; .asciz \"\" ; .balign 4 ; .long rdma_user_mmap_entry_put ; .previous");
# 230 "../drivers/infiniband/core/ib_core_uverbs.c"
void rdma_user_mmap_entry_remove(struct rdma_user_mmap_entry *entry)
{
 if (!entry)
  return;

 spin_lock(&(&entry->ucontext->mmap_xa)->xa_lock);
 entry->driver_removed = true;
 spin_unlock(&(&entry->ucontext->mmap_xa)->xa_lock);
 kref_put(&entry->ref, rdma_user_mmap_entry_free);
}
extern typeof(rdma_user_mmap_entry_remove) rdma_user_mmap_entry_remove; static void * __attribute__((__used__)) __attribute__((__section__(".discard.addressable"))) __UNIQUE_ID___addressable_rdma_user_mmap_entry_remove567 = (void *)(uintptr_t)&rdma_user_mmap_entry_remove; asm(".section \".export_symbol\",\"a\" ; __export_symbol_rdma_user_mmap_entry_remove: ; .asciz \"\" ; .asciz \"\" ; .balign 4 ; .long rdma_user_mmap_entry_remove ; .previous");
# 262 "../drivers/infiniband/core/ib_core_uverbs.c"
int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext,
          struct rdma_user_mmap_entry *entry,
          size_t length, u32 min_pgoff,
          u32 max_pgoff)
{
 struct ib_uverbs_file *ufile = ucontext->ufile;
 struct xa_state xas = { .xa = &ucontext->mmap_xa, .xa_index = min_pgoff, .xa_shift = 0, .xa_sibs = 0, .xa_offset = 0, .xa_pad = 0, .xa_node = ((struct xa_node *)3UL), .xa_alloc = ((void *)0), .xa_update = ((void *)0), .xa_lru = ((void *)0), };
 u32 xa_first, xa_last, npages;
 int err;
 u32 i;

 if (!entry)
  return -22;

 kref_init(&entry->ref);
 entry->ucontext = ucontext;







 mutex_lock_nested(&ufile->umap_lock, 0);

 spin_lock(&(&ucontext->mmap_xa)->xa_lock);


 npages = (u32)(((length) + ((1UL << 14)) - 1) / ((1UL << 14)));
 entry->npages = npages;
 while (true) {

  xas_find_marked(&xas, max_pgoff, (( xa_mark_t)0U));
  if (xas.xa_node == ((struct xa_node *)3UL))
   goto err_unlock;

  xa_first = xas.xa_index;


  if (__must_check_overflow(__builtin_add_overflow(xa_first, npages, &xa_last)))
   goto err_unlock;





  xas_next_entry(&xas, xa_last - 1);
  if (xas.xa_node == ((struct xa_node *)1UL) || xas.xa_index >= xa_last)
   break;
 }

 for (i = xa_first; i < xa_last; i++) {
  err = __xa_insert(&ucontext->mmap_xa, i, entry, ((( gfp_t)(((((1UL))) << (___GFP_DIRECT_RECLAIM_BIT))|((((1UL))) << (___GFP_KSWAPD_RECLAIM_BIT)))) | (( gfp_t)((((1UL))) << (___GFP_IO_BIT))) | (( gfp_t)((((1UL))) << (___GFP_FS_BIT)))));
  if (err)
   goto err_undo;
 }





 entry->start_pgoff = xa_first;
 spin_unlock(&(&ucontext->mmap_xa)->xa_lock);
 mutex_unlock(&ufile->umap_lock);

 ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#x] inserted\n",
    entry->start_pgoff, npages);

 return 0;

err_undo:
 for (; i > xa_first; i--)
  __xa_erase(&ucontext->mmap_xa, i - 1);

err_unlock:
 spin_unlock(&(&ucontext->mmap_xa)->xa_lock);
 mutex_unlock(&ufile->umap_lock);
 return -12;
}
extern typeof(rdma_user_mmap_entry_insert_range) rdma_user_mmap_entry_insert_range; static void * __attribute__((__used__)) __attribute__((__section__(".discard.addressable"))) __UNIQUE_ID___addressable_rdma_user_mmap_entry_insert_range568 = (void *)(uintptr_t)&rdma_user_mmap_entry_insert_range; asm(".section \".export_symbol\",\"a\" ; __export_symbol_rdma_user_mmap_entry_insert_range: ; .asciz \"\" ; .asciz \"\" ; .balign 4 ; .long rdma_user_mmap_entry_insert_range ; .previous");
# 360 "../drivers/infiniband/core/ib_core_uverbs.c"
int rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext,
    struct rdma_user_mmap_entry *entry,
    size_t length)
{
 return rdma_user_mmap_entry_insert_range(ucontext, entry, length, 0,
       ((u32)~0U));
}
extern typeof(rdma_user_mmap_entry_insert) rdma_user_mmap_entry_insert; static void * __attribute__((__used__)) __attribute__((__section__(".discard.addressable"))) __UNIQUE_ID___addressable_rdma_user_mmap_entry_insert569 = (void *)(uintptr_t)&rdma_user_mmap_entry_insert; asm(".section \".export_symbol\",\"a\" ; __export_symbol_rdma_user_mmap_entry_insert: ; .asciz \"\" ; .asciz \"\" ; .balign 4 ; .long rdma_user_mmap_entry_insert ; .previous");

[-- Attachment #3: uverbs_compile.txt --]
[-- Type: text/plain, Size: 3480 bytes --]

clang version 18.1.8
Target: hexagon-unknown-linux-musl
Thread model: posix
InstalledDir: /local/mnt/workspace/install/clang-latest/bin
 (in-process)
 "/local/mnt/workspace/install/clang+llvm-18.1.8-x86_64-linux-gnu-ubuntu-18.04/bin/clang-18" "-cc1" "-triple" "hexagon-unknown-linux-musl" "-emit-obj" "-disable-free" "-clear-ast-before-backend" "-disable-llvm-verifier" "-discard-value-names" "-main-file-name" "ib_core_uverbs.c" "-mrelocation-model" "static" "-fno-delete-null-pointer-checks" "-mframe-pointer=all" "-relaxed-aliasing" "-fno-optimize-sibling-calls" "-ffp-contract=on" "-fno-rounding-math" "-mconstructor-aliases" "-target-feature" "+reserved-r19" "-target-cpu" "hexagonv60" "-target-feature" "+long-calls" "-mqdsp6-compat" "-Wreturn-type" "-mllvm" "-hexagon-small-data-threshold=0" "-mllvm" "-machine-sink-split=0" "-debug-info-kind=constructor" "-dwarf-version=5" "-debugger-tuning=gdb" "-fdebug-compilation-dir=/local/mnt/workspace/kernel/bcain_kernel/build_ib_dir" "-fcoverage-compilation-dir=/local/mnt/workspace/kernel/bcain_kernel/build_ib_dir" "-nostdsysteminc" "-nobuiltininc" "-resource-dir" "/local/mnt/workspace/install/clang+llvm-18.1.8-x86_64-linux-gnu-ubuntu-18.04/lib/clang/18" "-dependency-file" "drivers/infiniband/core/.ib_core_uverbs.o.d" "-MT" "drivers/infiniband/core/ib_core_uverbs.o" "-include" "../include/linux/compiler-version.h" "-include" "../include/linux/kconfig.h" "-include" "../include/linux/compiler_types.h" "-I" "../arch/hexagon/include" "-I" "./arch/hexagon/include/generated" "-I" "../include" "-I" "./include" "-I" "../arch/hexagon/include/uapi" "-I" "./arch/hexagon/include/generated/uapi" "-I" "../include/uapi" "-I" "./include/generated/uapi" "-D" "__KERNEL__" "-D" "THREADINFO_REG=r19" "-D" "__linux__" "-I" "../drivers/infiniband/core" "-I" "drivers/infiniband/core" "-D" "KBUILD_MODFILE=\"drivers/infiniband/core/ib_core\"" "-D" "KBUILD_BASENAME=\"ib_core_uverbs\"" "-D" "KBUILD_MODNAME=\"ib_core\"" "-D" "__KBUILD_MODNAME=kmod_ib_core" "-fmacro-prefix-map=../=" "-O2" "-Werror=unknown-warning-option" "-Werror=ignored-optimization-argument" "-Werror=option-ignored" 
"-Werror=unused-command-line-argument" "-Wall" "-Wundef" "-Werror=implicit-function-declaration" "-Werror=implicit-int" "-Werror=return-type" "-Werror=strict-prototypes" "-Wno-format-security" "-Wno-trigraphs" "-Wno-frame-address" "-Wno-address-of-packed-member" "-Wmissing-declarations" "-Wmissing-prototypes" "-Wno-gnu" "-Wvla" "-Wno-pointer-sign" "-Wcast-function-type" "-Wimplicit-fallthrough" "-Werror=date-time" "-Werror=incompatible-pointer-types" "-Wenum-conversion" "-Wextra" "-Wunused" "-Wno-unused-but-set-variable" "-Wno-unused-const-variable" "-Wno-format-overflow" "-Wno-format-overflow-non-kprintf" "-Wno-format-truncation-non-kprintf" "-Wno-override-init" "-Wno-pointer-to-enum-cast" "-Wno-tautological-constant-out-of-range-compare" "-Wno-unaligned-access" "-Wno-enum-compare-conditional" "-Wno-enum-enum-conversion" "-Wno-missing-field-initializers" "-Wno-type-limits" "-Wno-shift-negative-value" "-Wno-sign-compare" "-Wno-unused-parameter" "-std=gnu11" "-ferror-limit" "19" "-fwrapv" "-fstrict-flex-arrays=3" "-ftrivial-auto-var-init=zero" "-fno-signed-char" "-fwchar-type=short" "-fno-signed-wchar" "-fgnuc-version=4.2.1" "-fskip-odr-check-in-gmf" "-vectorize-loops" "-vectorize-slp" "-faddrsig" "-D__GCC_HAVE_DWARF2_CFI_ASM=1" "-o" "drivers/infiniband/core/ib_core_uverbs.o" "-x" "c" "../drivers/infiniband/core/ib_core_uverbs.c"

[-- Attachment #4: rec_layouts_ib_core_uverbs.txt --]
[-- Type: text/plain, Size: 1250186 bytes --]


*** Dumping AST Record Layout
         0 | struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:196:3)
         0 |   unsigned long correct
         4 |   unsigned long incorrect
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:200:3)
         0 |   unsigned long miss
         4 |   unsigned long hit
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:195:2)
         0 |   struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:196:3) 
         0 |     unsigned long correct
         4 |     unsigned long incorrect
         0 |   struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:200:3) 
         0 |     unsigned long miss
         4 |     unsigned long hit
         0 |   unsigned long[2] miss_hit
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ftrace_branch_data
         0 |   const char * func
         4 |   const char * file
         8 |   unsigned int line
        12 |   union ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:195:2) 
        12 |     struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:196:3) 
        12 |       unsigned long correct
        16 |       unsigned long incorrect
        12 |     struct ftrace_branch_data::(anonymous at ./../include/linux/compiler_types.h:200:3) 
        12 |       unsigned long miss
        16 |       unsigned long hit
        12 |     unsigned long[2] miss_hit
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | atomic_t
         0 |   int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | atomic64_t
         0 |   s64 counter
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:65:17)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:95:27)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:126:28)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:156:29)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:184:18)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:205:31)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:234:32)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:260:42)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:287:45)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:317:54)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:341:41)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:367:50)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:388:32)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/find.h:409:31)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/bitmap.h:457:11)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/bitmap.h:473:12)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct taint_flag
         0 |   char c_true
         1 |   char c_false
         2 |   bool module
         4 |   const char * desc
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct obs_kernel_param
         0 |   const char * str
         4 |   int (*)(char *) setup_func
         8 |   int early
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct lockdep_subclass_key
         0 |   char __one_byte
           | [sizeof=1, align=1]

*** Dumping AST Record Layout
         0 | struct hlist_node
         0 |   struct hlist_node * next
         4 |   struct hlist_node ** pprev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2)
         0 |   struct hlist_node hash_entry
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         0 |   struct lockdep_subclass_key[8] subkeys
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | arch_spinlock_t
         0 |   volatile unsigned int slock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct lockdep_map
         0 |   struct lock_class_key * key
         4 |   struct lock_class *[2] class_cache
        12 |   const char * name
        16 |   u8 wait_type_outer
        17 |   u8 wait_type_inner
        18 |   u8 lock_type
        20 |   int cpu
        24 |   unsigned long ip
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct raw_spinlock
         0 |   arch_spinlock_t raw_lock
         0 |     volatile unsigned int slock
         4 |   unsigned int magic
         8 |   unsigned int owner_cpu
        12 |   void * owner
        16 |   struct lockdep_map dep_map
        16 |     struct lock_class_key * key
        20 |     struct lock_class *[2] class_cache
        28 |     const char * name
        32 |     u8 wait_type_outer
        33 |     u8 wait_type_inner
        34 |     u8 lock_type
        36 |     int cpu
        40 |     unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct restart_block::(unnamed at ../include/linux/restart_block.h:30:3)
         0 |   u32 * uaddr
         4 |   u32 val
         8 |   u32 flags
        12 |   u32 bitset
        16 |   u64 time
        24 |   u32 * uaddr2
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:30:3)
         0 |   unsigned long usr
         4 |   unsigned long preds
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:37:3)
         0 |   unsigned long m0
         4 |   unsigned long m1
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:44:3)
         0 |   unsigned long sa1
         4 |   unsigned long lc1
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:51:3)
         0 |   unsigned long sa0
         4 |   unsigned long lc0
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:58:3)
         0 |   unsigned long ugp
         4 |   unsigned long gp
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:65:3)
         0 |   unsigned long cs0
         4 |   unsigned long cs1
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:78:3)
         0 |   unsigned long r00
         4 |   unsigned long r01
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:85:3)
         0 |   unsigned long r02
         4 |   unsigned long r03
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:92:3)
         0 |   unsigned long r04
         4 |   unsigned long r05
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:99:3)
         0 |   unsigned long r06
         4 |   unsigned long r07
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:106:3)
         0 |   unsigned long r08
         4 |   unsigned long r09
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:113:9)
         0 |   unsigned long r10
         4 |   unsigned long r11
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:120:9)
         0 |   unsigned long r12
         4 |   unsigned long r13
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:127:9)
         0 |   unsigned long r14
         4 |   unsigned long r15
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:134:3)
         0 |   unsigned long r16
         4 |   unsigned long r17
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:141:3)
         0 |   unsigned long r18
         4 |   unsigned long r19
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:148:3)
         0 |   unsigned long r20
         4 |   unsigned long r21
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:155:3)
         0 |   unsigned long r22
         4 |   unsigned long r23
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:162:3)
         0 |   unsigned long r24
         4 |   unsigned long r25
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:169:3)
         0 |   unsigned long r26
         4 |   unsigned long r27
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:176:3)
         0 |   unsigned long r28
         4 |   unsigned long r29
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pt_regs::(anonymous at ../arch/hexagon/include/uapi/asm/registers.h:183:3)
         0 |   unsigned long r30
         4 |   unsigned long r31
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:81:3)
         0 |   unsigned long r16
         4 |   unsigned long r17
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:88:3)
         0 |   unsigned long r18
         4 |   unsigned long r19
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:95:3)
         0 |   unsigned long r20
         4 |   unsigned long r21
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:102:3)
         0 |   unsigned long r22
         4 |   unsigned long r23
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:109:3)
         0 |   unsigned long r24
         4 |   unsigned long r25
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:116:3)
         0 |   unsigned long r26
         4 |   unsigned long r27
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:80:2)
         0 |   struct hexagon_switch_stack::(anonymous at ../arch/hexagon/include/asm/processor.h:81:3) 
         0 |     unsigned long r16
         4 |     unsigned long r17
         0 |   unsigned long long r1716
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct thread_info
         0 |   struct task_struct * task
         4 |   unsigned long flags
         8 |   __u32 cpu
        12 |   int preempt_count
        16 |   struct pt_regs * regs
        20 |   unsigned long sp
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | class_preempt_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_preempt_notrace_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_migrate_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_irq_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_irqsave_t
         0 |   void * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct list_head
         0 |   struct list_head * next
         4 |   struct list_head * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct cpumask
         0 |   unsigned long[1] bits
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct llist_node
         0 |   struct llist_node * next
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct llist_head
         0 |   struct llist_node * first
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union __call_single_node::(anonymous at ../include/linux/smp_types.h:60:2)
         0 |   unsigned int u_flags
         0 |   atomic_t a_flags
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct __call_single_node
         0 |   struct llist_node llist
         0 |     struct llist_node * next
         4 |   union __call_single_node::(anonymous at ../include/linux/smp_types.h:60:2) 
         4 |     unsigned int u_flags
         4 |     atomic_t a_flags
         4 |       int counter
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct __call_single_data
         0 |   struct __call_single_node node
         0 |     struct llist_node llist
         0 |       struct llist_node * next
         4 |     union __call_single_node::(anonymous at ../include/linux/smp_types.h:60:2) 
         4 |       unsigned int u_flags
         4 |       atomic_t a_flags
         4 |         int counter
         8 |   smp_call_func_t func
        12 |   void * info
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct lock_class_key
         0 |   union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2) 
         0 |     struct hlist_node hash_entry
         0 |       struct hlist_node * next
         4 |       struct hlist_node ** pprev
         0 |     struct lockdep_subclass_key[8] subkeys
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3)
         0 |   u8[16] __padding
        16 |   struct lockdep_map dep_map
        16 |     struct lock_class_key * key
        20 |     struct lock_class *[2] class_cache
        28 |     const char * name
        32 |     u8 wait_type_outer
        33 |     u8 wait_type_inner
        34 |     u8 lock_type
        36 |     int cpu
        40 |     unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2)
         0 |   struct raw_spinlock rlock
         0 |     arch_spinlock_t raw_lock
         0 |       volatile unsigned int slock
         4 |     unsigned int magic
         8 |     unsigned int owner_cpu
        12 |     void * owner
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
         0 |   struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |     u8[16] __padding
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | arch_rwlock_t
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | rwlock_t
         0 |   arch_rwlock_t raw_lock
         0 |   unsigned int magic
         4 |   unsigned int owner_cpu
         8 |   void * owner
        12 |   struct lockdep_map dep_map
        12 |     struct lock_class_key * key
        16 |     struct lock_class *[2] class_cache
        24 |     const char * name
        28 |     u8 wait_type_outer
        29 |     u8 wait_type_inner
        30 |     u8 lock_type
        32 |     int cpu
        36 |     unsigned long ip
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct spinlock
         0 |   union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |     struct raw_spinlock rlock
         0 |       arch_spinlock_t raw_lock
         0 |         volatile unsigned int slock
         4 |       unsigned int magic
         8 |       unsigned int owner_cpu
        12 |       void * owner
        16 |       struct lockdep_map dep_map
        16 |         struct lock_class_key * key
        20 |         struct lock_class *[2] class_cache
        28 |         const char * name
        32 |         u8 wait_type_outer
        33 |         u8 wait_type_inner
        34 |         u8 lock_type
        36 |         int cpu
        40 |         unsigned long ip
         0 |     struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |       u8[16] __padding
        16 |       struct lockdep_map dep_map
        16 |         struct lock_class_key * key
        20 |         struct lock_class *[2] class_cache
        28 |         const char * name
        32 |         u8 wait_type_outer
        33 |         u8 wait_type_inner
        34 |         u8 lock_type
        36 |         int cpu
        40 |         unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | class_raw_spinlock_t
         0 |   raw_spinlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_raw_spinlock_nested_t
         0 |   raw_spinlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_raw_spinlock_irq_t
         0 |   raw_spinlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_raw_spinlock_irqsave_t
         0 |   raw_spinlock_t * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | class_spinlock_t
         0 |   spinlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_spinlock_irq_t
         0 |   spinlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_spinlock_irqsave_t
         0 |   spinlock_t * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | class_read_lock_t
         0 |   rwlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_read_lock_irq_t
         0 |   rwlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_read_lock_irqsave_t
         0 |   rwlock_t * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | class_write_lock_t
         0 |   rwlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_write_lock_irq_t
         0 |   rwlock_t * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | class_write_lock_irqsave_t
         0 |   rwlock_t * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hlist_nulls_node
         0 |   struct hlist_nulls_node * next
         4 |   struct hlist_nulls_node ** pprev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct wait_queue_head
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   struct list_head head
        44 |     struct list_head * next
        48 |     struct list_head * prev
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct wait_queue_entry
         0 |   unsigned int flags
         4 |   void * private
         8 |   wait_queue_func_t func
        12 |   struct list_head entry
        12 |     struct list_head * next
        16 |     struct list_head * prev
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct mutex
         0 |   atomic_t owner
         0 |     int counter
         4 |   struct raw_spinlock wait_lock
         4 |     arch_spinlock_t raw_lock
         4 |       volatile unsigned int slock
         8 |     unsigned int magic
        12 |     unsigned int owner_cpu
        16 |     void * owner
        20 |     struct lockdep_map dep_map
        20 |       struct lock_class_key * key
        24 |       struct lock_class *[2] class_cache
        32 |       const char * name
        36 |       u8 wait_type_outer
        37 |       u8 wait_type_inner
        38 |       u8 lock_type
        40 |       int cpu
        44 |       unsigned long ip
        48 |   struct list_head wait_list
        48 |     struct list_head * next
        52 |     struct list_head * prev
        56 |   void * magic
        60 |   struct lockdep_map dep_map
        60 |     struct lock_class_key * key
        64 |     struct lock_class *[2] class_cache
        72 |     const char * name
        76 |     u8 wait_type_outer
        77 |     u8 wait_type_inner
        78 |     u8 lock_type
        80 |     int cpu
        84 |     unsigned long ip
           | [sizeof=88, align=4]

*** Dumping AST Record Layout
         0 | struct seqcount
         0 |   unsigned int sequence
         4 |   struct lockdep_map dep_map
         4 |     struct lock_class_key * key
         8 |     struct lock_class *[2] class_cache
        16 |     const char * name
        20 |     u8 wait_type_outer
        21 |     u8 wait_type_inner
        22 |     u8 lock_type
        24 |     int cpu
        28 |     unsigned long ip
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct seqcount_spinlock
         0 |   struct seqcount seqcount
         0 |     unsigned int sequence
         4 |     struct lockdep_map dep_map
         4 |       struct lock_class_key * key
         8 |       struct lock_class *[2] class_cache
        16 |       const char * name
        20 |       u8 wait_type_outer
        21 |       u8 wait_type_inner
        22 |       u8 lock_type
        24 |       int cpu
        28 |       unsigned long ip
        32 |   spinlock_t * lock
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | seqlock_t
         0 |   struct seqcount_spinlock seqcount
         0 |     struct seqcount seqcount
         0 |       unsigned int sequence
         4 |       struct lockdep_map dep_map
         4 |         struct lock_class_key * key
         8 |         struct lock_class *[2] class_cache
        16 |         const char * name
        20 |         u8 wait_type_outer
        21 |         u8 wait_type_inner
        22 |         u8 lock_type
        24 |         int cpu
        28 |         unsigned long ip
        32 |     spinlock_t * lock
        36 |   struct spinlock lock
        36 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        36 |       struct raw_spinlock rlock
        36 |         arch_spinlock_t raw_lock
        36 |           volatile unsigned int slock
        40 |         unsigned int magic
        44 |         unsigned int owner_cpu
        48 |         void * owner
        52 |         struct lockdep_map dep_map
        52 |           struct lock_class_key * key
        56 |           struct lock_class *[2] class_cache
        64 |           const char * name
        68 |           u8 wait_type_outer
        69 |           u8 wait_type_inner
        70 |           u8 lock_type
        72 |           int cpu
        76 |           unsigned long ip
        36 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        36 |         u8[16] __padding
        52 |         struct lockdep_map dep_map
        52 |           struct lock_class_key * key
        56 |           struct lock_class *[2] class_cache
        64 |           const char * name
        68 |           u8 wait_type_outer
        69 |           u8 wait_type_inner
        70 |           u8 lock_type
        72 |           int cpu
        76 |           unsigned long ip
           | [sizeof=80, align=4]

*** Dumping AST Record Layout
         0 | struct static_key
         0 |   atomic_t enabled
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | nodemask_t
         0 |   unsigned long[1] bits
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct refcount_struct
         0 |   atomic_t refs
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct kref
         0 |   struct refcount_struct refcount
         0 |     atomic_t refs
         0 |       int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct rb_root
         0 |   struct rb_node * rb_node
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct callback_head
         0 |   struct callback_head * next
         4 |   void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | class_rcu_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct rb_node
         0 |   unsigned long __rb_parent_color
         4 |   struct rb_node * rb_right
         8 |   struct rb_node * rb_left
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct rb_root_cached
         0 |   struct rb_root rb_root
         0 |     struct rb_node * rb_node
         4 |   struct rb_node * rb_leftmost
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union maple_tree::(anonymous at ../include/linux/maple_tree.h:220:2)
         0 |   struct spinlock ma_lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |   lockdep_map_p ma_external_lock
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct maple_node::(anonymous at ../include/linux/maple_tree.h:282:3)
         0 |   struct maple_pnode * parent
         4 |   void *[63] slot
           | [sizeof=256, align=4]

*** Dumping AST Record Layout
         0 | struct maple_node::(anonymous at ../include/linux/maple_tree.h:286:3)
         0 |   void * pad
         4 |   struct callback_head rcu
         4 |     struct callback_head * next
         8 |     void (*)(struct callback_head *) func
        12 |   struct maple_enode * piv_parent
        16 |   unsigned char parent_slot
        20 |   enum maple_type type
        24 |   unsigned char slot_len
        28 |   unsigned int ma_flags
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct maple_metadata
         0 |   unsigned char end
         1 |   unsigned char gap
           | [sizeof=2, align=1]

*** Dumping AST Record Layout
         0 | struct maple_range_64::(anonymous at ../include/linux/maple_tree.h:108:3)
         0 |   void *[31] pad
       124 |   struct maple_metadata meta
       124 |     unsigned char end
       125 |     unsigned char gap
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | union maple_range_64::(anonymous at ../include/linux/maple_tree.h:106:2)
         0 |   void *[32] slot
         0 |   struct maple_range_64::(anonymous at ../include/linux/maple_tree.h:108:3) 
         0 |     void *[31] pad
       124 |     struct maple_metadata meta
       124 |       unsigned char end
       125 |       unsigned char gap
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct maple_range_64
         0 |   struct maple_pnode * parent
         4 |   unsigned long[31] pivot
       128 |   union maple_range_64::(anonymous at ../include/linux/maple_tree.h:106:2) 
       128 |     void *[32] slot
       128 |     struct maple_range_64::(anonymous at ../include/linux/maple_tree.h:108:3) 
       128 |       void *[31] pad
       252 |       struct maple_metadata meta
       252 |         unsigned char end
       253 |         unsigned char gap
           | [sizeof=256, align=4]

*** Dumping AST Record Layout
         0 | struct maple_arange_64
         0 |   struct maple_pnode * parent
         4 |   unsigned long[20] pivot
        84 |   void *[21] slot
       168 |   unsigned long[21] gap
       252 |   struct maple_metadata meta
       252 |     unsigned char end
       253 |     unsigned char gap
           | [sizeof=256, align=4]

*** Dumping AST Record Layout
         0 | struct maple_alloc
         0 |   unsigned long total
         4 |   unsigned char node_count
         8 |   unsigned int request_count
        12 |   struct maple_alloc *[61] slot
           | [sizeof=256, align=4]

*** Dumping AST Record Layout
         0 | union maple_node::(anonymous at ../include/linux/maple_tree.h:281:2)
         0 |   struct maple_node::(anonymous at ../include/linux/maple_tree.h:282:3) 
         0 |     struct maple_pnode * parent
         4 |     void *[63] slot
         0 |   struct maple_node::(anonymous at ../include/linux/maple_tree.h:286:3) 
         0 |     void * pad
         4 |     struct callback_head rcu
         4 |       struct callback_head * next
         8 |       void (*)(struct callback_head *) func
        12 |     struct maple_enode * piv_parent
        16 |     unsigned char parent_slot
        20 |     enum maple_type type
        24 |     unsigned char slot_len
        28 |     unsigned int ma_flags
         0 |   struct maple_range_64 mr64
         0 |     struct maple_pnode * parent
         4 |     unsigned long[31] pivot
       128 |     union maple_range_64::(anonymous at ../include/linux/maple_tree.h:106:2) 
       128 |       void *[32] slot
       128 |       struct maple_range_64::(anonymous at ../include/linux/maple_tree.h:108:3) 
       128 |         void *[31] pad
       252 |         struct maple_metadata meta
       252 |           unsigned char end
       253 |           unsigned char gap
         0 |   struct maple_arange_64 ma64
         0 |     struct maple_pnode * parent
         4 |     unsigned long[20] pivot
        84 |     void *[21] slot
       168 |     unsigned long[21] gap
       252 |     struct maple_metadata meta
       252 |       unsigned char end
       253 |       unsigned char gap
         0 |   struct maple_alloc alloc
         0 |     unsigned long total
         4 |     unsigned char node_count
         8 |     unsigned int request_count
        12 |     struct maple_alloc *[61] slot
           | [sizeof=256, align=4]

*** Dumping AST Record Layout
         0 | struct ma_state
         0 |   struct maple_tree * tree
         4 |   unsigned long index
         8 |   unsigned long last
        12 |   struct maple_enode * node
        16 |   unsigned long min
        20 |   unsigned long max
        24 |   struct maple_alloc * alloc
        28 |   enum maple_status status
        32 |   unsigned char depth
        33 |   unsigned char offset
        34 |   unsigned char mas_flags
        35 |   unsigned char end
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct maple_tree
         0 |   union maple_tree::(anonymous at ../include/linux/maple_tree.h:220:2) 
         0 |     struct spinlock ma_lock
         0 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |         struct raw_spinlock rlock
         0 |           arch_spinlock_t raw_lock
         0 |             volatile unsigned int slock
         4 |           unsigned int magic
         8 |           unsigned int owner_cpu
        12 |           void * owner
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |           u8[16] __padding
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |     lockdep_map_p ma_external_lock
        44 |   unsigned int ma_flags
        48 |   void * ma_root
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct rw_semaphore
         0 |   atomic_t count
         0 |     int counter
         4 |   atomic_t owner
         4 |     int counter
         8 |   struct raw_spinlock wait_lock
         8 |     arch_spinlock_t raw_lock
         8 |       volatile unsigned int slock
        12 |     unsigned int magic
        16 |     unsigned int owner_cpu
        20 |     void * owner
        24 |     struct lockdep_map dep_map
        24 |       struct lock_class_key * key
        28 |       struct lock_class *[2] class_cache
        36 |       const char * name
        40 |       u8 wait_type_outer
        41 |       u8 wait_type_inner
        42 |       u8 lock_type
        44 |       int cpu
        48 |       unsigned long ip
        52 |   struct list_head wait_list
        52 |     struct list_head * next
        56 |     struct list_head * prev
        60 |   void * magic
        64 |   struct lockdep_map dep_map
        64 |     struct lock_class_key * key
        68 |     struct lock_class *[2] class_cache
        76 |     const char * name
        80 |     u8 wait_type_outer
        81 |     u8 wait_type_inner
        82 |     u8 lock_type
        84 |     int cpu
        88 |     unsigned long ip
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct swait_queue_head
         0 |   struct raw_spinlock lock
         0 |     arch_spinlock_t raw_lock
         0 |       volatile unsigned int slock
         4 |     unsigned int magic
         8 |     unsigned int owner_cpu
        12 |     void * owner
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   struct list_head task_list
        44 |     struct list_head * next
        48 |     struct list_head * prev
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/math64.h:197:3)
         0 |   u32 low
         4 |   u32 high
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union (unnamed at ../include/linux/math64.h:195:2)
         0 |   u64 ll
         0 |   struct (unnamed at ../include/linux/math64.h:197:3) l
         0 |     u32 low
         4 |     u32 high
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/linux/math64.h:261:3)
         0 |   u32 low
         4 |   u32 high
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union (unnamed at ../include/linux/math64.h:259:2)
         0 |   u64 ll
         0 |   struct (unnamed at ../include/linux/math64.h:261:3) l
         0 |     u32 low
         4 |     u32 high
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct __kernel_timespec
         0 |   __kernel_time64_t tv_sec
         8 |   long long tv_nsec
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct __kernel_old_timeval
         0 |   __kernel_long_t tv_sec
         4 |   __kernel_long_t tv_usec
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct timespec64
         0 |   time64_t tv_sec
         8 |   long tv_nsec
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct old_timespec32
         0 |   old_time32_t tv_sec
         4 |   s32 tv_nsec
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct timer_list
         0 |   struct hlist_node entry
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         8 |   unsigned long expires
        12 |   void (*)(struct timer_list *) function
        16 |   u32 flags
        20 |   struct lockdep_map lockdep_map
        20 |     struct lock_class_key * key
        24 |     struct lock_class *[2] class_cache
        32 |     const char * name
        36 |     u8 wait_type_outer
        37 |     u8 wait_type_inner
        38 |     u8 lock_type
        40 |     int cpu
        44 |     unsigned long ip
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct work_struct
         0 |   atomic_t data
         0 |     int counter
         4 |   struct list_head entry
         4 |     struct list_head * next
         8 |     struct list_head * prev
        12 |   work_func_t func
        16 |   struct lockdep_map lockdep_map
        16 |     struct lock_class_key * key
        20 |     struct lock_class *[2] class_cache
        28 |     const char * name
        32 |     u8 wait_type_outer
        33 |     u8 wait_type_inner
        34 |     u8 lock_type
        36 |     int cpu
        40 |     unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct delayed_work
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   struct timer_list timer
        44 |     struct hlist_node entry
        44 |       struct hlist_node * next
        48 |       struct hlist_node ** pprev
        52 |     unsigned long expires
        56 |     void (*)(struct timer_list *) function
        60 |     u32 flags
        64 |     struct lockdep_map lockdep_map
        64 |       struct lock_class_key * key
        68 |       struct lock_class *[2] class_cache
        76 |       const char * name
        80 |       u8 wait_type_outer
        81 |       u8 wait_type_inner
        82 |       u8 lock_type
        84 |       int cpu
        88 |       unsigned long ip
        92 |   struct workqueue_struct * wq
        96 |   int cpu
           | [sizeof=100, align=4]

*** Dumping AST Record Layout
         0 | struct rcu_work
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   struct callback_head rcu
        44 |     struct callback_head * next
        48 |     void (*)(struct callback_head *) func
        52 |   struct workqueue_struct * wq
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct codetag
         0 |   unsigned int flags
         4 |   unsigned int lineno
         8 |   const char * modname
        12 |   const char * function
        16 |   const char * filename
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct kmsan_context_state
         0 |   char[800] param_tls
       800 |   char[800] retval_tls
      1600 |   char[800] va_arg_tls
      2400 |   char[800] va_arg_origin_tls
      3200 |   u64 va_arg_overflow_size_tls
      3208 |   char[800] param_origin_tls
      4008 |   u32 retval_origin_tls
           | [sizeof=4016, align=8]

*** Dumping AST Record Layout
         0 | struct timerqueue_node
         0 |   struct rb_node node
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
        16 |   ktime_t expires
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct user_regs_struct
         0 |   unsigned long r0
         4 |   unsigned long r1
         8 |   unsigned long r2
        12 |   unsigned long r3
        16 |   unsigned long r4
        20 |   unsigned long r5
        24 |   unsigned long r6
        28 |   unsigned long r7
        32 |   unsigned long r8
        36 |   unsigned long r9
        40 |   unsigned long r10
        44 |   unsigned long r11
        48 |   unsigned long r12
        52 |   unsigned long r13
        56 |   unsigned long r14
        60 |   unsigned long r15
        64 |   unsigned long r16
        68 |   unsigned long r17
        72 |   unsigned long r18
        76 |   unsigned long r19
        80 |   unsigned long r20
        84 |   unsigned long r21
        88 |   unsigned long r22
        92 |   unsigned long r23
        96 |   unsigned long r24
       100 |   unsigned long r25
       104 |   unsigned long r26
       108 |   unsigned long r27
       112 |   unsigned long r28
       116 |   unsigned long r29
       120 |   unsigned long r30
       124 |   unsigned long r31
       128 |   unsigned long sa0
       132 |   unsigned long lc0
       136 |   unsigned long sa1
       140 |   unsigned long lc1
       144 |   unsigned long m0
       148 |   unsigned long m1
       152 |   unsigned long usr
       156 |   unsigned long p3_0
       160 |   unsigned long gp
       164 |   unsigned long ugp
       168 |   unsigned long pc
       172 |   unsigned long cause
       176 |   unsigned long badva
       180 |   unsigned long cs0
       184 |   unsigned long cs1
       188 |   unsigned long pad1
           | [sizeof=192, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2)
         0 |   __kernel_pid_t _pid
         4 |   __kernel_uid32_t _uid
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union sigval
         0 |   int sival_int
         0 |   void * sival_ptr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2)
         0 |   __kernel_timer_t _tid
         4 |   int _overrun
         8 |   union sigval _sigval
         8 |     int sival_int
         8 |     void * sival_ptr
        12 |   int _sys_private
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2)
         0 |   __kernel_pid_t _pid
         4 |   __kernel_uid32_t _uid
         8 |   union sigval _sigval
         8 |     int sival_int
         8 |     void * sival_ptr
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2)
         0 |   __kernel_pid_t _pid
         4 |   __kernel_uid32_t _uid
         8 |   int _status
        12 |   __kernel_clock_t _utime
        16 |   __kernel_clock_t _stime
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4)
         0 |   char[4] _dummy_bnd
         4 |   void * _lower
         8 |   void * _upper
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4)
         0 |   char[4] _dummy_pkey
         4 |   __u32 _pkey
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4)
         0 |   unsigned long _data
         4 |   __u32 _type
         8 |   __u32 _flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3)
         0 |   int _trapno
         0 |   short _addr_lsb
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
         0 |     char[4] _dummy_bnd
         4 |     void * _lower
         8 |     void * _upper
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
         0 |     char[4] _dummy_pkey
         4 |     __u32 _pkey
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
         0 |     unsigned long _data
         4 |     __u32 _type
         8 |     __u32 _flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2)
         0 |   void * _addr
         4 |   union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
         4 |     int _trapno
         4 |     short _addr_lsb
         4 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
         4 |       char[4] _dummy_bnd
         8 |       void * _lower
        12 |       void * _upper
         4 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
         4 |       char[4] _dummy_pkey
         8 |       __u32 _pkey
         4 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
         4 |       unsigned long _data
         8 |       __u32 _type
        12 |       __u32 _flags
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2)
         0 |   long _band
         4 |   int _fd
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2)
         0 |   void * _call_addr
         4 |   int _syscall
         8 |   unsigned int _arch
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union __sifields
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
         0 |     __kernel_pid_t _pid
         4 |     __kernel_uid32_t _uid
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
         0 |     __kernel_timer_t _tid
         4 |     int _overrun
         8 |     union sigval _sigval
         8 |       int sival_int
         8 |       void * sival_ptr
        12 |     int _sys_private
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
         0 |     __kernel_pid_t _pid
         4 |     __kernel_uid32_t _uid
         8 |     union sigval _sigval
         8 |       int sival_int
         8 |       void * sival_ptr
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
         0 |     __kernel_pid_t _pid
         4 |     __kernel_uid32_t _uid
         8 |     int _status
        12 |     __kernel_clock_t _utime
        16 |     __kernel_clock_t _stime
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
         0 |     void * _addr
         4 |     union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
         4 |       int _trapno
         4 |       short _addr_lsb
         4 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
         4 |         char[4] _dummy_bnd
         8 |         void * _lower
        12 |         void * _upper
         4 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
         4 |         char[4] _dummy_pkey
         8 |         __u32 _pkey
         4 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
         4 |         unsigned long _data
         8 |         __u32 _type
        12 |         __u32 _flags
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
         0 |     long _band
         4 |     int _fd
         0 |   struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
         0 |     void * _call_addr
         4 |     int _syscall
         8 |     unsigned int _arch
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct siginfo::(anonymous at ../include/uapi/asm-generic/siginfo.h:136:3)
         0 |   int si_signo
         4 |   int si_errno
         8 |   int si_code
        12 |   union __sifields _sifields
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
        12 |       __kernel_timer_t _tid
        16 |       int _overrun
        20 |       union sigval _sigval
        20 |         int sival_int
        20 |         void * sival_ptr
        24 |       int _sys_private
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        20 |       union sigval _sigval
        20 |         int sival_int
        20 |         void * sival_ptr
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        20 |       int _status
        24 |       __kernel_clock_t _utime
        28 |       __kernel_clock_t _stime
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
        12 |       void * _addr
        16 |       union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
        16 |         int _trapno
        16 |         short _addr_lsb
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
        16 |           char[4] _dummy_bnd
        20 |           void * _lower
        24 |           void * _upper
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
        16 |           char[4] _dummy_pkey
        20 |           __u32 _pkey
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
        16 |           unsigned long _data
        20 |           __u32 _type
        24 |           __u32 _flags
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
        12 |       long _band
        16 |       int _fd
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
        12 |       void * _call_addr
        16 |       int _syscall
        20 |       unsigned int _arch
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | union siginfo::(anonymous at ../include/uapi/asm-generic/siginfo.h:135:2)
         0 |   struct siginfo::(anonymous at ../include/uapi/asm-generic/siginfo.h:136:3) 
         0 |     int si_signo
         4 |     int si_errno
         8 |     int si_code
        12 |     union __sifields _sifields
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
        12 |         __kernel_timer_t _tid
        16 |         int _overrun
        20 |         union sigval _sigval
        20 |           int sival_int
        20 |           void * sival_ptr
        24 |         int _sys_private
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        20 |         union sigval _sigval
        20 |           int sival_int
        20 |           void * sival_ptr
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        20 |         int _status
        24 |         __kernel_clock_t _utime
        28 |         __kernel_clock_t _stime
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
        12 |         void * _addr
        16 |         union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
        16 |           int _trapno
        16 |           short _addr_lsb
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
        16 |             char[4] _dummy_bnd
        20 |             void * _lower
        24 |             void * _upper
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
        16 |             char[4] _dummy_pkey
        20 |             __u32 _pkey
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
        16 |             unsigned long _data
        20 |             __u32 _type
        24 |             __u32 _flags
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
        12 |         long _band
        16 |         int _fd
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
        12 |         void * _call_addr
        16 |         int _syscall
        20 |         unsigned int _arch
         0 |   int[32] _si_pad
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct kernel_siginfo::(anonymous at ../include/linux/signal_types.h:13:2)
         0 |   int si_signo
         4 |   int si_errno
         8 |   int si_code
        12 |   union __sifields _sifields
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
        12 |       __kernel_timer_t _tid
        16 |       int _overrun
        20 |       union sigval _sigval
        20 |         int sival_int
        20 |         void * sival_ptr
        24 |       int _sys_private
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        20 |       union sigval _sigval
        20 |         int sival_int
        20 |         void * sival_ptr
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
        12 |       __kernel_pid_t _pid
        16 |       __kernel_uid32_t _uid
        20 |       int _status
        24 |       __kernel_clock_t _utime
        28 |       __kernel_clock_t _stime
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
        12 |       void * _addr
        16 |       union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
        16 |         int _trapno
        16 |         short _addr_lsb
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
        16 |           char[4] _dummy_bnd
        20 |           void * _lower
        24 |           void * _upper
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
        16 |           char[4] _dummy_pkey
        20 |           __u32 _pkey
        16 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
        16 |           unsigned long _data
        20 |           __u32 _type
        24 |           __u32 _flags
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
        12 |       long _band
        16 |       int _fd
        12 |     struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
        12 |       void * _call_addr
        16 |       int _syscall
        20 |       unsigned int _arch
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | sigset_t
         0 |   unsigned long[2] sig
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sigaction
         0 |   __sighandler_t sa_handler
         4 |   unsigned long sa_flags
         8 |   sigset_t sa_mask
         8 |     unsigned long[2] sig
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct k_sigaction
         0 |   struct sigaction sa
         0 |     __sighandler_t sa_handler
         4 |     unsigned long sa_flags
         8 |     sigset_t sa_mask
         8 |       unsigned long[2] sig
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct timerqueue_head
         0 |   struct rb_root_cached rb_root
         0 |     struct rb_root rb_root
         0 |       struct rb_node * rb_node
         4 |     struct rb_node * rb_leftmost
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct posix_cputimer_base
         0 |   u64 nextevt
         8 |   struct timerqueue_head tqhead
         8 |     struct rb_root_cached rb_root
         8 |       struct rb_root rb_root
         8 |         struct rb_node * rb_node
        12 |       struct rb_node * rb_leftmost
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct load_weight
         0 |   unsigned long weight
         4 |   u32 inv_weight
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct rcu_special::(unnamed at ../include/linux/sched.h:725:2)
         0 |   u8 blocked
         1 |   u8 need_qs
         2 |   u8 exp_hint
         3 |   u8 need_mb
           | [sizeof=4, align=1]

*** Dumping AST Record Layout
         0 | struct held_lock
         0 |   u64 prev_chain_key
         8 |   unsigned long acquire_ip
        12 |   struct lockdep_map * instance
        16 |   struct lockdep_map * nest_lock
        24 |   u64 waittime_stamp
        32 |   u64 holdtime_stamp
   40:0-12 |   unsigned int class_idx
    41:5-6 |   unsigned int irq_context
    41:7-7 |   unsigned int trylock
    42:0-1 |   unsigned int read
    42:2-2 |   unsigned int check
    42:3-3 |   unsigned int hardirqs_off
    42:4-4 |   unsigned int sync
   42:5-15 |   unsigned int references
        44 |   unsigned int pin_count
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct sched_entity
         0 |   struct load_weight load
         0 |     unsigned long weight
         4 |     u32 inv_weight
         8 |   struct rb_node run_node
         8 |     unsigned long __rb_parent_color
        12 |     struct rb_node * rb_right
        16 |     struct rb_node * rb_left
        24 |   u64 deadline
        32 |   u64 min_vruntime
        40 |   struct list_head group_node
        40 |     struct list_head * next
        44 |     struct list_head * prev
        48 |   unsigned int on_rq
        56 |   u64 exec_start
        64 |   u64 sum_exec_runtime
        72 |   u64 prev_sum_exec_runtime
        80 |   u64 vruntime
        88 |   s64 vlag
        96 |   u64 slice
       104 |   u64 nr_migrations
           | [sizeof=112, align=8]

*** Dumping AST Record Layout
         0 | struct sched_rt_entity
         0 |   struct list_head run_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   unsigned long timeout
        12 |   unsigned long watchdog_stamp
        16 |   unsigned int time_slice
        20 |   unsigned short on_rq
        22 |   unsigned short on_list
        24 |   struct sched_rt_entity * back
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct hrtimer
         0 |   struct timerqueue_node node
         0 |     struct rb_node node
         0 |       unsigned long __rb_parent_color
         4 |       struct rb_node * rb_right
         8 |       struct rb_node * rb_left
        16 |     ktime_t expires
        24 |   ktime_t _softexpires
        32 |   enum hrtimer_restart (*)(struct hrtimer *) function
        36 |   struct hrtimer_clock_base * base
        40 |   u8 state
        41 |   u8 is_rel
        42 |   u8 is_soft
        43 |   u8 is_hard
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct sched_dl_entity
         0 |   struct rb_node rb_node
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
        16 |   u64 dl_runtime
        24 |   u64 dl_deadline
        32 |   u64 dl_period
        40 |   u64 dl_bw
        48 |   u64 dl_density
        56 |   s64 runtime
        64 |   u64 deadline
        72 |   unsigned int flags
    76:0-0 |   unsigned int dl_throttled
    76:1-1 |   unsigned int dl_yielded
    76:2-2 |   unsigned int dl_non_contending
    76:3-3 |   unsigned int dl_overrun
    76:4-4 |   unsigned int dl_server
        80 |   struct hrtimer dl_timer
        80 |     struct timerqueue_node node
        80 |       struct rb_node node
        80 |         unsigned long __rb_parent_color
        84 |         struct rb_node * rb_right
        88 |         struct rb_node * rb_left
        96 |       ktime_t expires
       104 |     ktime_t _softexpires
       112 |     enum hrtimer_restart (*)(struct hrtimer *) function
       116 |     struct hrtimer_clock_base * base
       120 |     u8 state
       121 |     u8 is_rel
       122 |     u8 is_soft
       123 |     u8 is_hard
       128 |   struct hrtimer inactive_timer
       128 |     struct timerqueue_node node
       128 |       struct rb_node node
       128 |         unsigned long __rb_parent_color
       132 |         struct rb_node * rb_right
       136 |         struct rb_node * rb_left
       144 |       ktime_t expires
       152 |     ktime_t _softexpires
       160 |     enum hrtimer_restart (*)(struct hrtimer *) function
       164 |     struct hrtimer_clock_base * base
       168 |     u8 state
       169 |     u8 is_rel
       170 |     u8 is_soft
       171 |     u8 is_hard
       176 |   struct rq * rq
       180 |   dl_server_has_tasks_f server_has_tasks
       184 |   dl_server_pick_f server_pick
       188 |   struct sched_dl_entity * pi_se
           | [sizeof=192, align=8]

*** Dumping AST Record Layout
         0 | struct sched_statistics
           | [sizeof=0, align=32]

*** Dumping AST Record Layout
         0 | union rcu_special
         0 |   struct rcu_special::(unnamed at ../include/linux/sched.h:725:2) b
         0 |     u8 blocked
         1 |     u8 need_qs
         2 |     u8 exp_hint
         3 |     u8 need_mb
         0 |   u32 s
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sched_info
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | union restart_block::(anonymous at ../include/linux/restart_block.h:42:4)
         0 |   struct __kernel_timespec * rmtp
         0 |   struct old_timespec32 * compat_rmtp
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct restart_block::(unnamed at ../include/linux/restart_block.h:39:3)
         0 |   clockid_t clockid
         4 |   enum timespec_type type
         8 |   union restart_block::(anonymous at ../include/linux/restart_block.h:42:4) 
         8 |     struct __kernel_timespec * rmtp
         8 |     struct old_timespec32 * compat_rmtp
        16 |   u64 expires
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct restart_block::(unnamed at ../include/linux/restart_block.h:49:3)
         0 |   struct pollfd * ufds
         4 |   int nfds
         8 |   int has_timeout
        12 |   unsigned long tv_sec
        16 |   unsigned long tv_nsec
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | union restart_block::(anonymous at ../include/linux/restart_block.h:28:2)
         0 |   struct restart_block::(unnamed at ../include/linux/restart_block.h:30:3) futex
         0 |     u32 * uaddr
         4 |     u32 val
         8 |     u32 flags
        12 |     u32 bitset
        16 |     u64 time
        24 |     u32 * uaddr2
         0 |   struct restart_block::(unnamed at ../include/linux/restart_block.h:39:3) nanosleep
         0 |     clockid_t clockid
         4 |     enum timespec_type type
         8 |     union restart_block::(anonymous at ../include/linux/restart_block.h:42:4) 
         8 |       struct __kernel_timespec * rmtp
         8 |       struct old_timespec32 * compat_rmtp
        16 |     u64 expires
         0 |   struct restart_block::(unnamed at ../include/linux/restart_block.h:49:3) poll
         0 |     struct pollfd * ufds
         4 |     int nfds
         8 |     int has_timeout
        12 |     unsigned long tv_sec
        16 |     unsigned long tv_nsec
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct restart_block
         0 |   unsigned long arch_data
         4 |   long (*)(struct restart_block *) fn
         8 |   union restart_block::(anonymous at ../include/linux/restart_block.h:28:2) 
         8 |     struct restart_block::(unnamed at ../include/linux/restart_block.h:30:3) futex
         8 |       u32 * uaddr
        12 |       u32 val
        16 |       u32 flags
        20 |       u32 bitset
        24 |       u64 time
        32 |       u32 * uaddr2
         8 |     struct restart_block::(unnamed at ../include/linux/restart_block.h:39:3) nanosleep
         8 |       clockid_t clockid
        12 |       enum timespec_type type
        16 |       union restart_block::(anonymous at ../include/linux/restart_block.h:42:4) 
        16 |         struct __kernel_timespec * rmtp
        16 |         struct old_timespec32 * compat_rmtp
        24 |       u64 expires
         8 |     struct restart_block::(unnamed at ../include/linux/restart_block.h:49:3) poll
         8 |       struct pollfd * ufds
        12 |       int nfds
        16 |       int has_timeout
        20 |       unsigned long tv_sec
        24 |       unsigned long tv_nsec
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct prev_cputime
         0 |   u64 utime
         8 |   u64 stime
        16 |   struct raw_spinlock lock
        16 |     arch_spinlock_t raw_lock
        16 |       volatile unsigned int slock
        20 |     unsigned int magic
        24 |     unsigned int owner_cpu
        28 |     void * owner
        32 |     struct lockdep_map dep_map
        32 |       struct lock_class_key * key
        36 |       struct lock_class *[2] class_cache
        44 |       const char * name
        48 |       u8 wait_type_outer
        49 |       u8 wait_type_inner
        50 |       u8 lock_type
        52 |       int cpu
        56 |       unsigned long ip
           | [sizeof=64, align=8]

*** Dumping AST Record Layout
         0 | struct posix_cputimers
         0 |   struct posix_cputimer_base[3] bases
        48 |   unsigned int timers_active
        52 |   unsigned int expiry_active
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct sysv_sem
         0 |   struct sem_undo_list * undo_list
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sysv_shm
         0 |   struct list_head shm_clist
         0 |     struct list_head * next
         4 |     struct list_head * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sigpending
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   sigset_t signal
         8 |     unsigned long[2] sig
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct seccomp
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct syscall_user_dispatch
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct wake_q_node
         0 |   struct wake_q_node * next
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct irqtrace_events
         0 |   unsigned int irq_events
         4 |   unsigned long hardirq_enable_ip
         8 |   unsigned long hardirq_disable_ip
        12 |   unsigned int hardirq_enable_event
        16 |   unsigned int hardirq_disable_event
        20 |   unsigned long softirq_disable_ip
        24 |   unsigned long softirq_enable_ip
        28 |   unsigned int softirq_disable_event
        32 |   unsigned int softirq_enable_event
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct task_io_accounting
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct tlbflush_unmap_batch
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct page_frag
         0 |   struct page * page
         4 |   __u16 offset
         6 |   __u16 size
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct kmap_ctrl
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct thread_struct
         0 |   void * switch_sp
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct task_struct
         0 |   unsigned int __state
         4 |   unsigned int saved_state
         8 |   void * stack
        12 |   struct refcount_struct usage
        12 |     atomic_t refs
        12 |       int counter
        16 |   unsigned int flags
        20 |   unsigned int ptrace
        24 |   int on_rq
        28 |   int prio
        32 |   int static_prio
        36 |   int normal_prio
        40 |   unsigned int rt_priority
        48 |   struct sched_entity se
        48 |     struct load_weight load
        48 |       unsigned long weight
        52 |       u32 inv_weight
        56 |     struct rb_node run_node
        56 |       unsigned long __rb_parent_color
        60 |       struct rb_node * rb_right
        64 |       struct rb_node * rb_left
        72 |     u64 deadline
        80 |     u64 min_vruntime
        88 |     struct list_head group_node
        88 |       struct list_head * next
        92 |       struct list_head * prev
        96 |     unsigned int on_rq
       104 |     u64 exec_start
       112 |     u64 sum_exec_runtime
       120 |     u64 prev_sum_exec_runtime
       128 |     u64 vruntime
       136 |     s64 vlag
       144 |     u64 slice
       152 |     u64 nr_migrations
       160 |   struct sched_rt_entity rt
       160 |     struct list_head run_list
       160 |       struct list_head * next
       164 |       struct list_head * prev
       168 |     unsigned long timeout
       172 |     unsigned long watchdog_stamp
       176 |     unsigned int time_slice
       180 |     unsigned short on_rq
       182 |     unsigned short on_list
       184 |     struct sched_rt_entity * back
       192 |   struct sched_dl_entity dl
       192 |     struct rb_node rb_node
       192 |       unsigned long __rb_parent_color
       196 |       struct rb_node * rb_right
       200 |       struct rb_node * rb_left
       208 |     u64 dl_runtime
       216 |     u64 dl_deadline
       224 |     u64 dl_period
       232 |     u64 dl_bw
       240 |     u64 dl_density
       248 |     s64 runtime
       256 |     u64 deadline
       264 |     unsigned int flags
   268:0-0 |     unsigned int dl_throttled
   268:1-1 |     unsigned int dl_yielded
   268:2-2 |     unsigned int dl_non_contending
   268:3-3 |     unsigned int dl_overrun
   268:4-4 |     unsigned int dl_server
       272 |     struct hrtimer dl_timer
       272 |       struct timerqueue_node node
       272 |         struct rb_node node
       272 |           unsigned long __rb_parent_color
       276 |           struct rb_node * rb_right
       280 |           struct rb_node * rb_left
       288 |         ktime_t expires
       296 |       ktime_t _softexpires
       304 |       enum hrtimer_restart (*)(struct hrtimer *) function
       308 |       struct hrtimer_clock_base * base
       312 |       u8 state
       313 |       u8 is_rel
       314 |       u8 is_soft
       315 |       u8 is_hard
       320 |     struct hrtimer inactive_timer
       320 |       struct timerqueue_node node
       320 |         struct rb_node node
       320 |           unsigned long __rb_parent_color
       324 |           struct rb_node * rb_right
       328 |           struct rb_node * rb_left
       336 |         ktime_t expires
       344 |       ktime_t _softexpires
       352 |       enum hrtimer_restart (*)(struct hrtimer *) function
       356 |       struct hrtimer_clock_base * base
       360 |       u8 state
       361 |       u8 is_rel
       362 |       u8 is_soft
       363 |       u8 is_hard
       368 |     struct rq * rq
       372 |     dl_server_has_tasks_f server_has_tasks
       376 |     dl_server_pick_f server_pick
       380 |     struct sched_dl_entity * pi_se
       384 |   struct sched_dl_entity * dl_server
       388 |   const struct sched_class * sched_class
       416 |   struct sched_statistics stats
       416 |   unsigned int policy
       420 |   unsigned long max_allowed_capacity
       424 |   int nr_cpus_allowed
       428 |   const cpumask_t * cpus_ptr
       432 |   cpumask_t * user_cpus_ptr
       436 |   struct cpumask cpus_mask
       436 |     unsigned long[1] bits
       440 |   void * migration_pending
       444 |   unsigned short migration_flags
       448 |   int trc_reader_nesting
       452 |   int trc_ipi_to_cpu
       456 |   union rcu_special trc_reader_special
       456 |     struct rcu_special::(unnamed at ../include/linux/sched.h:725:2) b
       456 |       u8 blocked
       457 |       u8 need_qs
       458 |       u8 exp_hint
       459 |       u8 need_mb
       456 |     u32 s
       460 |   struct list_head trc_holdout_list
       460 |     struct list_head * next
       464 |     struct list_head * prev
       468 |   struct list_head trc_blkd_node
       468 |     struct list_head * next
       472 |     struct list_head * prev
       476 |   int trc_blkd_cpu
       480 |   struct sched_info sched_info
       480 |   struct list_head tasks
       480 |     struct list_head * next
       484 |     struct list_head * prev
       488 |   struct mm_struct * mm
       492 |   struct mm_struct * active_mm
       496 |   struct address_space * faults_disabled_mapping
       500 |   int exit_state
       504 |   int exit_code
       508 |   int exit_signal
       512 |   int pdeath_signal
       516 |   unsigned long jobctl
       520 |   unsigned int personality
   524:0-0 |   unsigned int sched_reset_on_fork
   524:1-1 |   unsigned int sched_contributes_to_load
   524:2-2 |   unsigned int sched_migrated
     528:- |   unsigned int 
   528:0-0 |   unsigned int sched_remote_wakeup
   528:1-1 |   unsigned int sched_rt_mutex
   528:2-2 |   unsigned int in_execve
   528:3-3 |   unsigned int in_iowait
   528:4-4 |   unsigned int in_lru_fault
   528:5-5 |   unsigned int in_memstall
   528:6-6 |   unsigned int in_page_owner
       532 |   unsigned long atomic_flags
       536 |   struct restart_block restart_block
       536 |     unsigned long arch_data
       540 |     long (*)(struct restart_block *) fn
       544 |     union restart_block::(anonymous at ../include/linux/restart_block.h:28:2) 
       544 |       struct restart_block::(unnamed at ../include/linux/restart_block.h:30:3) futex
       544 |         u32 * uaddr
       548 |         u32 val
       552 |         u32 flags
       556 |         u32 bitset
       560 |         u64 time
       568 |         u32 * uaddr2
       544 |       struct restart_block::(unnamed at ../include/linux/restart_block.h:39:3) nanosleep
       544 |         clockid_t clockid
       548 |         enum timespec_type type
       552 |         union restart_block::(anonymous at ../include/linux/restart_block.h:42:4) 
       552 |           struct __kernel_timespec * rmtp
       552 |           struct old_timespec32 * compat_rmtp
       560 |         u64 expires
       544 |       struct restart_block::(unnamed at ../include/linux/restart_block.h:49:3) poll
       544 |         struct pollfd * ufds
       548 |         int nfds
       552 |         int has_timeout
       556 |         unsigned long tv_sec
       560 |         unsigned long tv_nsec
       576 |   pid_t pid
       580 |   pid_t tgid
       584 |   struct task_struct * real_parent
       588 |   struct task_struct * parent
       592 |   struct list_head children
       592 |     struct list_head * next
       596 |     struct list_head * prev
       600 |   struct list_head sibling
       600 |     struct list_head * next
       604 |     struct list_head * prev
       608 |   struct task_struct * group_leader
       612 |   struct list_head ptraced
       612 |     struct list_head * next
       616 |     struct list_head * prev
       620 |   struct list_head ptrace_entry
       620 |     struct list_head * next
       624 |     struct list_head * prev
       628 |   struct pid * thread_pid
       632 |   struct hlist_node[4] pid_links
       664 |   struct list_head thread_node
       664 |     struct list_head * next
       668 |     struct list_head * prev
       672 |   struct completion * vfork_done
       676 |   int * set_child_tid
       680 |   int * clear_child_tid
       684 |   void * worker_private
       688 |   u64 utime
       696 |   u64 stime
       704 |   u64 gtime
       712 |   struct prev_cputime prev_cputime
       712 |     u64 utime
       720 |     u64 stime
       728 |     struct raw_spinlock lock
       728 |       arch_spinlock_t raw_lock
       728 |         volatile unsigned int slock
       732 |       unsigned int magic
       736 |       unsigned int owner_cpu
       740 |       void * owner
       744 |       struct lockdep_map dep_map
       744 |         struct lock_class_key * key
       748 |         struct lock_class *[2] class_cache
       756 |         const char * name
       760 |         u8 wait_type_outer
       761 |         u8 wait_type_inner
       762 |         u8 lock_type
       764 |         int cpu
       768 |         unsigned long ip
       776 |   unsigned long nvcsw
       780 |   unsigned long nivcsw
       784 |   u64 start_time
       792 |   u64 start_boottime
       800 |   unsigned long min_flt
       804 |   unsigned long maj_flt
       808 |   struct posix_cputimers posix_cputimers
       808 |     struct posix_cputimer_base[3] bases
       856 |     unsigned int timers_active
       860 |     unsigned int expiry_active
       864 |   const struct cred * ptracer_cred
       868 |   const struct cred * real_cred
       872 |   const struct cred * cred
       876 |   struct key * cached_requested_key
       880 |   char[16] comm
       896 |   struct nameidata * nameidata
       900 |   struct sysv_sem sysvsem
       900 |     struct sem_undo_list * undo_list
       904 |   struct sysv_shm sysvshm
       904 |     struct list_head shm_clist
       904 |       struct list_head * next
       908 |       struct list_head * prev
       912 |   struct fs_struct * fs
       916 |   struct files_struct * files
       920 |   struct nsproxy * nsproxy
       924 |   struct signal_struct * signal
       928 |   struct sighand_struct * sighand
       932 |   sigset_t blocked
       932 |     unsigned long[2] sig
       940 |   sigset_t real_blocked
       940 |     unsigned long[2] sig
       948 |   sigset_t saved_sigmask
       948 |     unsigned long[2] sig
       956 |   struct sigpending pending
       956 |     struct list_head list
       956 |       struct list_head * next
       960 |       struct list_head * prev
       964 |     sigset_t signal
       964 |       unsigned long[2] sig
       972 |   unsigned long sas_ss_sp
       976 |   size_t sas_ss_size
       980 |   unsigned int sas_ss_flags
       984 |   struct callback_head * task_works
       988 |   struct seccomp seccomp
       988 |   struct syscall_user_dispatch syscall_dispatch
       992 |   u64 parent_exec_id
      1000 |   u64 self_exec_id
      1008 |   struct spinlock alloc_lock
      1008 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1008 |       struct raw_spinlock rlock
      1008 |         arch_spinlock_t raw_lock
      1008 |           volatile unsigned int slock
      1012 |         unsigned int magic
      1016 |         unsigned int owner_cpu
      1020 |         void * owner
      1024 |         struct lockdep_map dep_map
      1024 |           struct lock_class_key * key
      1028 |           struct lock_class *[2] class_cache
      1036 |           const char * name
      1040 |           u8 wait_type_outer
      1041 |           u8 wait_type_inner
      1042 |           u8 lock_type
      1044 |           int cpu
      1048 |           unsigned long ip
      1008 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1008 |         u8[16] __padding
      1024 |         struct lockdep_map dep_map
      1024 |           struct lock_class_key * key
      1028 |           struct lock_class *[2] class_cache
      1036 |           const char * name
      1040 |           u8 wait_type_outer
      1041 |           u8 wait_type_inner
      1042 |           u8 lock_type
      1044 |           int cpu
      1048 |           unsigned long ip
      1052 |   struct raw_spinlock pi_lock
      1052 |     arch_spinlock_t raw_lock
      1052 |       volatile unsigned int slock
      1056 |     unsigned int magic
      1060 |     unsigned int owner_cpu
      1064 |     void * owner
      1068 |     struct lockdep_map dep_map
      1068 |       struct lock_class_key * key
      1072 |       struct lock_class *[2] class_cache
      1080 |       const char * name
      1084 |       u8 wait_type_outer
      1085 |       u8 wait_type_inner
      1086 |       u8 lock_type
      1088 |       int cpu
      1092 |       unsigned long ip
      1096 |   struct wake_q_node wake_q
      1096 |     struct wake_q_node * next
      1100 |   struct rb_root_cached pi_waiters
      1100 |     struct rb_root rb_root
      1100 |       struct rb_node * rb_node
      1104 |     struct rb_node * rb_leftmost
      1108 |   struct task_struct * pi_top_task
      1112 |   struct rt_mutex_waiter * pi_blocked_on
      1116 |   struct mutex_waiter * blocked_on
      1120 |   struct irqtrace_events irqtrace
      1120 |     unsigned int irq_events
      1124 |     unsigned long hardirq_enable_ip
      1128 |     unsigned long hardirq_disable_ip
      1132 |     unsigned int hardirq_enable_event
      1136 |     unsigned int hardirq_disable_event
      1140 |     unsigned long softirq_disable_ip
      1144 |     unsigned long softirq_enable_ip
      1148 |     unsigned int softirq_disable_event
      1152 |     unsigned int softirq_enable_event
      1156 |   unsigned int hardirq_threaded
      1160 |   u64 hardirq_chain_key
      1168 |   int softirqs_enabled
      1172 |   int softirq_context
      1176 |   int irq_config
      1184 |   u64 curr_chain_key
      1192 |   int lockdep_depth
      1196 |   unsigned int lockdep_recursion
      1200 |   struct held_lock[48] held_locks
      3504 |   void * journal_info
      3508 |   struct bio_list * bio_list
      3512 |   struct blk_plug * plug
      3516 |   struct reclaim_state * reclaim_state
      3520 |   struct io_context * io_context
      3524 |   struct capture_control * capture_control
      3528 |   unsigned long ptrace_message
      3532 |   kernel_siginfo_t * last_siginfo
      3536 |   struct task_io_accounting ioac
      3536 |   unsigned int psi_flags
      3540 |   struct robust_list_head * robust_list
      3544 |   struct list_head pi_state_list
      3544 |     struct list_head * next
      3548 |     struct list_head * prev
      3552 |   struct futex_pi_state * pi_state_cache
      3556 |   struct mutex futex_exit_mutex
      3556 |     atomic_t owner
      3556 |       int counter
      3560 |     struct raw_spinlock wait_lock
      3560 |       arch_spinlock_t raw_lock
      3560 |         volatile unsigned int slock
      3564 |       unsigned int magic
      3568 |       unsigned int owner_cpu
      3572 |       void * owner
      3576 |       struct lockdep_map dep_map
      3576 |         struct lock_class_key * key
      3580 |         struct lock_class *[2] class_cache
      3588 |         const char * name
      3592 |         u8 wait_type_outer
      3593 |         u8 wait_type_inner
      3594 |         u8 lock_type
      3596 |         int cpu
      3600 |         unsigned long ip
      3604 |     struct list_head wait_list
      3604 |       struct list_head * next
      3608 |       struct list_head * prev
      3612 |     void * magic
      3616 |     struct lockdep_map dep_map
      3616 |       struct lock_class_key * key
      3620 |       struct lock_class *[2] class_cache
      3628 |       const char * name
      3632 |       u8 wait_type_outer
      3633 |       u8 wait_type_inner
      3634 |       u8 lock_type
      3636 |       int cpu
      3640 |       unsigned long ip
      3644 |   unsigned int futex_state
      3648 |   struct tlbflush_unmap_batch tlb_ubc
      3648 |   struct pipe_inode_info * splice_pipe
      3652 |   struct page_frag task_frag
      3652 |     struct page * page
      3656 |     __u16 offset
      3658 |     __u16 size
      3660 |   int nr_dirtied
      3664 |   int nr_dirtied_pause
      3668 |   unsigned long dirty_paused_when
      3672 |   u64 timer_slack_ns
      3680 |   u64 default_timer_slack_ns
      3688 |   struct kunit * kunit_test
      3692 |   unsigned long trace_recursion
      3696 |   struct kmap_ctrl kmap_ctrl
      3696 |   struct callback_head rcu
      3696 |     struct callback_head * next
      3700 |     void (*)(struct callback_head *) func
      3704 |   struct refcount_struct rcu_users
      3704 |     atomic_t refs
      3704 |       int counter
      3708 |   int pagefault_disabled
      3712 |   struct task_struct * oom_reaper_list
      3716 |   struct timer_list oom_reaper_timer
      3716 |     struct hlist_node entry
      3716 |       struct hlist_node * next
      3720 |       struct hlist_node ** pprev
      3724 |     unsigned long expires
      3728 |     void (*)(struct timer_list *) function
      3732 |     u32 flags
      3736 |     struct lockdep_map lockdep_map
      3736 |       struct lock_class_key * key
      3740 |       struct lock_class *[2] class_cache
      3748 |       const char * name
      3752 |       u8 wait_type_outer
      3753 |       u8 wait_type_inner
      3754 |       u8 lock_type
      3756 |       int cpu
      3760 |       unsigned long ip
      3764 |   struct bpf_local_storage * bpf_storage
      3768 |   struct bpf_run_ctx * bpf_ctx
      3772 |   struct bpf_net_context * bpf_net_context
      3776 |   struct thread_struct thread
      3776 |     void * switch_sp
           | [sizeof=3840, align=64]

*** Dumping AST Record Layout
         0 | struct pcpu_group_info
         0 |   int nr_units
         4 |   unsigned long base_offset
         8 |   unsigned int * cpu_map
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct percpu_counter
         0 |   s64 count
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct page::(anonymous at ../include/linux/mm_types.h:92:5)
         0 |   void * __filler
         4 |   unsigned int mlock_count
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union page::(anonymous at ../include/linux/mm_types.h:88:4)
         0 |   struct list_head lru
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         0 |     void * __filler
         4 |     unsigned int mlock_count
         0 |   struct list_head buddy_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct list_head pcp_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union page::(anonymous at ../include/linux/mm_types.h:105:4)
         0 |   unsigned long index
         0 |   unsigned long share
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct page::(anonymous at ../include/linux/mm_types.h:82:3)
         0 |   union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         0 |     struct list_head lru
         0 |       struct list_head * next
         4 |       struct list_head * prev
         0 |     struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         0 |       void * __filler
         4 |       unsigned int mlock_count
         0 |     struct list_head buddy_list
         0 |       struct list_head * next
         4 |       struct list_head * prev
         0 |     struct list_head pcp_list
         0 |       struct list_head * next
         4 |       struct list_head * prev
         8 |   struct address_space * mapping
        12 |   union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        12 |     unsigned long index
        12 |     unsigned long share
        16 |   unsigned long private
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct folio::(anonymous at ../include/linux/mm_types.h:333:5)
         0 |   void * __filler
         4 |   unsigned int mlock_count
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union folio::(anonymous at ../include/linux/mm_types.h:330:4)
         0 |   struct list_head lru
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct folio::(anonymous at ../include/linux/mm_types.h:333:5) 
         0 |     void * __filler
         4 |     unsigned int mlock_count
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | swp_entry_t
         0 |   unsigned long val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union folio::(anonymous at ../include/linux/mm_types.h:343:4)
         0 |   void * private
         0 |   swp_entry_t swap
         0 |     unsigned long val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct folio::(anonymous at ../include/linux/mm_types.h:327:3)
         0 |   unsigned long flags
         4 |   union folio::(anonymous at ../include/linux/mm_types.h:330:4) 
         4 |     struct list_head lru
         4 |       struct list_head * next
         8 |       struct list_head * prev
         4 |     struct folio::(anonymous at ../include/linux/mm_types.h:333:5) 
         4 |       void * __filler
         8 |       unsigned int mlock_count
        12 |   struct address_space * mapping
        16 |   unsigned long index
        20 |   union folio::(anonymous at ../include/linux/mm_types.h:343:4) 
        20 |     void * private
        20 |     swp_entry_t swap
        20 |       unsigned long val
        24 |   atomic_t _mapcount
        24 |     int counter
        28 |   atomic_t _refcount
        28 |     int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct folio::(anonymous at ../include/linux/mm_types.h:365:3)
         0 |   unsigned long _flags_1
         4 |   unsigned long _head_1
         8 |   atomic_t _large_mapcount
         8 |     int counter
        12 |   atomic_t _entire_mapcount
        12 |     int counter
        16 |   atomic_t _nr_pages_mapped
        16 |     int counter
        20 |   atomic_t _pincount
        20 |     int counter
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct folio::(anonymous at ../include/linux/mm_types.h:381:3)
         0 |   unsigned long _flags_2
         4 |   unsigned long _head_2
         8 |   void * _hugetlb_subpool
        12 |   void * _hugetlb_cgroup
        16 |   void * _hugetlb_cgroup_rsvd
        20 |   void * _hugetlb_hwpoison
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct page::(anonymous at ../include/linux/mm_types.h:117:3)
         0 |   unsigned long pp_magic
         4 |   struct page_pool * pp
         8 |   unsigned long _pp_mapping_pad
        12 |   unsigned long dma_addr
        16 |   atomic_t pp_ref_count
        16 |     int counter
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct page::(anonymous at ../include/linux/mm_types.h:128:3)
         0 |   unsigned long compound_head
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct page::(anonymous at ../include/linux/mm_types.h:131:3)
         0 |   struct dev_pagemap * pgmap
         4 |   void * zone_device_data
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union page::(anonymous at ../include/linux/mm_types.h:81:2)
         0 |   struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         0 |     union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         0 |       struct list_head lru
         0 |         struct list_head * next
         4 |         struct list_head * prev
         0 |       struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         0 |         void * __filler
         4 |         unsigned int mlock_count
         0 |       struct list_head buddy_list
         0 |         struct list_head * next
         4 |         struct list_head * prev
         0 |       struct list_head pcp_list
         0 |         struct list_head * next
         4 |         struct list_head * prev
         8 |     struct address_space * mapping
        12 |     union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        12 |       unsigned long index
        12 |       unsigned long share
        16 |     unsigned long private
         0 |   struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         0 |     unsigned long pp_magic
         4 |     struct page_pool * pp
         8 |     unsigned long _pp_mapping_pad
        12 |     unsigned long dma_addr
        16 |     atomic_t pp_ref_count
        16 |       int counter
         0 |   struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         0 |     unsigned long compound_head
         0 |   struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         0 |     struct dev_pagemap * pgmap
         4 |     void * zone_device_data
         0 |   struct callback_head callback_head
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | union page::(anonymous at ../include/linux/mm_types.h:151:2)
         0 |   unsigned int page_type
         0 |   atomic_t _mapcount
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct page
         0 |   unsigned long flags
         4 |   union page::(anonymous at ../include/linux/mm_types.h:81:2) 
         4 |     struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         4 |       union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         4 |         struct list_head lru
         4 |           struct list_head * next
         8 |           struct list_head * prev
         4 |         struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         4 |           void * __filler
         8 |           unsigned int mlock_count
         4 |         struct list_head buddy_list
         4 |           struct list_head * next
         8 |           struct list_head * prev
         4 |         struct list_head pcp_list
         4 |           struct list_head * next
         8 |           struct list_head * prev
        12 |       struct address_space * mapping
        16 |       union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        16 |         unsigned long index
        16 |         unsigned long share
        20 |       unsigned long private
         4 |     struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         4 |       unsigned long pp_magic
         8 |       struct page_pool * pp
        12 |       unsigned long _pp_mapping_pad
        16 |       unsigned long dma_addr
        20 |       atomic_t pp_ref_count
        20 |         int counter
         4 |     struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         4 |       unsigned long compound_head
         4 |     struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         4 |       struct dev_pagemap * pgmap
         8 |       void * zone_device_data
         4 |     struct callback_head callback_head
         4 |       struct callback_head * next
         8 |       void (*)(struct callback_head *) func
        24 |   union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        24 |     unsigned int page_type
        24 |     atomic_t _mapcount
        24 |       int counter
        28 |   atomic_t _refcount
        28 |     int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | union folio::(anonymous at ../include/linux/mm_types.h:326:2)
         0 |   struct folio::(anonymous at ../include/linux/mm_types.h:327:3) 
         0 |     unsigned long flags
         4 |     union folio::(anonymous at ../include/linux/mm_types.h:330:4) 
         4 |       struct list_head lru
         4 |         struct list_head * next
         8 |         struct list_head * prev
         4 |       struct folio::(anonymous at ../include/linux/mm_types.h:333:5) 
         4 |         void * __filler
         8 |         unsigned int mlock_count
        12 |     struct address_space * mapping
        16 |     unsigned long index
        20 |     union folio::(anonymous at ../include/linux/mm_types.h:343:4) 
        20 |       void * private
        20 |       swp_entry_t swap
        20 |         unsigned long val
        24 |     atomic_t _mapcount
        24 |       int counter
        28 |     atomic_t _refcount
        28 |       int counter
         0 |   struct page page
         0 |     unsigned long flags
         4 |     union page::(anonymous at ../include/linux/mm_types.h:81:2) 
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         4 |         union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         4 |           struct list_head lru
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         4 |             void * __filler
         8 |             unsigned int mlock_count
         4 |           struct list_head buddy_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct list_head pcp_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
        12 |         struct address_space * mapping
        16 |         union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        16 |           unsigned long index
        16 |           unsigned long share
        20 |         unsigned long private
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         4 |         unsigned long pp_magic
         8 |         struct page_pool * pp
        12 |         unsigned long _pp_mapping_pad
        16 |         unsigned long dma_addr
        20 |         atomic_t pp_ref_count
        20 |           int counter
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         4 |         unsigned long compound_head
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         4 |         struct dev_pagemap * pgmap
         8 |         void * zone_device_data
         4 |       struct callback_head callback_head
         4 |         struct callback_head * next
         8 |         void (*)(struct callback_head *) func
        24 |     union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        24 |       unsigned int page_type
        24 |       atomic_t _mapcount
        24 |         int counter
        28 |     atomic_t _refcount
        28 |       int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | union folio::(anonymous at ../include/linux/mm_types.h:364:2)
         0 |   struct folio::(anonymous at ../include/linux/mm_types.h:365:3) 
         0 |     unsigned long _flags_1
         4 |     unsigned long _head_1
         8 |     atomic_t _large_mapcount
         8 |       int counter
        12 |     atomic_t _entire_mapcount
        12 |       int counter
        16 |     atomic_t _nr_pages_mapped
        16 |       int counter
        20 |     atomic_t _pincount
        20 |       int counter
         0 |   struct page __page_1
         0 |     unsigned long flags
         4 |     union page::(anonymous at ../include/linux/mm_types.h:81:2) 
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         4 |         union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         4 |           struct list_head lru
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         4 |             void * __filler
         8 |             unsigned int mlock_count
         4 |           struct list_head buddy_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct list_head pcp_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
        12 |         struct address_space * mapping
        16 |         union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        16 |           unsigned long index
        16 |           unsigned long share
        20 |         unsigned long private
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         4 |         unsigned long pp_magic
         8 |         struct page_pool * pp
        12 |         unsigned long _pp_mapping_pad
        16 |         unsigned long dma_addr
        20 |         atomic_t pp_ref_count
        20 |           int counter
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         4 |         unsigned long compound_head
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         4 |         struct dev_pagemap * pgmap
         8 |         void * zone_device_data
         4 |       struct callback_head callback_head
         4 |         struct callback_head * next
         8 |         void (*)(struct callback_head *) func
        24 |     union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        24 |       unsigned int page_type
        24 |       atomic_t _mapcount
        24 |         int counter
        28 |     atomic_t _refcount
        28 |       int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct folio::(anonymous at ../include/linux/mm_types.h:391:3)
         0 |   unsigned long _flags_2a
         4 |   unsigned long _head_2a
         8 |   struct list_head _deferred_list
         8 |     struct list_head * next
        12 |     struct list_head * prev
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union folio::(anonymous at ../include/linux/mm_types.h:380:2)
         0 |   struct folio::(anonymous at ../include/linux/mm_types.h:381:3) 
         0 |     unsigned long _flags_2
         4 |     unsigned long _head_2
         8 |     void * _hugetlb_subpool
        12 |     void * _hugetlb_cgroup
        16 |     void * _hugetlb_cgroup_rsvd
        20 |     void * _hugetlb_hwpoison
         0 |   struct folio::(anonymous at ../include/linux/mm_types.h:391:3) 
         0 |     unsigned long _flags_2a
         4 |     unsigned long _head_2a
         8 |     struct list_head _deferred_list
         8 |       struct list_head * next
        12 |       struct list_head * prev
         0 |   struct page __page_2
         0 |     unsigned long flags
         4 |     union page::(anonymous at ../include/linux/mm_types.h:81:2) 
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         4 |         union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         4 |           struct list_head lru
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         4 |             void * __filler
         8 |             unsigned int mlock_count
         4 |           struct list_head buddy_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
         4 |           struct list_head pcp_list
         4 |             struct list_head * next
         8 |             struct list_head * prev
        12 |         struct address_space * mapping
        16 |         union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        16 |           unsigned long index
        16 |           unsigned long share
        20 |         unsigned long private
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         4 |         unsigned long pp_magic
         8 |         struct page_pool * pp
        12 |         unsigned long _pp_mapping_pad
        16 |         unsigned long dma_addr
        20 |         atomic_t pp_ref_count
        20 |           int counter
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         4 |         unsigned long compound_head
         4 |       struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         4 |         struct dev_pagemap * pgmap
         8 |         void * zone_device_data
         4 |       struct callback_head callback_head
         4 |         struct callback_head * next
         8 |         void (*)(struct callback_head *) func
        24 |     union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        24 |       unsigned int page_type
        24 |       atomic_t _mapcount
        24 |         int counter
        28 |     atomic_t _refcount
        28 |       int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct folio
         0 |   union folio::(anonymous at ../include/linux/mm_types.h:326:2) 
         0 |     struct folio::(anonymous at ../include/linux/mm_types.h:327:3) 
         0 |       unsigned long flags
         4 |       union folio::(anonymous at ../include/linux/mm_types.h:330:4) 
         4 |         struct list_head lru
         4 |           struct list_head * next
         8 |           struct list_head * prev
         4 |         struct folio::(anonymous at ../include/linux/mm_types.h:333:5) 
         4 |           void * __filler
         8 |           unsigned int mlock_count
        12 |       struct address_space * mapping
        16 |       unsigned long index
        20 |       union folio::(anonymous at ../include/linux/mm_types.h:343:4) 
        20 |         void * private
        20 |         swp_entry_t swap
        20 |           unsigned long val
        24 |       atomic_t _mapcount
        24 |         int counter
        28 |       atomic_t _refcount
        28 |         int counter
         0 |     struct page page
         0 |       unsigned long flags
         4 |       union page::(anonymous at ../include/linux/mm_types.h:81:2) 
         4 |         struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
         4 |           union page::(anonymous at ../include/linux/mm_types.h:88:4) 
         4 |             struct list_head lru
         4 |               struct list_head * next
         8 |               struct list_head * prev
         4 |             struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
         4 |               void * __filler
         8 |               unsigned int mlock_count
         4 |             struct list_head buddy_list
         4 |               struct list_head * next
         8 |               struct list_head * prev
         4 |             struct list_head pcp_list
         4 |               struct list_head * next
         8 |               struct list_head * prev
        12 |           struct address_space * mapping
        16 |           union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        16 |             unsigned long index
        16 |             unsigned long share
        20 |           unsigned long private
         4 |         struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
         4 |           unsigned long pp_magic
         8 |           struct page_pool * pp
        12 |           unsigned long _pp_mapping_pad
        16 |           unsigned long dma_addr
        20 |           atomic_t pp_ref_count
        20 |             int counter
         4 |         struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
         4 |           unsigned long compound_head
         4 |         struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
         4 |           struct dev_pagemap * pgmap
         8 |           void * zone_device_data
         4 |         struct callback_head callback_head
         4 |           struct callback_head * next
         8 |           void (*)(struct callback_head *) func
        24 |       union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        24 |         unsigned int page_type
        24 |         atomic_t _mapcount
        24 |           int counter
        28 |       atomic_t _refcount
        28 |         int counter
        32 |   union folio::(anonymous at ../include/linux/mm_types.h:364:2) 
        32 |     struct folio::(anonymous at ../include/linux/mm_types.h:365:3) 
        32 |       unsigned long _flags_1
        36 |       unsigned long _head_1
        40 |       atomic_t _large_mapcount
        40 |         int counter
        44 |       atomic_t _entire_mapcount
        44 |         int counter
        48 |       atomic_t _nr_pages_mapped
        48 |         int counter
        52 |       atomic_t _pincount
        52 |         int counter
        32 |     struct page __page_1
        32 |       unsigned long flags
        36 |       union page::(anonymous at ../include/linux/mm_types.h:81:2) 
        36 |         struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
        36 |           union page::(anonymous at ../include/linux/mm_types.h:88:4) 
        36 |             struct list_head lru
        36 |               struct list_head * next
        40 |               struct list_head * prev
        36 |             struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
        36 |               void * __filler
        40 |               unsigned int mlock_count
        36 |             struct list_head buddy_list
        36 |               struct list_head * next
        40 |               struct list_head * prev
        36 |             struct list_head pcp_list
        36 |               struct list_head * next
        40 |               struct list_head * prev
        44 |           struct address_space * mapping
        48 |           union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        48 |             unsigned long index
        48 |             unsigned long share
        52 |           unsigned long private
        36 |         struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
        36 |           unsigned long pp_magic
        40 |           struct page_pool * pp
        44 |           unsigned long _pp_mapping_pad
        48 |           unsigned long dma_addr
        52 |           atomic_t pp_ref_count
        52 |             int counter
        36 |         struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
        36 |           unsigned long compound_head
        36 |         struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
        36 |           struct dev_pagemap * pgmap
        40 |           void * zone_device_data
        36 |         struct callback_head callback_head
        36 |           struct callback_head * next
        40 |           void (*)(struct callback_head *) func
        56 |       union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        56 |         unsigned int page_type
        56 |         atomic_t _mapcount
        56 |           int counter
        60 |       atomic_t _refcount
        60 |         int counter
        64 |   union folio::(anonymous at ../include/linux/mm_types.h:380:2) 
        64 |     struct folio::(anonymous at ../include/linux/mm_types.h:381:3) 
        64 |       unsigned long _flags_2
        68 |       unsigned long _head_2
        72 |       void * _hugetlb_subpool
        76 |       void * _hugetlb_cgroup
        80 |       void * _hugetlb_cgroup_rsvd
        84 |       void * _hugetlb_hwpoison
        64 |     struct folio::(anonymous at ../include/linux/mm_types.h:391:3) 
        64 |       unsigned long _flags_2a
        68 |       unsigned long _head_2a
        72 |       struct list_head _deferred_list
        72 |         struct list_head * next
        76 |         struct list_head * prev
        64 |     struct page __page_2
        64 |       unsigned long flags
        68 |       union page::(anonymous at ../include/linux/mm_types.h:81:2) 
        68 |         struct page::(anonymous at ../include/linux/mm_types.h:82:3) 
        68 |           union page::(anonymous at ../include/linux/mm_types.h:88:4) 
        68 |             struct list_head lru
        68 |               struct list_head * next
        72 |               struct list_head * prev
        68 |             struct page::(anonymous at ../include/linux/mm_types.h:92:5) 
        68 |               void * __filler
        72 |               unsigned int mlock_count
        68 |             struct list_head buddy_list
        68 |               struct list_head * next
        72 |               struct list_head * prev
        68 |             struct list_head pcp_list
        68 |               struct list_head * next
        72 |               struct list_head * prev
        76 |           struct address_space * mapping
        80 |           union page::(anonymous at ../include/linux/mm_types.h:105:4) 
        80 |             unsigned long index
        80 |             unsigned long share
        84 |           unsigned long private
        68 |         struct page::(anonymous at ../include/linux/mm_types.h:117:3) 
        68 |           unsigned long pp_magic
        72 |           struct page_pool * pp
        76 |           unsigned long _pp_mapping_pad
        80 |           unsigned long dma_addr
        84 |           atomic_t pp_ref_count
        84 |             int counter
        68 |         struct page::(anonymous at ../include/linux/mm_types.h:128:3) 
        68 |           unsigned long compound_head
        68 |         struct page::(anonymous at ../include/linux/mm_types.h:131:3) 
        68 |           struct dev_pagemap * pgmap
        72 |           void * zone_device_data
        68 |         struct callback_head callback_head
        68 |           struct callback_head * next
        72 |           void (*)(struct callback_head *) func
        88 |       union page::(anonymous at ../include/linux/mm_types.h:151:2) 
        88 |         unsigned int page_type
        88 |         atomic_t _mapcount
        88 |           int counter
        92 |       atomic_t _refcount
        92 |         int counter
           | [sizeof=96, align=4]

*** Dumping AST Record Layout
         0 | struct ptdesc::(anonymous at ../include/linux/mm_types.h:463:3)
         0 |   unsigned long _pt_pad_1
         4 |   pgtable_t pmd_huge_pte
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ptdesc::(anonymous at ../include/linux/mm_types.h:460:2)
         0 |   struct callback_head pt_rcu_head
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
         0 |   struct list_head pt_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct ptdesc::(anonymous at ../include/linux/mm_types.h:463:3) 
         0 |     unsigned long _pt_pad_1
         4 |     pgtable_t pmd_huge_pte
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ptdesc::(anonymous at ../include/linux/mm_types.h:470:2)
         0 |   unsigned long pt_index
         0 |   struct mm_struct * pt_mm
         0 |   atomic_t pt_frag_refcount
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union ptdesc::(anonymous at ../include/linux/mm_types.h:476:2)
         0 |   unsigned long _pt_pad_2
         0 |   spinlock_t * ptl
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ptdesc
         0 |   unsigned long __page_flags
         4 |   union ptdesc::(anonymous at ../include/linux/mm_types.h:460:2) 
         4 |     struct callback_head pt_rcu_head
         4 |       struct callback_head * next
         8 |       void (*)(struct callback_head *) func
         4 |     struct list_head pt_list
         4 |       struct list_head * next
         8 |       struct list_head * prev
         4 |     struct ptdesc::(anonymous at ../include/linux/mm_types.h:463:3) 
         4 |       unsigned long _pt_pad_1
         8 |       pgtable_t pmd_huge_pte
        12 |   unsigned long __page_mapping
        16 |   union ptdesc::(anonymous at ../include/linux/mm_types.h:470:2) 
        16 |     unsigned long pt_index
        16 |     struct mm_struct * pt_mm
        16 |     atomic_t pt_frag_refcount
        16 |       int counter
        20 |   union ptdesc::(anonymous at ../include/linux/mm_types.h:476:2) 
        20 |     unsigned long _pt_pad_2
        20 |     spinlock_t * ptl
        24 |   unsigned int __page_type
        28 |   atomic_t __page_refcount
        28 |     int counter
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct vm_area_struct::(anonymous at ../include/linux/mm_types.h:668:3)
         0 |   unsigned long vm_start
         4 |   unsigned long vm_end
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union vm_area_struct::(anonymous at ../include/linux/mm_types.h:667:2)
         0 |   struct vm_area_struct::(anonymous at ../include/linux/mm_types.h:668:3) 
         0 |     unsigned long vm_start
         4 |     unsigned long vm_end
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct mm_struct::(anonymous at ../include/linux/mm_types.h:785:3)
         0 |   atomic_t mm_count
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct mm_context
         0 |   unsigned long long generation
         8 |   unsigned long ptbase
        12 |   struct hexagon_vdso * vdso
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct uprobes_state
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct mm_struct::(anonymous at ../include/linux/mm_types.h:780:2)
         0 |   struct mm_struct::(anonymous at ../include/linux/mm_types.h:785:3) 
         0 |     atomic_t mm_count
         0 |       int counter
         4 |   struct maple_tree mm_mt
         4 |     union maple_tree::(anonymous at ../include/linux/maple_tree.h:220:2) 
         4 |       struct spinlock ma_lock
         4 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         4 |           struct raw_spinlock rlock
         4 |             arch_spinlock_t raw_lock
         4 |               volatile unsigned int slock
         8 |             unsigned int magic
        12 |             unsigned int owner_cpu
        16 |             void * owner
        20 |             struct lockdep_map dep_map
        20 |               struct lock_class_key * key
        24 |               struct lock_class *[2] class_cache
        32 |               const char * name
        36 |               u8 wait_type_outer
        37 |               u8 wait_type_inner
        38 |               u8 lock_type
        40 |               int cpu
        44 |               unsigned long ip
         4 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         4 |             u8[16] __padding
        20 |             struct lockdep_map dep_map
        20 |               struct lock_class_key * key
        24 |               struct lock_class *[2] class_cache
        32 |               const char * name
        36 |               u8 wait_type_outer
        37 |               u8 wait_type_inner
        38 |               u8 lock_type
        40 |               int cpu
        44 |               unsigned long ip
         4 |       lockdep_map_p ma_external_lock
        48 |     unsigned int ma_flags
        52 |     void * ma_root
        56 |   unsigned long mmap_base
        60 |   unsigned long mmap_legacy_base
        64 |   unsigned long task_size
        68 |   pgd_t * pgd
        72 |   atomic_t membarrier_state
        72 |     int counter
        76 |   atomic_t mm_users
        76 |     int counter
        80 |   atomic_t pgtables_bytes
        80 |     int counter
        84 |   int map_count
        88 |   struct spinlock page_table_lock
        88 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        88 |       struct raw_spinlock rlock
        88 |         arch_spinlock_t raw_lock
        88 |           volatile unsigned int slock
        92 |         unsigned int magic
        96 |         unsigned int owner_cpu
       100 |         void * owner
       104 |         struct lockdep_map dep_map
       104 |           struct lock_class_key * key
       108 |           struct lock_class *[2] class_cache
       116 |           const char * name
       120 |           u8 wait_type_outer
       121 |           u8 wait_type_inner
       122 |           u8 lock_type
       124 |           int cpu
       128 |           unsigned long ip
        88 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        88 |         u8[16] __padding
       104 |         struct lockdep_map dep_map
       104 |           struct lock_class_key * key
       108 |           struct lock_class *[2] class_cache
       116 |           const char * name
       120 |           u8 wait_type_outer
       121 |           u8 wait_type_inner
       122 |           u8 lock_type
       124 |           int cpu
       128 |           unsigned long ip
       132 |   struct rw_semaphore mmap_lock
       132 |     atomic_t count
       132 |       int counter
       136 |     atomic_t owner
       136 |       int counter
       140 |     struct raw_spinlock wait_lock
       140 |       arch_spinlock_t raw_lock
       140 |         volatile unsigned int slock
       144 |       unsigned int magic
       148 |       unsigned int owner_cpu
       152 |       void * owner
       156 |       struct lockdep_map dep_map
       156 |         struct lock_class_key * key
       160 |         struct lock_class *[2] class_cache
       168 |         const char * name
       172 |         u8 wait_type_outer
       173 |         u8 wait_type_inner
       174 |         u8 lock_type
       176 |         int cpu
       180 |         unsigned long ip
       184 |     struct list_head wait_list
       184 |       struct list_head * next
       188 |       struct list_head * prev
       192 |     void * magic
       196 |     struct lockdep_map dep_map
       196 |       struct lock_class_key * key
       200 |       struct lock_class *[2] class_cache
       208 |       const char * name
       212 |       u8 wait_type_outer
       213 |       u8 wait_type_inner
       214 |       u8 lock_type
       216 |       int cpu
       220 |       unsigned long ip
       224 |   struct list_head mmlist
       224 |     struct list_head * next
       228 |     struct list_head * prev
       232 |   unsigned long hiwater_rss
       236 |   unsigned long hiwater_vm
       240 |   unsigned long total_vm
       244 |   unsigned long locked_vm
       248 |   atomic64_t pinned_vm
       248 |     s64 counter
       256 |   unsigned long data_vm
       260 |   unsigned long exec_vm
       264 |   unsigned long stack_vm
       268 |   unsigned long def_flags
       272 |   struct seqcount write_protect_seq
       272 |     unsigned int sequence
       276 |     struct lockdep_map dep_map
       276 |       struct lock_class_key * key
       280 |       struct lock_class *[2] class_cache
       288 |       const char * name
       292 |       u8 wait_type_outer
       293 |       u8 wait_type_inner
       294 |       u8 lock_type
       296 |       int cpu
       300 |       unsigned long ip
       304 |   struct spinlock arg_lock
       304 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       304 |       struct raw_spinlock rlock
       304 |         arch_spinlock_t raw_lock
       304 |           volatile unsigned int slock
       308 |         unsigned int magic
       312 |         unsigned int owner_cpu
       316 |         void * owner
       320 |         struct lockdep_map dep_map
       320 |           struct lock_class_key * key
       324 |           struct lock_class *[2] class_cache
       332 |           const char * name
       336 |           u8 wait_type_outer
       337 |           u8 wait_type_inner
       338 |           u8 lock_type
       340 |           int cpu
       344 |           unsigned long ip
       304 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       304 |         u8[16] __padding
       320 |         struct lockdep_map dep_map
       320 |           struct lock_class_key * key
       324 |           struct lock_class *[2] class_cache
       332 |           const char * name
       336 |           u8 wait_type_outer
       337 |           u8 wait_type_inner
       338 |           u8 lock_type
       340 |           int cpu
       344 |           unsigned long ip
       348 |   unsigned long start_code
       352 |   unsigned long end_code
       356 |   unsigned long start_data
       360 |   unsigned long end_data
       364 |   unsigned long start_brk
       368 |   unsigned long brk
       372 |   unsigned long start_stack
       376 |   unsigned long arg_start
       380 |   unsigned long arg_end
       384 |   unsigned long env_start
       388 |   unsigned long env_end
       392 |   unsigned long[46] saved_auxv
       576 |   struct percpu_counter[4] rss_stat
       608 |   struct linux_binfmt * binfmt
       616 |   struct mm_context context
       616 |     unsigned long long generation
       624 |     unsigned long ptbase
       628 |     struct hexagon_vdso * vdso
       632 |   unsigned long flags
       636 |   struct spinlock ioctx_lock
       636 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       636 |       struct raw_spinlock rlock
       636 |         arch_spinlock_t raw_lock
       636 |           volatile unsigned int slock
       640 |         unsigned int magic
       644 |         unsigned int owner_cpu
       648 |         void * owner
       652 |         struct lockdep_map dep_map
       652 |           struct lock_class_key * key
       656 |           struct lock_class *[2] class_cache
       664 |           const char * name
       668 |           u8 wait_type_outer
       669 |           u8 wait_type_inner
       670 |           u8 lock_type
       672 |           int cpu
       676 |           unsigned long ip
       636 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       636 |         u8[16] __padding
       652 |         struct lockdep_map dep_map
       652 |           struct lock_class_key * key
       656 |           struct lock_class *[2] class_cache
       664 |           const char * name
       668 |           u8 wait_type_outer
       669 |           u8 wait_type_inner
       670 |           u8 lock_type
       672 |           int cpu
       676 |           unsigned long ip
       680 |   struct kioctx_table * ioctx_table
       684 |   struct user_namespace * user_ns
       688 |   struct file * exe_file
       692 |   atomic_t tlb_flush_pending
       692 |     int counter
       696 |   struct uprobes_state uprobes_state
       696 |   struct work_struct async_put_work
       696 |     atomic_t data
       696 |       int counter
       700 |     struct list_head entry
       700 |       struct list_head * next
       704 |       struct list_head * prev
       708 |     work_func_t func
       712 |     struct lockdep_map lockdep_map
       712 |       struct lock_class_key * key
       716 |       struct lock_class *[2] class_cache
       724 |       const char * name
       728 |       u8 wait_type_outer
       729 |       u8 wait_type_inner
       730 |       u8 lock_type
       732 |       int cpu
       736 |       unsigned long ip
           | [sizeof=744, align=8]

*** Dumping AST Record Layout
         0 | struct mm_struct
         0 |   struct mm_struct::(anonymous at ../include/linux/mm_types.h:780:2) 
         0 |     struct mm_struct::(anonymous at ../include/linux/mm_types.h:785:3) 
         0 |       atomic_t mm_count
         0 |         int counter
         4 |     struct maple_tree mm_mt
         4 |       union maple_tree::(anonymous at ../include/linux/maple_tree.h:220:2) 
         4 |         struct spinlock ma_lock
         4 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         4 |             struct raw_spinlock rlock
         4 |               arch_spinlock_t raw_lock
         4 |                 volatile unsigned int slock
         8 |               unsigned int magic
        12 |               unsigned int owner_cpu
        16 |               void * owner
        20 |               struct lockdep_map dep_map
        20 |                 struct lock_class_key * key
        24 |                 struct lock_class *[2] class_cache
        32 |                 const char * name
        36 |                 u8 wait_type_outer
        37 |                 u8 wait_type_inner
        38 |                 u8 lock_type
        40 |                 int cpu
        44 |                 unsigned long ip
         4 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         4 |               u8[16] __padding
        20 |               struct lockdep_map dep_map
        20 |                 struct lock_class_key * key
        24 |                 struct lock_class *[2] class_cache
        32 |                 const char * name
        36 |                 u8 wait_type_outer
        37 |                 u8 wait_type_inner
        38 |                 u8 lock_type
        40 |                 int cpu
        44 |                 unsigned long ip
         4 |         lockdep_map_p ma_external_lock
        48 |       unsigned int ma_flags
        52 |       void * ma_root
        56 |     unsigned long mmap_base
        60 |     unsigned long mmap_legacy_base
        64 |     unsigned long task_size
        68 |     pgd_t * pgd
        72 |     atomic_t membarrier_state
        72 |       int counter
        76 |     atomic_t mm_users
        76 |       int counter
        80 |     atomic_t pgtables_bytes
        80 |       int counter
        84 |     int map_count
        88 |     struct spinlock page_table_lock
        88 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        88 |         struct raw_spinlock rlock
        88 |           arch_spinlock_t raw_lock
        88 |             volatile unsigned int slock
        92 |           unsigned int magic
        96 |           unsigned int owner_cpu
       100 |           void * owner
       104 |           struct lockdep_map dep_map
       104 |             struct lock_class_key * key
       108 |             struct lock_class *[2] class_cache
       116 |             const char * name
       120 |             u8 wait_type_outer
       121 |             u8 wait_type_inner
       122 |             u8 lock_type
       124 |             int cpu
       128 |             unsigned long ip
        88 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        88 |           u8[16] __padding
       104 |           struct lockdep_map dep_map
       104 |             struct lock_class_key * key
       108 |             struct lock_class *[2] class_cache
       116 |             const char * name
       120 |             u8 wait_type_outer
       121 |             u8 wait_type_inner
       122 |             u8 lock_type
       124 |             int cpu
       128 |             unsigned long ip
       132 |     struct rw_semaphore mmap_lock
       132 |       atomic_t count
       132 |         int counter
       136 |       atomic_t owner
       136 |         int counter
       140 |       struct raw_spinlock wait_lock
       140 |         arch_spinlock_t raw_lock
       140 |           volatile unsigned int slock
       144 |         unsigned int magic
       148 |         unsigned int owner_cpu
       152 |         void * owner
       156 |         struct lockdep_map dep_map
       156 |           struct lock_class_key * key
       160 |           struct lock_class *[2] class_cache
       168 |           const char * name
       172 |           u8 wait_type_outer
       173 |           u8 wait_type_inner
       174 |           u8 lock_type
       176 |           int cpu
       180 |           unsigned long ip
       184 |       struct list_head wait_list
       184 |         struct list_head * next
       188 |         struct list_head * prev
       192 |       void * magic
       196 |       struct lockdep_map dep_map
       196 |         struct lock_class_key * key
       200 |         struct lock_class *[2] class_cache
       208 |         const char * name
       212 |         u8 wait_type_outer
       213 |         u8 wait_type_inner
       214 |         u8 lock_type
       216 |         int cpu
       220 |         unsigned long ip
       224 |     struct list_head mmlist
       224 |       struct list_head * next
       228 |       struct list_head * prev
       232 |     unsigned long hiwater_rss
       236 |     unsigned long hiwater_vm
       240 |     unsigned long total_vm
       244 |     unsigned long locked_vm
       248 |     atomic64_t pinned_vm
       248 |       s64 counter
       256 |     unsigned long data_vm
       260 |     unsigned long exec_vm
       264 |     unsigned long stack_vm
       268 |     unsigned long def_flags
       272 |     struct seqcount write_protect_seq
       272 |       unsigned int sequence
       276 |       struct lockdep_map dep_map
       276 |         struct lock_class_key * key
       280 |         struct lock_class *[2] class_cache
       288 |         const char * name
       292 |         u8 wait_type_outer
       293 |         u8 wait_type_inner
       294 |         u8 lock_type
       296 |         int cpu
       300 |         unsigned long ip
       304 |     struct spinlock arg_lock
       304 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       304 |         struct raw_spinlock rlock
       304 |           arch_spinlock_t raw_lock
       304 |             volatile unsigned int slock
       308 |           unsigned int magic
       312 |           unsigned int owner_cpu
       316 |           void * owner
       320 |           struct lockdep_map dep_map
       320 |             struct lock_class_key * key
       324 |             struct lock_class *[2] class_cache
       332 |             const char * name
       336 |             u8 wait_type_outer
       337 |             u8 wait_type_inner
       338 |             u8 lock_type
       340 |             int cpu
       344 |             unsigned long ip
       304 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       304 |           u8[16] __padding
       320 |           struct lockdep_map dep_map
       320 |             struct lock_class_key * key
       324 |             struct lock_class *[2] class_cache
       332 |             const char * name
       336 |             u8 wait_type_outer
       337 |             u8 wait_type_inner
       338 |             u8 lock_type
       340 |             int cpu
       344 |             unsigned long ip
       348 |     unsigned long start_code
       352 |     unsigned long end_code
       356 |     unsigned long start_data
       360 |     unsigned long end_data
       364 |     unsigned long start_brk
       368 |     unsigned long brk
       372 |     unsigned long start_stack
       376 |     unsigned long arg_start
       380 |     unsigned long arg_end
       384 |     unsigned long env_start
       388 |     unsigned long env_end
       392 |     unsigned long[46] saved_auxv
       576 |     struct percpu_counter[4] rss_stat
       608 |     struct linux_binfmt * binfmt
       616 |     struct mm_context context
       616 |       unsigned long long generation
       624 |       unsigned long ptbase
       628 |       struct hexagon_vdso * vdso
       632 |     unsigned long flags
       636 |     struct spinlock ioctx_lock
       636 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       636 |         struct raw_spinlock rlock
       636 |           arch_spinlock_t raw_lock
       636 |             volatile unsigned int slock
       640 |           unsigned int magic
       644 |           unsigned int owner_cpu
       648 |           void * owner
       652 |           struct lockdep_map dep_map
       652 |             struct lock_class_key * key
       656 |             struct lock_class *[2] class_cache
       664 |             const char * name
       668 |             u8 wait_type_outer
       669 |             u8 wait_type_inner
       670 |             u8 lock_type
       672 |             int cpu
       676 |             unsigned long ip
       636 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       636 |           u8[16] __padding
       652 |           struct lockdep_map dep_map
       652 |             struct lock_class_key * key
       656 |             struct lock_class *[2] class_cache
       664 |             const char * name
       668 |             u8 wait_type_outer
       669 |             u8 wait_type_inner
       670 |             u8 lock_type
       672 |             int cpu
       676 |             unsigned long ip
       680 |     struct kioctx_table * ioctx_table
       684 |     struct user_namespace * user_ns
       688 |     struct file * exe_file
       692 |     atomic_t tlb_flush_pending
       692 |       int counter
       696 |     struct uprobes_state uprobes_state
       696 |     struct work_struct async_put_work
       696 |       atomic_t data
       696 |         int counter
       700 |       struct list_head entry
       700 |         struct list_head * next
       704 |         struct list_head * prev
       708 |       work_func_t func
       712 |       struct lockdep_map lockdep_map
       712 |         struct lock_class_key * key
       716 |         struct lock_class *[2] class_cache
       724 |         const char * name
       728 |         u8 wait_type_outer
       729 |         u8 wait_type_inner
       730 |         u8 lock_type
       732 |         int cpu
       736 |         unsigned long ip
       744 |   unsigned long[] cpu_bitmap
           | [sizeof=744, align=8]

*** Dumping AST Record Layout
         0 | local_lock_t
         0 |   struct lockdep_map dep_map
         0 |     struct lock_class_key * key
         4 |     struct lock_class *[2] class_cache
        12 |     const char * name
        16 |     u8 wait_type_outer
        17 |     u8 wait_type_inner
        18 |     u8 lock_type
        20 |     int cpu
        24 |     unsigned long ip
        28 |   struct task_struct * owner
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | class_local_lock_irqsave_t
         0 |   local_lock_t * lock
         4 |   unsigned long flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct hlist_nulls_head
         0 |   struct hlist_nulls_node * first
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct free_area
         0 |   struct list_head[6] free_list
        48 |   unsigned long nr_free
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct zone
         0 |   unsigned long[4] _watermark
        16 |   unsigned long watermark_boost
        20 |   unsigned long nr_reserved_highatomic
        24 |   long[2] lowmem_reserve
        32 |   struct pglist_data * zone_pgdat
        36 |   struct per_cpu_pages * per_cpu_pageset
        40 |   struct per_cpu_zonestat * per_cpu_zonestats
        44 |   int pageset_high_min
        48 |   int pageset_high_max
        52 |   int pageset_batch
        56 |   unsigned long * pageblock_flags
        60 |   unsigned long zone_start_pfn
        64 |   atomic_t managed_pages
        64 |     int counter
        68 |   unsigned long spanned_pages
        72 |   unsigned long present_pages
        76 |   unsigned long cma_pages
        80 |   const char * name
        84 |   unsigned long nr_isolate_pageblock
        88 |   int initialized
        92 |   struct free_area[11] free_area
       664 |   unsigned long flags
       668 |   struct spinlock lock
       668 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       668 |       struct raw_spinlock rlock
       668 |         arch_spinlock_t raw_lock
       668 |           volatile unsigned int slock
       672 |         unsigned int magic
       676 |         unsigned int owner_cpu
       680 |         void * owner
       684 |         struct lockdep_map dep_map
       684 |           struct lock_class_key * key
       688 |           struct lock_class *[2] class_cache
       696 |           const char * name
       700 |           u8 wait_type_outer
       701 |           u8 wait_type_inner
       702 |           u8 lock_type
       704 |           int cpu
       708 |           unsigned long ip
       668 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       668 |         u8[16] __padding
       684 |         struct lockdep_map dep_map
       684 |           struct lock_class_key * key
       688 |           struct lock_class *[2] class_cache
       696 |           const char * name
       700 |           u8 wait_type_outer
       701 |           u8 wait_type_inner
       702 |           u8 lock_type
       704 |           int cpu
       708 |           unsigned long ip
       712 |   unsigned long percpu_drift_mark
       716 |   unsigned long compact_cached_free_pfn
       720 |   unsigned long[2] compact_cached_migrate_pfn
       728 |   unsigned long compact_init_migrate_pfn
       732 |   unsigned long compact_init_free_pfn
       736 |   unsigned int compact_considered
       740 |   unsigned int compact_defer_shift
       744 |   int compact_order_failed
       748 |   bool compact_blockskip_flush
       749 |   bool contiguous
       752 |   atomic_long_t[10] vm_stat
       792 |   atomic_long_t[0] vm_numa_event
           | [sizeof=792, align=4]

*** Dumping AST Record Layout
         0 | struct zoneref
         0 |   struct zone * zone
         4 |   int zone_idx
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct zonelist
         0 |   struct zoneref[3] _zonerefs
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct srcu_struct
         0 |   short[2] srcu_lock_nesting
         4 |   u8 srcu_gp_running
         5 |   u8 srcu_gp_waiting
         8 |   unsigned long srcu_idx
        12 |   unsigned long srcu_idx_max
        16 |   struct swait_queue_head srcu_wq
        16 |     struct raw_spinlock lock
        16 |       arch_spinlock_t raw_lock
        16 |         volatile unsigned int slock
        20 |       unsigned int magic
        24 |       unsigned int owner_cpu
        28 |       void * owner
        32 |       struct lockdep_map dep_map
        32 |         struct lock_class_key * key
        36 |         struct lock_class *[2] class_cache
        44 |         const char * name
        48 |         u8 wait_type_outer
        49 |         u8 wait_type_inner
        50 |         u8 lock_type
        52 |         int cpu
        56 |         unsigned long ip
        60 |     struct list_head task_list
        60 |       struct list_head * next
        64 |       struct list_head * prev
        68 |   struct callback_head * srcu_cb_head
        72 |   struct callback_head ** srcu_cb_tail
        76 |   struct work_struct srcu_work
        76 |     atomic_t data
        76 |       int counter
        80 |     struct list_head entry
        80 |       struct list_head * next
        84 |       struct list_head * prev
        88 |     work_func_t func
        92 |     struct lockdep_map lockdep_map
        92 |       struct lock_class_key * key
        96 |       struct lock_class *[2] class_cache
       104 |       const char * name
       108 |       u8 wait_type_outer
       109 |       u8 wait_type_inner
       110 |       u8 lock_type
       112 |       int cpu
       116 |       unsigned long ip
       120 |   struct lockdep_map dep_map
       120 |     struct lock_class_key * key
       124 |     struct lock_class *[2] class_cache
       132 |     const char * name
       136 |     u8 wait_type_outer
       137 |     u8 wait_type_inner
       138 |     u8 lock_type
       140 |     int cpu
       144 |     unsigned long ip
           | [sizeof=148, align=4]

*** Dumping AST Record Layout
         0 | class_srcu_t
         0 |   struct srcu_struct * lock
         4 |   int idx
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct lru_gen_folio
         0 |   unsigned long max_seq
         4 |   unsigned long[2] min_seq
        12 |   unsigned long[4] timestamps
        28 |   struct list_head[4][2][2] folios
       156 |   long[4][2][2] nr_pages
       220 |   unsigned long[2][4] avg_refaulted
       252 |   unsigned long[2][4] avg_total
       284 |   unsigned long[4][2][3] protected
       380 |   atomic_long_t[4][2][4] evicted
       508 |   atomic_long_t[4][2][4] refaulted
       636 |   bool enabled
       637 |   u8 gen
       638 |   u8 seg
       640 |   struct hlist_nulls_node list
       640 |     struct hlist_nulls_node * next
       644 |     struct hlist_nulls_node ** pprev
           | [sizeof=648, align=4]

*** Dumping AST Record Layout
         0 | struct zswap_lruvec_state
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct lruvec
         0 |   struct list_head[5] lists
        40 |   struct spinlock lru_lock
        40 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        40 |       struct raw_spinlock rlock
        40 |         arch_spinlock_t raw_lock
        40 |           volatile unsigned int slock
        44 |         unsigned int magic
        48 |         unsigned int owner_cpu
        52 |         void * owner
        56 |         struct lockdep_map dep_map
        56 |           struct lock_class_key * key
        60 |           struct lock_class *[2] class_cache
        68 |           const char * name
        72 |           u8 wait_type_outer
        73 |           u8 wait_type_inner
        74 |           u8 lock_type
        76 |           int cpu
        80 |           unsigned long ip
        40 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        40 |         u8[16] __padding
        56 |         struct lockdep_map dep_map
        56 |           struct lock_class_key * key
        60 |           struct lock_class *[2] class_cache
        68 |           const char * name
        72 |           u8 wait_type_outer
        73 |           u8 wait_type_inner
        74 |           u8 lock_type
        76 |           int cpu
        80 |           unsigned long ip
        84 |   unsigned long anon_cost
        88 |   unsigned long file_cost
        92 |   atomic_t nonresident_age
        92 |     int counter
        96 |   unsigned long[2] refaults
       104 |   unsigned long flags
       108 |   struct lru_gen_folio lrugen
       108 |     unsigned long max_seq
       112 |     unsigned long[2] min_seq
       120 |     unsigned long[4] timestamps
       136 |     struct list_head[4][2][2] folios
       264 |     long[4][2][2] nr_pages
       328 |     unsigned long[2][4] avg_refaulted
       360 |     unsigned long[2][4] avg_total
       392 |     unsigned long[4][2][3] protected
       488 |     atomic_long_t[4][2][4] evicted
       616 |     atomic_long_t[4][2][4] refaulted
       744 |     bool enabled
       745 |     u8 gen
       746 |     u8 seg
       748 |     struct hlist_nulls_node list
       748 |       struct hlist_nulls_node * next
       752 |       struct hlist_nulls_node ** pprev
       756 |   struct zswap_lruvec_state zswap_lruvec_state
           | [sizeof=756, align=4]

*** Dumping AST Record Layout
         0 | struct lru_gen_mm_walk
         0 |   struct lruvec * lruvec
         4 |   unsigned long seq
         8 |   unsigned long next_addr
        12 |   int[4][2][2] nr_pages
        76 |   int[6] mm_stats
       100 |   int batched
       104 |   bool can_swap
       105 |   bool force_scan
           | [sizeof=108, align=4]

*** Dumping AST Record Layout
         0 | struct lru_gen_memcg
         0 |   unsigned long seq
         4 |   unsigned long[3] nr_memcgs
        16 |   struct hlist_nulls_head[3][8] fifo
       112 |   struct spinlock lock
       112 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       112 |       struct raw_spinlock rlock
       112 |         arch_spinlock_t raw_lock
       112 |           volatile unsigned int slock
       116 |         unsigned int magic
       120 |         unsigned int owner_cpu
       124 |         void * owner
       128 |         struct lockdep_map dep_map
       128 |           struct lock_class_key * key
       132 |           struct lock_class *[2] class_cache
       140 |           const char * name
       144 |           u8 wait_type_outer
       145 |           u8 wait_type_inner
       146 |           u8 lock_type
       148 |           int cpu
       152 |           unsigned long ip
       112 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       112 |         u8[16] __padding
       128 |         struct lockdep_map dep_map
       128 |           struct lock_class_key * key
       132 |           struct lock_class *[2] class_cache
       140 |           const char * name
       144 |           u8 wait_type_outer
       145 |           u8 wait_type_inner
       146 |           u8 lock_type
       148 |           int cpu
       152 |           unsigned long ip
           | [sizeof=156, align=4]

*** Dumping AST Record Layout
         0 | struct pglist_data
         0 |   struct zone[2] node_zones
      1584 |   struct zonelist[1] node_zonelists
      1608 |   int nr_zones
      1612 |   struct page * node_mem_map
      1616 |   struct page_ext * node_page_ext
      1620 |   unsigned long node_start_pfn
      1624 |   unsigned long node_present_pages
      1628 |   unsigned long node_spanned_pages
      1632 |   int node_id
      1636 |   struct wait_queue_head kswapd_wait
      1636 |     struct spinlock lock
      1636 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1636 |         struct raw_spinlock rlock
      1636 |           arch_spinlock_t raw_lock
      1636 |             volatile unsigned int slock
      1640 |           unsigned int magic
      1644 |           unsigned int owner_cpu
      1648 |           void * owner
      1652 |           struct lockdep_map dep_map
      1652 |             struct lock_class_key * key
      1656 |             struct lock_class *[2] class_cache
      1664 |             const char * name
      1668 |             u8 wait_type_outer
      1669 |             u8 wait_type_inner
      1670 |             u8 lock_type
      1672 |             int cpu
      1676 |             unsigned long ip
      1636 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1636 |           u8[16] __padding
      1652 |           struct lockdep_map dep_map
      1652 |             struct lock_class_key * key
      1656 |             struct lock_class *[2] class_cache
      1664 |             const char * name
      1668 |             u8 wait_type_outer
      1669 |             u8 wait_type_inner
      1670 |             u8 lock_type
      1672 |             int cpu
      1676 |             unsigned long ip
      1680 |     struct list_head head
      1680 |       struct list_head * next
      1684 |       struct list_head * prev
      1688 |   struct wait_queue_head pfmemalloc_wait
      1688 |     struct spinlock lock
      1688 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1688 |         struct raw_spinlock rlock
      1688 |           arch_spinlock_t raw_lock
      1688 |             volatile unsigned int slock
      1692 |           unsigned int magic
      1696 |           unsigned int owner_cpu
      1700 |           void * owner
      1704 |           struct lockdep_map dep_map
      1704 |             struct lock_class_key * key
      1708 |             struct lock_class *[2] class_cache
      1716 |             const char * name
      1720 |             u8 wait_type_outer
      1721 |             u8 wait_type_inner
      1722 |             u8 lock_type
      1724 |             int cpu
      1728 |             unsigned long ip
      1688 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1688 |           u8[16] __padding
      1704 |           struct lockdep_map dep_map
      1704 |             struct lock_class_key * key
      1708 |             struct lock_class *[2] class_cache
      1716 |             const char * name
      1720 |             u8 wait_type_outer
      1721 |             u8 wait_type_inner
      1722 |             u8 lock_type
      1724 |             int cpu
      1728 |             unsigned long ip
      1732 |     struct list_head head
      1732 |       struct list_head * next
      1736 |       struct list_head * prev
      1740 |   wait_queue_head_t[4] reclaim_wait
      1948 |   atomic_t nr_writeback_throttled
      1948 |     int counter
      1952 |   unsigned long nr_reclaim_start
      1956 |   struct task_struct * kswapd
      1960 |   int kswapd_order
      1964 |   enum zone_type kswapd_highest_zoneidx
      1968 |   int kswapd_failures
      1972 |   int kcompactd_max_order
      1976 |   enum zone_type kcompactd_highest_zoneidx
      1980 |   struct wait_queue_head kcompactd_wait
      1980 |     struct spinlock lock
      1980 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1980 |         struct raw_spinlock rlock
      1980 |           arch_spinlock_t raw_lock
      1980 |             volatile unsigned int slock
      1984 |           unsigned int magic
      1988 |           unsigned int owner_cpu
      1992 |           void * owner
      1996 |           struct lockdep_map dep_map
      1996 |             struct lock_class_key * key
      2000 |             struct lock_class *[2] class_cache
      2008 |             const char * name
      2012 |             u8 wait_type_outer
      2013 |             u8 wait_type_inner
      2014 |             u8 lock_type
      2016 |             int cpu
      2020 |             unsigned long ip
      1980 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1980 |           u8[16] __padding
      1996 |           struct lockdep_map dep_map
      1996 |             struct lock_class_key * key
      2000 |             struct lock_class *[2] class_cache
      2008 |             const char * name
      2012 |             u8 wait_type_outer
      2013 |             u8 wait_type_inner
      2014 |             u8 lock_type
      2016 |             int cpu
      2020 |             unsigned long ip
      2024 |     struct list_head head
      2024 |       struct list_head * next
      2028 |       struct list_head * prev
      2032 |   struct task_struct * kcompactd
      2036 |   bool proactive_compact_trigger
      2040 |   unsigned long totalreserve_pages
      2044 |   struct lruvec __lruvec
      2044 |     struct list_head[5] lists
      2084 |     struct spinlock lru_lock
      2084 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2084 |         struct raw_spinlock rlock
      2084 |           arch_spinlock_t raw_lock
      2084 |             volatile unsigned int slock
      2088 |           unsigned int magic
      2092 |           unsigned int owner_cpu
      2096 |           void * owner
      2100 |           struct lockdep_map dep_map
      2100 |             struct lock_class_key * key
      2104 |             struct lock_class *[2] class_cache
      2112 |             const char * name
      2116 |             u8 wait_type_outer
      2117 |             u8 wait_type_inner
      2118 |             u8 lock_type
      2120 |             int cpu
      2124 |             unsigned long ip
      2084 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2084 |           u8[16] __padding
      2100 |           struct lockdep_map dep_map
      2100 |             struct lock_class_key * key
      2104 |             struct lock_class *[2] class_cache
      2112 |             const char * name
      2116 |             u8 wait_type_outer
      2117 |             u8 wait_type_inner
      2118 |             u8 lock_type
      2120 |             int cpu
      2124 |             unsigned long ip
      2128 |     unsigned long anon_cost
      2132 |     unsigned long file_cost
      2136 |     atomic_t nonresident_age
      2136 |       int counter
      2140 |     unsigned long[2] refaults
      2148 |     unsigned long flags
      2152 |     struct lru_gen_folio lrugen
      2152 |       unsigned long max_seq
      2156 |       unsigned long[2] min_seq
      2164 |       unsigned long[4] timestamps
      2180 |       struct list_head[4][2][2] folios
      2308 |       long[4][2][2] nr_pages
      2372 |       unsigned long[2][4] avg_refaulted
      2404 |       unsigned long[2][4] avg_total
      2436 |       unsigned long[4][2][3] protected
      2532 |       atomic_long_t[4][2][4] evicted
      2660 |       atomic_long_t[4][2][4] refaulted
      2788 |       bool enabled
      2789 |       u8 gen
      2790 |       u8 seg
      2792 |       struct hlist_nulls_node list
      2792 |         struct hlist_nulls_node * next
      2796 |         struct hlist_nulls_node ** pprev
      2800 |     struct zswap_lruvec_state zswap_lruvec_state
      2800 |   unsigned long flags
      2804 |   struct lru_gen_mm_walk mm_walk
      2804 |     struct lruvec * lruvec
      2808 |     unsigned long seq
      2812 |     unsigned long next_addr
      2816 |     int[4][2][2] nr_pages
      2880 |     int[6] mm_stats
      2904 |     int batched
      2908 |     bool can_swap
      2909 |     bool force_scan
      2912 |   struct lru_gen_memcg memcg_lru
      2912 |     unsigned long seq
      2916 |     unsigned long[3] nr_memcgs
      2928 |     struct hlist_nulls_head[3][8] fifo
      3024 |     struct spinlock lock
      3024 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      3024 |         struct raw_spinlock rlock
      3024 |           arch_spinlock_t raw_lock
      3024 |             volatile unsigned int slock
      3028 |           unsigned int magic
      3032 |           unsigned int owner_cpu
      3036 |           void * owner
      3040 |           struct lockdep_map dep_map
      3040 |             struct lock_class_key * key
      3044 |             struct lock_class *[2] class_cache
      3052 |             const char * name
      3056 |             u8 wait_type_outer
      3057 |             u8 wait_type_inner
      3058 |             u8 lock_type
      3060 |             int cpu
      3064 |             unsigned long ip
      3024 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      3024 |           u8[16] __padding
      3040 |           struct lockdep_map dep_map
      3040 |             struct lock_class_key * key
      3044 |             struct lock_class *[2] class_cache
      3052 |             const char * name
      3056 |             u8 wait_type_outer
      3057 |             u8 wait_type_inner
      3058 |             u8 lock_type
      3060 |             int cpu
      3064 |             unsigned long ip
      3068 |   struct per_cpu_nodestat * per_cpu_nodestats
      3072 |   atomic_long_t[46] vm_stat
           | [sizeof=3256, align=4]

*** Dumping AST Record Layout
         0 | struct page_frag_cache
         0 |   void * va
         4 |   __u16 offset
         6 |   __u16 size
         8 |   unsigned int pagecnt_bias
        12 |   bool pfmemalloc
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct xarray
         0 |   struct spinlock xa_lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   gfp_t xa_flags
        48 |   void * xa_head
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct xa_state
         0 |   struct xarray * xa
         4 |   unsigned long xa_index
         8 |   unsigned char xa_shift
         9 |   unsigned char xa_sibs
        10 |   unsigned char xa_offset
        11 |   unsigned char xa_pad
        12 |   struct xa_node * xa_node
        16 |   struct xa_node * xa_alloc
        20 |   xa_update_node_t xa_update
        24 |   struct list_lru * xa_lru
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | union xa_node::(anonymous at ../include/linux/xarray.h:1169:2)
         0 |   struct list_head private_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct callback_head callback_head
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union xa_node::(anonymous at ../include/linux/xarray.h:1174:2)
         0 |   unsigned long[3][1] tags
         0 |   unsigned long[3][1] marks
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct xa_node
         0 |   unsigned char shift
         1 |   unsigned char offset
         2 |   unsigned char count
         3 |   unsigned char nr_values
         4 |   struct xa_node * parent
         8 |   struct xarray * array
        12 |   union xa_node::(anonymous at ../include/linux/xarray.h:1169:2) 
        12 |     struct list_head private_list
        12 |       struct list_head * next
        16 |       struct list_head * prev
        12 |     struct callback_head callback_head
        12 |       struct callback_head * next
        16 |       void (*)(struct callback_head *) func
        20 |   void *[16] slots
        84 |   union xa_node::(anonymous at ../include/linux/xarray.h:1174:2) 
        84 |     unsigned long[3][1] tags
        84 |     unsigned long[3][1] marks
           | [sizeof=96, align=4]

*** Dumping AST Record Layout
         0 | struct radix_tree_preload
         0 |   local_lock_t lock
         0 |     struct lockdep_map dep_map
         0 |       struct lock_class_key * key
         4 |       struct lock_class *[2] class_cache
        12 |       const char * name
        16 |       u8 wait_type_outer
        17 |       u8 wait_type_inner
        18 |       u8 lock_type
        20 |       int cpu
        24 |       unsigned long ip
        28 |     struct task_struct * owner
        32 |   unsigned int nr
        36 |   struct xa_node * nodes
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct radix_tree_iter
         0 |   unsigned long index
         4 |   unsigned long next_index
         8 |   unsigned long tags
        12 |   struct xa_node * node
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct idr
         0 |   struct xarray idr_rt
         0 |     struct spinlock xa_lock
         0 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |         struct raw_spinlock rlock
         0 |           arch_spinlock_t raw_lock
         0 |             volatile unsigned int slock
         4 |           unsigned int magic
         8 |           unsigned int owner_cpu
        12 |           void * owner
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |           u8[16] __padding
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
        44 |     gfp_t xa_flags
        48 |     void * xa_head
        52 |   unsigned int idr_base
        56 |   unsigned int idr_next
           | [sizeof=60, align=4]

*** Dumping AST Record Layout
         0 | struct ida
         0 |   struct xarray xa
         0 |     struct spinlock xa_lock
         0 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |         struct raw_spinlock rlock
         0 |           arch_spinlock_t raw_lock
         0 |             volatile unsigned int slock
         4 |           unsigned int magic
         8 |           unsigned int owner_cpu
        12 |           void * owner
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |           u8[16] __padding
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
        44 |     gfp_t xa_flags
        48 |     void * xa_head
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct kernfs_elem_dir
         0 |   unsigned long subdirs
         4 |   struct rb_root children
         4 |     struct rb_node * rb_node
         8 |   struct kernfs_root * root
        12 |   unsigned long rev
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct kernfs_elem_symlink
         0 |   struct kernfs_node * target_kn
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct kernfs_elem_attr
         0 |   const struct kernfs_ops * ops
         4 |   struct kernfs_open_node * open
         8 |   loff_t size
        16 |   struct kernfs_node * notify_next
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | union kernfs_node::(anonymous at ../include/linux/kernfs.h:212:2)
         0 |   struct kernfs_elem_dir dir
         0 |     unsigned long subdirs
         4 |     struct rb_root children
         4 |       struct rb_node * rb_node
         8 |     struct kernfs_root * root
        12 |     unsigned long rev
         0 |   struct kernfs_elem_symlink symlink
         0 |     struct kernfs_node * target_kn
         0 |   struct kernfs_elem_attr attr
         0 |     const struct kernfs_ops * ops
         4 |     struct kernfs_open_node * open
         8 |     loff_t size
        16 |     struct kernfs_node * notify_next
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct kernfs_node
         0 |   atomic_t count
         0 |     int counter
         4 |   atomic_t active
         4 |     int counter
         8 |   struct lockdep_map dep_map
         8 |     struct lock_class_key * key
        12 |     struct lock_class *[2] class_cache
        20 |     const char * name
        24 |     u8 wait_type_outer
        25 |     u8 wait_type_inner
        26 |     u8 lock_type
        28 |     int cpu
        32 |     unsigned long ip
        36 |   struct kernfs_node * parent
        40 |   const char * name
        44 |   struct rb_node rb
        44 |     unsigned long __rb_parent_color
        48 |     struct rb_node * rb_right
        52 |     struct rb_node * rb_left
        56 |   const void * ns
        60 |   unsigned int hash
        64 |   unsigned short flags
        66 |   umode_t mode
        72 |   union kernfs_node::(anonymous at ../include/linux/kernfs.h:212:2) 
        72 |     struct kernfs_elem_dir dir
        72 |       unsigned long subdirs
        76 |       struct rb_root children
        76 |         struct rb_node * rb_node
        80 |       struct kernfs_root * root
        84 |       unsigned long rev
        72 |     struct kernfs_elem_symlink symlink
        72 |       struct kernfs_node * target_kn
        72 |     struct kernfs_elem_attr attr
        72 |       const struct kernfs_ops * ops
        76 |       struct kernfs_open_node * open
        80 |       loff_t size
        88 |       struct kernfs_node * notify_next
        96 |   u64 id
       104 |   void * priv
       108 |   struct kernfs_iattrs * iattr
       112 |   struct callback_head rcu
       112 |     struct callback_head * next
       116 |     void (*)(struct callback_head *) func
           | [sizeof=120, align=8]

*** Dumping AST Record Layout
         0 | struct attribute
         0 |   const char * name
         4 |   umode_t mode
     6:0-0 |   bool ignore_lockdep
         8 |   struct lock_class_key * key
        12 |   struct lock_class_key skey
        12 |     union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2) 
        12 |       struct hlist_node hash_entry
        12 |         struct hlist_node * next
        16 |         struct hlist_node ** pprev
        12 |       struct lockdep_subclass_key[8] subkeys
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct kobject
         0 |   const char * name
         4 |   struct list_head entry
         4 |     struct list_head * next
         8 |     struct list_head * prev
        12 |   struct kobject * parent
        16 |   struct kset * kset
        20 |   const struct kobj_type * ktype
        24 |   struct kernfs_node * sd
        28 |   struct kref kref
        28 |     struct refcount_struct refcount
        28 |       atomic_t refs
        28 |         int counter
    32:0-0 |   unsigned int state_initialized
    32:1-1 |   unsigned int state_in_sysfs
    32:2-2 |   unsigned int state_add_uevent_sent
    32:3-3 |   unsigned int state_remove_uevent_sent
    32:4-4 |   unsigned int uevent_suppress
        36 |   struct delayed_work release
        36 |     struct work_struct work
        36 |       atomic_t data
        36 |         int counter
        40 |       struct list_head entry
        40 |         struct list_head * next
        44 |         struct list_head * prev
        48 |       work_func_t func
        52 |       struct lockdep_map lockdep_map
        52 |         struct lock_class_key * key
        56 |         struct lock_class *[2] class_cache
        64 |         const char * name
        68 |         u8 wait_type_outer
        69 |         u8 wait_type_inner
        70 |         u8 lock_type
        72 |         int cpu
        76 |         unsigned long ip
        80 |     struct timer_list timer
        80 |       struct hlist_node entry
        80 |         struct hlist_node * next
        84 |         struct hlist_node ** pprev
        88 |       unsigned long expires
        92 |       void (*)(struct timer_list *) function
        96 |       u32 flags
       100 |       struct lockdep_map lockdep_map
       100 |         struct lock_class_key * key
       104 |         struct lock_class *[2] class_cache
       112 |         const char * name
       116 |         u8 wait_type_outer
       117 |         u8 wait_type_inner
       118 |         u8 lock_type
       120 |         int cpu
       124 |         unsigned long ip
       128 |     struct workqueue_struct * wq
       132 |     int cpu
           | [sizeof=136, align=4]

*** Dumping AST Record Layout
         0 | struct kset
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct spinlock list_lock
         8 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |       struct raw_spinlock rlock
         8 |         arch_spinlock_t raw_lock
         8 |           volatile unsigned int slock
        12 |         unsigned int magic
        16 |         unsigned int owner_cpu
        20 |         void * owner
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
         8 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |         u8[16] __padding
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
        52 |   struct kobject kobj
        52 |     const char * name
        56 |     struct list_head entry
        56 |       struct list_head * next
        60 |       struct list_head * prev
        64 |     struct kobject * parent
        68 |     struct kset * kset
        72 |     const struct kobj_type * ktype
        76 |     struct kernfs_node * sd
        80 |     struct kref kref
        80 |       struct refcount_struct refcount
        80 |         atomic_t refs
        80 |           int counter
    84:0-0 |     unsigned int state_initialized
    84:1-1 |     unsigned int state_in_sysfs
    84:2-2 |     unsigned int state_add_uevent_sent
    84:3-3 |     unsigned int state_remove_uevent_sent
    84:4-4 |     unsigned int uevent_suppress
        88 |     struct delayed_work release
        88 |       struct work_struct work
        88 |         atomic_t data
        88 |           int counter
        92 |         struct list_head entry
        92 |           struct list_head * next
        96 |           struct list_head * prev
       100 |         work_func_t func
       104 |         struct lockdep_map lockdep_map
       104 |           struct lock_class_key * key
       108 |           struct lock_class *[2] class_cache
       116 |           const char * name
       120 |           u8 wait_type_outer
       121 |           u8 wait_type_inner
       122 |           u8 lock_type
       124 |           int cpu
       128 |           unsigned long ip
       132 |       struct timer_list timer
       132 |         struct hlist_node entry
       132 |           struct hlist_node * next
       136 |           struct hlist_node ** pprev
       140 |         unsigned long expires
       144 |         void (*)(struct timer_list *) function
       148 |         u32 flags
       152 |         struct lockdep_map lockdep_map
       152 |           struct lock_class_key * key
       156 |           struct lock_class *[2] class_cache
       164 |           const char * name
       168 |           u8 wait_type_outer
       169 |           u8 wait_type_inner
       170 |           u8 lock_type
       172 |           int cpu
       176 |           unsigned long ip
       180 |       struct workqueue_struct * wq
       184 |       int cpu
       188 |   const struct kset_uevent_ops * uevent_ops
           | [sizeof=192, align=4]

*** Dumping AST Record Layout
         0 | struct ratelimit_state
         0 |   struct raw_spinlock lock
         0 |     arch_spinlock_t raw_lock
         0 |       volatile unsigned int slock
         4 |     unsigned int magic
         8 |     unsigned int owner_cpu
        12 |     void * owner
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   int interval
        48 |   int burst
        52 |   int printed
        56 |   int missed
        60 |   unsigned long begin
        64 |   unsigned long flags
           | [sizeof=68, align=4]

*** Dumping AST Record Layout
         0 | struct em_perf_state
         0 |   unsigned long performance
         4 |   unsigned long frequency
         8 |   unsigned long power
        12 |   unsigned long cost
        16 |   unsigned long flags
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct resource
         0 |   resource_size_t start
         4 |   resource_size_t end
         8 |   const char * name
        12 |   unsigned long flags
        16 |   unsigned long desc
        20 |   struct resource * parent
        24 |   struct resource * sibling
        28 |   struct resource * child
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct seqcount_raw_spinlock
         0 |   struct seqcount seqcount
         0 |     unsigned int sequence
         4 |     struct lockdep_map dep_map
         4 |       struct lock_class_key * key
         8 |       struct lock_class *[2] class_cache
        16 |       const char * name
        20 |       u8 wait_type_outer
        21 |       u8 wait_type_inner
        22 |       u8 lock_type
        24 |       int cpu
        28 |       unsigned long ip
        32 |   raw_spinlock_t * lock
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct hrtimer_clock_base
         0 |   struct hrtimer_cpu_base * cpu_base
         4 |   unsigned int index
         8 |   clockid_t clockid
        12 |   struct seqcount_raw_spinlock seq
        12 |     struct seqcount seqcount
        12 |       unsigned int sequence
        16 |       struct lockdep_map dep_map
        16 |         struct lock_class_key * key
        20 |         struct lock_class *[2] class_cache
        28 |         const char * name
        32 |         u8 wait_type_outer
        33 |         u8 wait_type_inner
        34 |         u8 lock_type
        36 |         int cpu
        40 |         unsigned long ip
        44 |     raw_spinlock_t * lock
        48 |   struct hrtimer * running
        52 |   struct timerqueue_head active
        52 |     struct rb_root_cached rb_root
        52 |       struct rb_root rb_root
        52 |         struct rb_node * rb_node
        56 |       struct rb_node * rb_leftmost
        60 |   ktime_t (*)(void) get_time
        64 |   ktime_t offset
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct pm_message
         0 |   int event
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct dev_pm_ops
         0 |   int (*)(struct device *) prepare
         4 |   void (*)(struct device *) complete
         8 |   int (*)(struct device *) suspend
        12 |   int (*)(struct device *) resume
        16 |   int (*)(struct device *) freeze
        20 |   int (*)(struct device *) thaw
        24 |   int (*)(struct device *) poweroff
        28 |   int (*)(struct device *) restore
        32 |   int (*)(struct device *) suspend_late
        36 |   int (*)(struct device *) resume_early
        40 |   int (*)(struct device *) freeze_late
        44 |   int (*)(struct device *) thaw_early
        48 |   int (*)(struct device *) poweroff_late
        52 |   int (*)(struct device *) restore_early
        56 |   int (*)(struct device *) suspend_noirq
        60 |   int (*)(struct device *) resume_noirq
        64 |   int (*)(struct device *) freeze_noirq
        68 |   int (*)(struct device *) thaw_noirq
        72 |   int (*)(struct device *) poweroff_noirq
        76 |   int (*)(struct device *) restore_noirq
        80 |   int (*)(struct device *) runtime_suspend
        84 |   int (*)(struct device *) runtime_resume
        88 |   int (*)(struct device *) runtime_idle
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct bus_type
         0 |   const char * name
         4 |   const char * dev_name
         8 |   const struct attribute_group ** bus_groups
        12 |   const struct attribute_group ** dev_groups
        16 |   const struct attribute_group ** drv_groups
        20 |   int (*)(struct device *, const struct device_driver *) match
        24 |   int (*)(const struct device *, struct kobj_uevent_env *) uevent
        28 |   int (*)(struct device *) probe
        32 |   void (*)(struct device *) sync_state
        36 |   void (*)(struct device *) remove
        40 |   void (*)(struct device *) shutdown
        44 |   int (*)(struct device *) online
        48 |   int (*)(struct device *) offline
        52 |   int (*)(struct device *, pm_message_t) suspend
        56 |   int (*)(struct device *) resume
        60 |   int (*)(struct device *) num_vf
        64 |   int (*)(struct device *) dma_configure
        68 |   void (*)(struct device *) dma_cleanup
        72 |   const struct dev_pm_ops * pm
        76 |   bool need_parent_lock
           | [sizeof=80, align=4]

*** Dumping AST Record Layout
         0 | struct klist_iter
         0 |   struct klist * i_klist
         4 |   struct klist_node * i_cur
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct class
         0 |   const char * name
         4 |   const struct attribute_group ** class_groups
         8 |   const struct attribute_group ** dev_groups
        12 |   int (*)(const struct device *, struct kobj_uevent_env *) dev_uevent
        16 |   char *(*)(const struct device *, umode_t *) devnode
        20 |   void (*)(const struct class *) class_release
        24 |   void (*)(struct device *) dev_release
        28 |   int (*)(struct device *) shutdown_pre
        32 |   const struct kobj_ns_type_operations * ns_type
        36 |   const void *(*)(const struct device *) namespace
        40 |   void (*)(const struct device *, kuid_t *, kgid_t *) get_ownership
        44 |   const struct dev_pm_ops * pm
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct class_attribute
         0 |   struct attribute attr
         0 |     const char * name
         4 |     umode_t mode
     6:0-0 |     bool ignore_lockdep
         8 |     struct lock_class_key * key
        12 |     struct lock_class_key skey
        12 |       union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2) 
        12 |         struct hlist_node hash_entry
        12 |           struct hlist_node * next
        16 |           struct hlist_node ** pprev
        12 |         struct lockdep_subclass_key[8] subkeys
        20 |   ssize_t (*)(const struct class *, const struct class_attribute *, char *) show
        24 |   ssize_t (*)(const struct class *, const struct class_attribute *, const char *, size_t) store
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3)
         0 |   struct ctl_table * ctl_table
         4 |   int ctl_table_size
         8 |   int used
        12 |   int count
        16 |   int nreg
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2)
         0 |   struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
         0 |     struct ctl_table * ctl_table
         4 |     int ctl_table_size
         8 |     int used
        12 |     int count
        16 |     int nreg
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct hlist_head
         0 |   struct hlist_node * first
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ctl_table_header
         0 |   union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2) 
         0 |     struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
         0 |       struct ctl_table * ctl_table
         4 |       int ctl_table_size
         8 |       int used
        12 |       int count
        16 |       int nreg
         0 |     struct callback_head rcu
         0 |       struct callback_head * next
         4 |       void (*)(struct callback_head *) func
        20 |   struct completion * unregistering
        24 |   const struct ctl_table * ctl_table_arg
        28 |   struct ctl_table_root * root
        32 |   struct ctl_table_set * set
        36 |   struct ctl_dir * parent
        40 |   struct ctl_node * node
        44 |   struct hlist_head inodes
        44 |     struct hlist_node * first
        48 |   enum (unnamed enum at ../include/linux/sysctl.h:187:2) type
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct ctl_dir
         0 |   struct ctl_table_header header
         0 |     union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2) 
         0 |       struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
         0 |         struct ctl_table * ctl_table
         4 |         int ctl_table_size
         8 |         int used
        12 |         int count
        16 |         int nreg
         0 |       struct callback_head rcu
         0 |         struct callback_head * next
         4 |         void (*)(struct callback_head *) func
        20 |     struct completion * unregistering
        24 |     const struct ctl_table * ctl_table_arg
        28 |     struct ctl_table_root * root
        32 |     struct ctl_table_set * set
        36 |     struct ctl_dir * parent
        40 |     struct ctl_node * node
        44 |     struct hlist_head inodes
        44 |       struct hlist_node * first
        48 |     enum (unnamed enum at ../include/linux/sysctl.h:187:2) type
        52 |   struct rb_root root
        52 |     struct rb_node * rb_node
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct ctl_table_set
         0 |   int (*)(struct ctl_table_set *) is_seen
         4 |   struct ctl_dir dir
         4 |     struct ctl_table_header header
         4 |       union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2) 
         4 |         struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
         4 |           struct ctl_table * ctl_table
         8 |           int ctl_table_size
        12 |           int used
        16 |           int count
        20 |           int nreg
         4 |         struct callback_head rcu
         4 |           struct callback_head * next
         8 |           void (*)(struct callback_head *) func
        24 |       struct completion * unregistering
        28 |       const struct ctl_table * ctl_table_arg
        32 |       struct ctl_table_root * root
        36 |       struct ctl_table_set * set
        40 |       struct ctl_dir * parent
        44 |       struct ctl_node * node
        48 |       struct hlist_head inodes
        48 |         struct hlist_node * first
        52 |       enum (unnamed enum at ../include/linux/sysctl.h:187:2) type
        56 |     struct rb_root root
        56 |       struct rb_node * rb_node
           | [sizeof=60, align=4]

*** Dumping AST Record Layout
         0 | union Elf32_Dyn::(unnamed at ../include/uapi/linux/elf.h:145:3)
         0 |   Elf32_Sword d_val
         0 |   Elf32_Addr d_ptr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | Elf32_Dyn
         0 |   Elf32_Sword d_tag
         4 |   union Elf32_Dyn::(unnamed at ../include/uapi/linux/elf.h:145:3) d_un
         4 |     Elf32_Sword d_val
         4 |     Elf32_Addr d_ptr
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union kernel_param::(anonymous at ../include/linux/moduleparam.h:76:2)
         0 |   void * arg
         0 |   const struct kparam_string * str
         0 |   const struct kparam_array * arr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct kernel_param
         0 |   const char * name
         4 |   struct module * mod
         8 |   const struct kernel_param_ops * ops
        12 |   const u16 perm
        14 |   s8 level
        15 |   u8 flags
        16 |   union kernel_param::(anonymous at ../include/linux/moduleparam.h:76:2) 
        16 |     void * arg
        16 |     const struct kparam_string * str
        16 |     const struct kparam_array * arr
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | seqcount_latch_t
         0 |   struct seqcount seqcount
         0 |     unsigned int sequence
         4 |     struct lockdep_map dep_map
         4 |       struct lock_class_key * key
         8 |       struct lock_class *[2] class_cache
        16 |       const char * name
        20 |       u8 wait_type_outer
        21 |       u8 wait_type_inner
        22 |       u8 lock_type
        24 |       int cpu
        28 |       unsigned long ip
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct latch_tree_node
         0 |   struct rb_node[2] node
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct latch_tree_root
         0 |   seqcount_latch_t seq
         0 |     struct seqcount seqcount
         0 |       unsigned int sequence
         4 |       struct lockdep_map dep_map
         4 |         struct lock_class_key * key
         8 |         struct lock_class *[2] class_cache
        16 |         const char * name
        20 |         u8 wait_type_outer
        21 |         u8 wait_type_inner
        22 |         u8 lock_type
        24 |         int cpu
        28 |         unsigned long ip
        32 |   struct rb_root[2] tree
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | union ddebug_class_param::(anonymous at ../include/linux/dynamic_debug.h:125:2)
         0 |   unsigned long * bits
         0 |   unsigned int * lvl
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct module_attribute
         0 |   struct attribute attr
         0 |     const char * name
         4 |     umode_t mode
     6:0-0 |     bool ignore_lockdep
         8 |     struct lock_class_key * key
        12 |     struct lock_class_key skey
        12 |       union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2) 
        12 |         struct hlist_node hash_entry
        12 |           struct hlist_node * next
        16 |           struct hlist_node ** pprev
        12 |         struct lockdep_subclass_key[8] subkeys
        20 |   ssize_t (*)(struct module_attribute *, struct module_kobject *, char *) show
        24 |   ssize_t (*)(struct module_attribute *, struct module_kobject *, const char *, size_t) store
        28 |   void (*)(struct module *, const char *) setup
        32 |   int (*)(struct module *) test
        36 |   void (*)(struct module *) free
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct device_driver
         0 |   const char * name
         4 |   const struct bus_type * bus
         8 |   struct module * owner
        12 |   const char * mod_name
        16 |   bool suppress_bind_attrs
        20 |   enum probe_type probe_type
        24 |   const struct of_device_id * of_match_table
        28 |   const struct acpi_device_id * acpi_match_table
        32 |   int (*)(struct device *) probe
        36 |   void (*)(struct device *) sync_state
        40 |   int (*)(struct device *) remove
        44 |   void (*)(struct device *) shutdown
        48 |   int (*)(struct device *, pm_message_t) suspend
        52 |   int (*)(struct device *) resume
        56 |   const struct attribute_group ** groups
        60 |   const struct attribute_group ** dev_groups
        64 |   const struct dev_pm_ops * pm
        68 |   void (*)(struct device *) coredump
        72 |   struct driver_private * p
           | [sizeof=76, align=4]

*** Dumping AST Record Layout
         0 | struct device_attribute
         0 |   struct attribute attr
         0 |     const char * name
         4 |     umode_t mode
     6:0-0 |     bool ignore_lockdep
         8 |     struct lock_class_key * key
        12 |     struct lock_class_key skey
        12 |       union lock_class_key::(anonymous at ../include/linux/lockdep_types.h:76:2) 
        12 |         struct hlist_node hash_entry
        12 |           struct hlist_node * next
        16 |           struct hlist_node ** pprev
        12 |         struct lockdep_subclass_key[8] subkeys
        20 |   ssize_t (*)(struct device *, struct device_attribute *, char *) show
        24 |   ssize_t (*)(struct device *, struct device_attribute *, const char *, size_t) store
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct dev_links_info
         0 |   struct list_head suppliers
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct list_head consumers
         8 |     struct list_head * next
        12 |     struct list_head * prev
        16 |   struct list_head defer_sync
        16 |     struct list_head * next
        20 |     struct list_head * prev
        24 |   enum dl_dev_state status
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct dev_pm_info
         0 |   struct pm_message power_state
         0 |     int event
     4:0-0 |   bool can_wakeup
     4:1-1 |   bool async_suspend
     4:2-2 |   bool in_dpm_list
     4:3-3 |   bool is_prepared
     4:4-4 |   bool is_suspended
     4:5-5 |   bool is_noirq_suspended
     4:6-6 |   bool is_late_suspended
     4:7-7 |   bool no_pm
     5:0-0 |   bool early_init
     5:1-1 |   bool direct_complete
         8 |   u32 driver_flags
        12 |   struct spinlock lock
        12 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        12 |       struct raw_spinlock rlock
        12 |         arch_spinlock_t raw_lock
        12 |           volatile unsigned int slock
        16 |         unsigned int magic
        20 |         unsigned int owner_cpu
        24 |         void * owner
        28 |         struct lockdep_map dep_map
        28 |           struct lock_class_key * key
        32 |           struct lock_class *[2] class_cache
        40 |           const char * name
        44 |           u8 wait_type_outer
        45 |           u8 wait_type_inner
        46 |           u8 lock_type
        48 |           int cpu
        52 |           unsigned long ip
        12 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        12 |         u8[16] __padding
        28 |         struct lockdep_map dep_map
        28 |           struct lock_class_key * key
        32 |           struct lock_class *[2] class_cache
        40 |           const char * name
        44 |           u8 wait_type_outer
        45 |           u8 wait_type_inner
        46 |           u8 lock_type
        48 |           int cpu
        52 |           unsigned long ip
    56:0-0 |   bool should_wakeup
        60 |   struct pm_subsys_data * subsys_data
        64 |   void (*)(struct device *, s32) set_latency_tolerance
        68 |   struct dev_pm_qos * qos
           | [sizeof=72, align=4]

*** Dumping AST Record Layout
         0 | struct dev_msi_info
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct dev_archdata
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct device
         0 |   struct kobject kobj
         0 |     const char * name
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     struct kobject * parent
        16 |     struct kset * kset
        20 |     const struct kobj_type * ktype
        24 |     struct kernfs_node * sd
        28 |     struct kref kref
        28 |       struct refcount_struct refcount
        28 |         atomic_t refs
        28 |           int counter
    32:0-0 |     unsigned int state_initialized
    32:1-1 |     unsigned int state_in_sysfs
    32:2-2 |     unsigned int state_add_uevent_sent
    32:3-3 |     unsigned int state_remove_uevent_sent
    32:4-4 |     unsigned int uevent_suppress
        36 |     struct delayed_work release
        36 |       struct work_struct work
        36 |         atomic_t data
        36 |           int counter
        40 |         struct list_head entry
        40 |           struct list_head * next
        44 |           struct list_head * prev
        48 |         work_func_t func
        52 |         struct lockdep_map lockdep_map
        52 |           struct lock_class_key * key
        56 |           struct lock_class *[2] class_cache
        64 |           const char * name
        68 |           u8 wait_type_outer
        69 |           u8 wait_type_inner
        70 |           u8 lock_type
        72 |           int cpu
        76 |           unsigned long ip
        80 |       struct timer_list timer
        80 |         struct hlist_node entry
        80 |           struct hlist_node * next
        84 |           struct hlist_node ** pprev
        88 |         unsigned long expires
        92 |         void (*)(struct timer_list *) function
        96 |         u32 flags
       100 |         struct lockdep_map lockdep_map
       100 |           struct lock_class_key * key
       104 |           struct lock_class *[2] class_cache
       112 |           const char * name
       116 |           u8 wait_type_outer
       117 |           u8 wait_type_inner
       118 |           u8 lock_type
       120 |           int cpu
       124 |           unsigned long ip
       128 |       struct workqueue_struct * wq
       132 |       int cpu
       136 |   struct device * parent
       140 |   struct device_private * p
       144 |   const char * init_name
       148 |   const struct device_type * type
       152 |   const struct bus_type * bus
       156 |   struct device_driver * driver
       160 |   void * platform_data
       164 |   void * driver_data
       168 |   struct mutex mutex
       168 |     atomic_t owner
       168 |       int counter
       172 |     struct raw_spinlock wait_lock
       172 |       arch_spinlock_t raw_lock
       172 |         volatile unsigned int slock
       176 |       unsigned int magic
       180 |       unsigned int owner_cpu
       184 |       void * owner
       188 |       struct lockdep_map dep_map
       188 |         struct lock_class_key * key
       192 |         struct lock_class *[2] class_cache
       200 |         const char * name
       204 |         u8 wait_type_outer
       205 |         u8 wait_type_inner
       206 |         u8 lock_type
       208 |         int cpu
       212 |         unsigned long ip
       216 |     struct list_head wait_list
       216 |       struct list_head * next
       220 |       struct list_head * prev
       224 |     void * magic
       228 |     struct lockdep_map dep_map
       228 |       struct lock_class_key * key
       232 |       struct lock_class *[2] class_cache
       240 |       const char * name
       244 |       u8 wait_type_outer
       245 |       u8 wait_type_inner
       246 |       u8 lock_type
       248 |       int cpu
       252 |       unsigned long ip
       256 |   struct dev_links_info links
       256 |     struct list_head suppliers
       256 |       struct list_head * next
       260 |       struct list_head * prev
       264 |     struct list_head consumers
       264 |       struct list_head * next
       268 |       struct list_head * prev
       272 |     struct list_head defer_sync
       272 |       struct list_head * next
       276 |       struct list_head * prev
       280 |     enum dl_dev_state status
       284 |   struct dev_pm_info power
       284 |     struct pm_message power_state
       284 |       int event
   288:0-0 |     bool can_wakeup
   288:1-1 |     bool async_suspend
   288:2-2 |     bool in_dpm_list
   288:3-3 |     bool is_prepared
   288:4-4 |     bool is_suspended
   288:5-5 |     bool is_noirq_suspended
   288:6-6 |     bool is_late_suspended
   288:7-7 |     bool no_pm
   289:0-0 |     bool early_init
   289:1-1 |     bool direct_complete
       292 |     u32 driver_flags
       296 |     struct spinlock lock
       296 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       296 |         struct raw_spinlock rlock
       296 |           arch_spinlock_t raw_lock
       296 |             volatile unsigned int slock
       300 |           unsigned int magic
       304 |           unsigned int owner_cpu
       308 |           void * owner
       312 |           struct lockdep_map dep_map
       312 |             struct lock_class_key * key
       316 |             struct lock_class *[2] class_cache
       324 |             const char * name
       328 |             u8 wait_type_outer
       329 |             u8 wait_type_inner
       330 |             u8 lock_type
       332 |             int cpu
       336 |             unsigned long ip
       296 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       296 |           u8[16] __padding
       312 |           struct lockdep_map dep_map
       312 |             struct lock_class_key * key
       316 |             struct lock_class *[2] class_cache
       324 |             const char * name
       328 |             u8 wait_type_outer
       329 |             u8 wait_type_inner
       330 |             u8 lock_type
       332 |             int cpu
       336 |             unsigned long ip
   340:0-0 |     bool should_wakeup
       344 |     struct pm_subsys_data * subsys_data
       348 |     void (*)(struct device *, s32) set_latency_tolerance
       352 |     struct dev_pm_qos * qos
       356 |   struct dev_pm_domain * pm_domain
       360 |   struct dev_msi_info msi
       360 |   u64 * dma_mask
       368 |   u64 coherent_dma_mask
       376 |   u64 bus_dma_limit
       384 |   const struct bus_dma_region * dma_range_map
       388 |   struct device_dma_parameters * dma_parms
       392 |   struct list_head dma_pools
       392 |     struct list_head * next
       396 |     struct list_head * prev
       400 |   struct dma_coherent_mem * dma_mem
       404 |   struct dev_archdata archdata
       404 |   struct device_node * of_node
       408 |   struct fwnode_handle * fwnode
       412 |   dev_t devt
       416 |   u32 id
       420 |   struct spinlock devres_lock
       420 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       420 |       struct raw_spinlock rlock
       420 |         arch_spinlock_t raw_lock
       420 |           volatile unsigned int slock
       424 |         unsigned int magic
       428 |         unsigned int owner_cpu
       432 |         void * owner
       436 |         struct lockdep_map dep_map
       436 |           struct lock_class_key * key
       440 |           struct lock_class *[2] class_cache
       448 |           const char * name
       452 |           u8 wait_type_outer
       453 |           u8 wait_type_inner
       454 |           u8 lock_type
       456 |           int cpu
       460 |           unsigned long ip
       420 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       420 |         u8[16] __padding
       436 |         struct lockdep_map dep_map
       436 |           struct lock_class_key * key
       440 |           struct lock_class *[2] class_cache
       448 |           const char * name
       452 |           u8 wait_type_outer
       453 |           u8 wait_type_inner
       454 |           u8 lock_type
       456 |           int cpu
       460 |           unsigned long ip
       464 |   struct list_head devres_head
       464 |     struct list_head * next
       468 |     struct list_head * prev
       472 |   const struct class * class
       476 |   const struct attribute_group ** groups
       480 |   void (*)(struct device *) release
       484 |   struct iommu_group * iommu_group
       488 |   struct dev_iommu * iommu
       492 |   struct device_physical_location * physical_location
       496 |   enum device_removable removable
   500:0-0 |   bool offline_disabled
   500:1-1 |   bool offline
   500:2-2 |   bool of_node_reused
   500:3-3 |   bool state_synced
   500:4-4 |   bool can_match
   500:5-5 |   bool dma_coherent
   500:6-6 |   bool dma_skip_sync
           | [sizeof=504, align=8]

*** Dumping AST Record Layout
         0 | struct wakeup_source
         0 |   const char * name
         4 |   int id
         8 |   struct list_head entry
         8 |     struct list_head * next
        12 |     struct list_head * prev
        16 |   struct spinlock lock
        16 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        16 |       struct raw_spinlock rlock
        16 |         arch_spinlock_t raw_lock
        16 |           volatile unsigned int slock
        20 |         unsigned int magic
        24 |         unsigned int owner_cpu
        28 |         void * owner
        32 |         struct lockdep_map dep_map
        32 |           struct lock_class_key * key
        36 |           struct lock_class *[2] class_cache
        44 |           const char * name
        48 |           u8 wait_type_outer
        49 |           u8 wait_type_inner
        50 |           u8 lock_type
        52 |           int cpu
        56 |           unsigned long ip
        16 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        16 |         u8[16] __padding
        32 |         struct lockdep_map dep_map
        32 |           struct lock_class_key * key
        36 |           struct lock_class *[2] class_cache
        44 |           const char * name
        48 |           u8 wait_type_outer
        49 |           u8 wait_type_inner
        50 |           u8 lock_type
        52 |           int cpu
        56 |           unsigned long ip
        60 |   struct wake_irq * wakeirq
        64 |   struct timer_list timer
        64 |     struct hlist_node entry
        64 |       struct hlist_node * next
        68 |       struct hlist_node ** pprev
        72 |     unsigned long expires
        76 |     void (*)(struct timer_list *) function
        80 |     u32 flags
        84 |     struct lockdep_map lockdep_map
        84 |       struct lock_class_key * key
        88 |       struct lock_class *[2] class_cache
        96 |       const char * name
       100 |       u8 wait_type_outer
       101 |       u8 wait_type_inner
       102 |       u8 lock_type
       104 |       int cpu
       108 |       unsigned long ip
       112 |   unsigned long timer_expires
       120 |   ktime_t total_time
       128 |   ktime_t max_time
       136 |   ktime_t last_time
       144 |   ktime_t start_prevent_time
       152 |   ktime_t prevent_sleep_time
       160 |   unsigned long event_count
       164 |   unsigned long active_count
       168 |   unsigned long relax_count
       172 |   unsigned long expire_count
       176 |   unsigned long wakeup_count
       180 |   struct device * dev
   184:0-0 |   bool active
   184:1-1 |   bool autosleep_enabled
           | [sizeof=192, align=8]

*** Dumping AST Record Layout
         0 | struct rhash_head
         0 |   struct rhash_head * next
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct rhashtable_params
         0 |   u16 nelem_hint
         2 |   u16 key_len
         4 |   u16 key_offset
         6 |   u16 head_offset
         8 |   unsigned int max_size
        12 |   u16 min_size
        14 |   bool automatic_shrinking
        16 |   rht_hashfn_t hashfn
        20 |   rht_obj_hashfn_t obj_hashfn
        24 |   rht_obj_cmpfn_t obj_cmpfn
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct rhashtable
         0 |   struct bucket_table * tbl
         4 |   unsigned int key_len
         8 |   unsigned int max_elems
        12 |   struct rhashtable_params p
        12 |     u16 nelem_hint
        14 |     u16 key_len
        16 |     u16 key_offset
        18 |     u16 head_offset
        20 |     unsigned int max_size
        24 |     u16 min_size
        26 |     bool automatic_shrinking
        28 |     rht_hashfn_t hashfn
        32 |     rht_obj_hashfn_t obj_hashfn
        36 |     rht_obj_cmpfn_t obj_cmpfn
        40 |   bool rhlist
        44 |   struct work_struct run_work
        44 |     atomic_t data
        44 |       int counter
        48 |     struct list_head entry
        48 |       struct list_head * next
        52 |       struct list_head * prev
        56 |     work_func_t func
        60 |     struct lockdep_map lockdep_map
        60 |       struct lock_class_key * key
        64 |       struct lock_class *[2] class_cache
        72 |       const char * name
        76 |       u8 wait_type_outer
        77 |       u8 wait_type_inner
        78 |       u8 lock_type
        80 |       int cpu
        84 |       unsigned long ip
        88 |   struct mutex mutex
        88 |     atomic_t owner
        88 |       int counter
        92 |     struct raw_spinlock wait_lock
        92 |       arch_spinlock_t raw_lock
        92 |         volatile unsigned int slock
        96 |       unsigned int magic
       100 |       unsigned int owner_cpu
       104 |       void * owner
       108 |       struct lockdep_map dep_map
       108 |         struct lock_class_key * key
       112 |         struct lock_class *[2] class_cache
       120 |         const char * name
       124 |         u8 wait_type_outer
       125 |         u8 wait_type_inner
       126 |         u8 lock_type
       128 |         int cpu
       132 |         unsigned long ip
       136 |     struct list_head wait_list
       136 |       struct list_head * next
       140 |       struct list_head * prev
       144 |     void * magic
       148 |     struct lockdep_map dep_map
       148 |       struct lock_class_key * key
       152 |       struct lock_class *[2] class_cache
       160 |       const char * name
       164 |       u8 wait_type_outer
       165 |       u8 wait_type_inner
       166 |       u8 lock_type
       168 |       int cpu
       172 |       unsigned long ip
       176 |   struct spinlock lock
       176 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       176 |       struct raw_spinlock rlock
       176 |         arch_spinlock_t raw_lock
       176 |           volatile unsigned int slock
       180 |         unsigned int magic
       184 |         unsigned int owner_cpu
       188 |         void * owner
       192 |         struct lockdep_map dep_map
       192 |           struct lock_class_key * key
       196 |           struct lock_class *[2] class_cache
       204 |           const char * name
       208 |           u8 wait_type_outer
       209 |           u8 wait_type_inner
       210 |           u8 lock_type
       212 |           int cpu
       216 |           unsigned long ip
       176 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       176 |         u8[16] __padding
       192 |         struct lockdep_map dep_map
       192 |           struct lock_class_key * key
       196 |           struct lock_class *[2] class_cache
       204 |           const char * name
       208 |           u8 wait_type_outer
       209 |           u8 wait_type_inner
       210 |           u8 lock_type
       212 |           int cpu
       216 |           unsigned long ip
       220 |   atomic_t nelems
       220 |     int counter
           | [sizeof=224, align=4]

*** Dumping AST Record Layout
         0 | struct ipc_perm
         0 |   __kernel_key_t key
         4 |   __kernel_uid_t uid
         8 |   __kernel_gid_t gid
        12 |   __kernel_uid_t cuid
        16 |   __kernel_gid_t cgid
        20 |   __kernel_mode_t mode
        24 |   unsigned short seq
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct ipc64_perm
         0 |   __kernel_key_t key
         4 |   __kernel_uid32_t uid
         8 |   __kernel_gid32_t gid
        12 |   __kernel_uid32_t cuid
        16 |   __kernel_gid32_t cgid
        20 |   __kernel_mode_t mode
        24 |   unsigned char[0] __pad1
        24 |   unsigned short seq
        26 |   unsigned short __pad2
        28 |   __kernel_ulong_t __unused1
        32 |   __kernel_ulong_t __unused2
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | union iov_iter::(anonymous at ../include/linux/uio.h:64:4)
         0 |   const struct iovec * __iov
         0 |   const struct kvec * kvec
         0 |   const struct bio_vec * bvec
         0 |   struct xarray * xarray
         0 |   void * ubuf
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct iovec
         0 |   void * iov_base
         4 |   __kernel_size_t iov_len
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct iov_iter::(anonymous at ../include/linux/uio.h:63:3)
         0 |   union iov_iter::(anonymous at ../include/linux/uio.h:64:4) 
         0 |     const struct iovec * __iov
         0 |     const struct kvec * kvec
         0 |     const struct bio_vec * bvec
         0 |     struct xarray * xarray
         0 |     void * ubuf
         4 |   size_t count
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union iov_iter::(anonymous at ../include/linux/uio.h:56:2)
         0 |   struct iovec __ubuf_iovec
         0 |     void * iov_base
         4 |     __kernel_size_t iov_len
         0 |   struct iov_iter::(anonymous at ../include/linux/uio.h:63:3) 
         0 |     union iov_iter::(anonymous at ../include/linux/uio.h:64:4) 
         0 |       const struct iovec * __iov
         0 |       const struct kvec * kvec
         0 |       const struct bio_vec * bvec
         0 |       struct xarray * xarray
         0 |       void * ubuf
         4 |     size_t count
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union iov_iter::(anonymous at ../include/linux/uio.h:75:2)
         0 |   unsigned long nr_segs
         0 |   loff_t xarray_start
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct iov_iter
         0 |   u8 iter_type
         1 |   bool nofault
         2 |   bool data_source
         4 |   size_t iov_offset
         8 |   union iov_iter::(anonymous at ../include/linux/uio.h:56:2) 
         8 |     struct iovec __ubuf_iovec
         8 |       void * iov_base
        12 |       __kernel_size_t iov_len
         8 |     struct iov_iter::(anonymous at ../include/linux/uio.h:63:3) 
         8 |       union iov_iter::(anonymous at ../include/linux/uio.h:64:4) 
         8 |         const struct iovec * __iov
         8 |         const struct kvec * kvec
         8 |         const struct bio_vec * bvec
         8 |         struct xarray * xarray
         8 |         void * ubuf
        12 |       size_t count
        16 |   union iov_iter::(anonymous at ../include/linux/uio.h:75:2) 
        16 |     unsigned long nr_segs
        16 |     loff_t xarray_start
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3)
         0 |   __kernel_sa_family_t ss_family
         2 |   char[126] __data
           | [sizeof=128, align=2]

*** Dumping AST Record Layout
         0 | union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2)
         0 |   struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         0 |     __kernel_sa_family_t ss_family
         2 |     char[126] __data
         0 |   void * __align
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct sockaddr::(unnamed at ../include/linux/socket.h:39:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct user_msghdr
         0 |   void * msg_name
         4 |   int msg_namelen
         8 |   struct iovec * msg_iov
        12 |   __kernel_size_t msg_iovlen
        16 |   void * msg_control
        20 |   __kernel_size_t msg_controllen
        24 |   unsigned int msg_flags
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct cmsghdr
         0 |   __kernel_size_t cmsg_len
         4 |   int cmsg_level
         8 |   int cmsg_type
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct sockaddr::(anonymous at ../include/linux/socket.h:39:3)
         0 |   struct sockaddr::(unnamed at ../include/linux/socket.h:39:3) __empty_sa_data
         0 |   char[] sa_data
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | union sockaddr::(anonymous at ../include/linux/socket.h:37:2)
         0 |   char[14] sa_data_min
         0 |   struct sockaddr::(anonymous at ../include/linux/socket.h:39:3) 
         0 |     struct sockaddr::(unnamed at ../include/linux/socket.h:39:3) __empty_sa_data
         0 |     char[] sa_data
           | [sizeof=14, align=1]

*** Dumping AST Record Layout
         0 | struct sockaddr
         0 |   sa_family_t sa_family
         2 |   union sockaddr::(anonymous at ../include/linux/socket.h:37:2) 
         2 |     char[14] sa_data_min
         2 |     struct sockaddr::(anonymous at ../include/linux/socket.h:39:3) 
         2 |       struct sockaddr::(unnamed at ../include/linux/socket.h:39:3) __empty_sa_data
         2 |       char[] sa_data
           | [sizeof=16, align=2]

*** Dumping AST Record Layout
         0 | union ifreq::(unnamed at ../include/uapi/linux/if.h:236:2)
         0 |   char[16] ifrn_name
           | [sizeof=16, align=1]

*** Dumping AST Record Layout
         0 | struct wait_bit_key
         0 |   void * flags
         4 |   int bit_nr
         8 |   unsigned long timeout
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct hlist_bl_head
         0 |   struct hlist_bl_node * first
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct hlist_bl_node
         0 |   struct hlist_bl_node * next
         4 |   struct hlist_bl_node ** pprev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct lockref::(anonymous at ../include/linux/lockref.h:30:3)
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   int count
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | union lockref::(anonymous at ../include/linux/lockref.h:26:2)
         0 |   struct lockref::(anonymous at ../include/linux/lockref.h:30:3) 
         0 |     struct spinlock lock
         0 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |         struct raw_spinlock rlock
         0 |           arch_spinlock_t raw_lock
         0 |             volatile unsigned int slock
         4 |           unsigned int magic
         8 |           unsigned int owner_cpu
        12 |           void * owner
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |           u8[16] __padding
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
        44 |     int count
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct qstr::(anonymous at ../include/linux/dcache.h:51:3)
         0 |   u32 hash
         4 |   u32 len
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union qstr::(anonymous at ../include/linux/dcache.h:50:2)
         0 |   struct qstr::(anonymous at ../include/linux/dcache.h:51:3) 
         0 |     u32 hash
         4 |     u32 len
         0 |   u64 hash_len
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct lockref
         0 |   union lockref::(anonymous at ../include/linux/lockref.h:26:2) 
         0 |     struct lockref::(anonymous at ../include/linux/lockref.h:30:3) 
         0 |       struct spinlock lock
         0 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |           struct raw_spinlock rlock
         0 |             arch_spinlock_t raw_lock
         0 |               volatile unsigned int slock
         4 |             unsigned int magic
         8 |             unsigned int owner_cpu
        12 |             void * owner
        16 |             struct lockdep_map dep_map
        16 |               struct lock_class_key * key
        20 |               struct lock_class *[2] class_cache
        28 |               const char * name
        32 |               u8 wait_type_outer
        33 |               u8 wait_type_inner
        34 |               u8 lock_type
        36 |               int cpu
        40 |               unsigned long ip
         0 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |             u8[16] __padding
        16 |             struct lockdep_map dep_map
        16 |               struct lock_class_key * key
        20 |               struct lock_class *[2] class_cache
        28 |               const char * name
        32 |               u8 wait_type_outer
        33 |               u8 wait_type_inner
        34 |               u8 lock_type
        36 |               int cpu
        40 |               unsigned long ip
        44 |       int count
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct qstr
         0 |   union qstr::(anonymous at ../include/linux/dcache.h:50:2) 
         0 |     struct qstr::(anonymous at ../include/linux/dcache.h:51:3) 
         0 |       u32 hash
         4 |       u32 len
         0 |     u64 hash_len
         8 |   const unsigned char * name
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | union dentry::(anonymous at ../include/linux/dcache.h:105:2)
         0 |   struct list_head d_lru
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   wait_queue_head_t * d_wait
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union dentry::(unnamed at ../include/linux/dcache.h:114:2)
         0 |   struct hlist_node d_alias
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         0 |   struct hlist_bl_node d_in_lookup_hash
         0 |     struct hlist_bl_node * next
         4 |     struct hlist_bl_node ** pprev
         0 |   struct callback_head d_rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct dentry
         0 |   unsigned int d_flags
         4 |   struct seqcount_spinlock d_seq
         4 |     struct seqcount seqcount
         4 |       unsigned int sequence
         8 |       struct lockdep_map dep_map
         8 |         struct lock_class_key * key
        12 |         struct lock_class *[2] class_cache
        20 |         const char * name
        24 |         u8 wait_type_outer
        25 |         u8 wait_type_inner
        26 |         u8 lock_type
        28 |         int cpu
        32 |         unsigned long ip
        36 |     spinlock_t * lock
        40 |   struct hlist_bl_node d_hash
        40 |     struct hlist_bl_node * next
        44 |     struct hlist_bl_node ** pprev
        48 |   struct dentry * d_parent
        56 |   struct qstr d_name
        56 |     union qstr::(anonymous at ../include/linux/dcache.h:50:2) 
        56 |       struct qstr::(anonymous at ../include/linux/dcache.h:51:3) 
        56 |         u32 hash
        60 |         u32 len
        56 |       u64 hash_len
        64 |     const unsigned char * name
        72 |   struct inode * d_inode
        76 |   unsigned char[44] d_iname
       120 |   const struct dentry_operations * d_op
       124 |   struct super_block * d_sb
       128 |   unsigned long d_time
       132 |   void * d_fsdata
       136 |   struct lockref d_lockref
       136 |     union lockref::(anonymous at ../include/linux/lockref.h:26:2) 
       136 |       struct lockref::(anonymous at ../include/linux/lockref.h:30:3) 
       136 |         struct spinlock lock
       136 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       136 |             struct raw_spinlock rlock
       136 |               arch_spinlock_t raw_lock
       136 |                 volatile unsigned int slock
       140 |               unsigned int magic
       144 |               unsigned int owner_cpu
       148 |               void * owner
       152 |               struct lockdep_map dep_map
       152 |                 struct lock_class_key * key
       156 |                 struct lock_class *[2] class_cache
       164 |                 const char * name
       168 |                 u8 wait_type_outer
       169 |                 u8 wait_type_inner
       170 |                 u8 lock_type
       172 |                 int cpu
       176 |                 unsigned long ip
       136 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       136 |               u8[16] __padding
       152 |               struct lockdep_map dep_map
       152 |                 struct lock_class_key * key
       156 |                 struct lock_class *[2] class_cache
       164 |                 const char * name
       168 |                 u8 wait_type_outer
       169 |                 u8 wait_type_inner
       170 |                 u8 lock_type
       172 |                 int cpu
       176 |                 unsigned long ip
       180 |         int count
       184 |   union dentry::(anonymous at ../include/linux/dcache.h:105:2) 
       184 |     struct list_head d_lru
       184 |       struct list_head * next
       188 |       struct list_head * prev
       184 |     wait_queue_head_t * d_wait
       192 |   struct hlist_node d_sib
       192 |     struct hlist_node * next
       196 |     struct hlist_node ** pprev
       200 |   struct hlist_head d_children
       200 |     struct hlist_node * first
       204 |   union dentry::(unnamed at ../include/linux/dcache.h:114:2) d_u
       204 |     struct hlist_node d_alias
       204 |       struct hlist_node * next
       208 |       struct hlist_node ** pprev
       204 |     struct hlist_bl_node d_in_lookup_hash
       204 |       struct hlist_bl_node * next
       208 |       struct hlist_bl_node ** pprev
       204 |     struct callback_head d_rcu
       204 |       struct callback_head * next
       208 |       void (*)(struct callback_head *) func
           | [sizeof=216, align=8]

*** Dumping AST Record Layout
         0 | struct path
         0 |   struct vfsmount * mnt
         4 |   struct dentry * dentry
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct completion
         0 |   unsigned int done
         4 |   struct swait_queue_head wait
         4 |     struct raw_spinlock lock
         4 |       arch_spinlock_t raw_lock
         4 |         volatile unsigned int slock
         8 |       unsigned int magic
        12 |       unsigned int owner_cpu
        16 |       void * owner
        20 |       struct lockdep_map dep_map
        20 |         struct lock_class_key * key
        24 |         struct lock_class *[2] class_cache
        32 |         const char * name
        36 |         u8 wait_type_outer
        37 |         u8 wait_type_inner
        38 |         u8 lock_type
        40 |         int cpu
        44 |         unsigned long ip
        48 |     struct list_head task_list
        48 |       struct list_head * next
        52 |       struct list_head * prev
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct list_lru_one
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   long nr_items
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct list_lru
         0 |   struct list_lru_node * node
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct upid
         0 |   int nr
         4 |   struct pid_namespace * ns
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct vfs_cap_data::(unnamed at ../include/uapi/linux/capability.h:75:2)
         0 |   __le32 permitted
         4 |   __le32 inheritable
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct vfs_ns_cap_data::(unnamed at ../include/uapi/linux/capability.h:86:2)
         0 |   __le32 permitted
         4 |   __le32 inheritable
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | kernel_cap_t
         0 |   u64 val
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct kernel_siginfo
         0 |   struct kernel_siginfo::(anonymous at ../include/linux/signal_types.h:13:2) 
         0 |     int si_signo
         4 |     int si_errno
         8 |     int si_code
        12 |     union __sifields _sifields
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
        12 |         __kernel_timer_t _tid
        16 |         int _overrun
        20 |         union sigval _sigval
        20 |           int sival_int
        20 |           void * sival_ptr
        24 |         int _sys_private
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        20 |         union sigval _sigval
        20 |           int sival_int
        20 |           void * sival_ptr
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
        12 |         __kernel_pid_t _pid
        16 |         __kernel_uid32_t _uid
        20 |         int _status
        24 |         __kernel_clock_t _utime
        28 |         __kernel_clock_t _stime
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
        12 |         void * _addr
        16 |         union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
        16 |           int _trapno
        16 |           short _addr_lsb
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
        16 |             char[4] _dummy_bnd
        20 |             void * _lower
        24 |             void * _upper
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
        16 |             char[4] _dummy_pkey
        20 |             __u32 _pkey
        16 |           struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
        16 |             unsigned long _data
        20 |             __u32 _type
        24 |             __u32 _flags
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
        12 |         long _band
        16 |         int _fd
        12 |       struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
        12 |         void * _call_addr
        16 |         int _syscall
        20 |         unsigned int _arch
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct siginfo
         0 |   union siginfo::(anonymous at ../include/uapi/asm-generic/siginfo.h:135:2) 
         0 |     struct siginfo::(anonymous at ../include/uapi/asm-generic/siginfo.h:136:3) 
         0 |       int si_signo
         4 |       int si_errno
         8 |       int si_code
        12 |       union __sifields _sifields
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:39:2) _kill
        12 |           __kernel_pid_t _pid
        16 |           __kernel_uid32_t _uid
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:45:2) _timer
        12 |           __kernel_timer_t _tid
        16 |           int _overrun
        20 |           union sigval _sigval
        20 |             int sival_int
        20 |             void * sival_ptr
        24 |           int _sys_private
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:53:2) _rt
        12 |           __kernel_pid_t _pid
        16 |           __kernel_uid32_t _uid
        20 |           union sigval _sigval
        20 |             int sival_int
        20 |             void * sival_ptr
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:60:2) _sigchld
        12 |           __kernel_pid_t _pid
        16 |           __kernel_uid32_t _uid
        20 |           int _status
        24 |           __kernel_clock_t _utime
        28 |           __kernel_clock_t _stime
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:69:2) _sigfault
        12 |           void * _addr
        16 |           union __sifields::(anonymous at ../include/uapi/asm-generic/siginfo.h:74:3) 
        16 |             int _trapno
        16 |             short _addr_lsb
        16 |             struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:83:4) _addr_bnd
        16 |               char[4] _dummy_bnd
        20 |               void * _lower
        24 |               void * _upper
        16 |             struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:89:4) _addr_pkey
        16 |               char[4] _dummy_pkey
        20 |               __u32 _pkey
        16 |             struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:94:4) _perf
        16 |               unsigned long _data
        20 |               __u32 _type
        24 |               __u32 _flags
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:103:2) _sigpoll
        12 |           long _band
        16 |           int _fd
        12 |         struct __sifields::(unnamed at ../include/uapi/asm-generic/siginfo.h:109:2) _sigsys
        12 |           void * _call_addr
        16 |           int _syscall
        20 |           unsigned int _arch
         0 |     int[32] _si_pad
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct keyring_index_key::(anonymous at ../include/linux/key.h:118:3)
         0 |   u16 desc_len
         2 |   char[2] desc
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union keyring_index_key::(anonymous at ../include/linux/key.h:117:2)
         0 |   struct keyring_index_key::(anonymous at ../include/linux/key.h:118:3) 
         0 |     u16 desc_len
         2 |     char[2] desc
         0 |   unsigned long x
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct keyring_index_key
         0 |   unsigned long hash
         4 |   union keyring_index_key::(anonymous at ../include/linux/key.h:117:2) 
         4 |     struct keyring_index_key::(anonymous at ../include/linux/key.h:118:3) 
         4 |       u16 desc_len
         6 |       char[2] desc
         4 |     unsigned long x
         8 |   struct key_type * type
        12 |   struct key_tag * domain_tag
        16 |   const char * description
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | union key_payload
         0 |   void * rcu_data0
         0 |   void *[4] data
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union key::(anonymous at ../include/linux/key.h:198:2)
         0 |   struct list_head graveyard_link
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct rb_node serial_node
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union key::(anonymous at ../include/linux/key.h:208:2)
         0 |   time64_t expiry
         0 |   time64_t revoked_at
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | kuid_t
         0 |   uid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | kgid_t
         0 |   gid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct key::(anonymous at ../include/linux/key.h:247:3)
         0 |   unsigned long hash
         4 |   unsigned long len_desc
         8 |   struct key_type * type
        12 |   struct key_tag * domain_tag
        16 |   char * description
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | union key::(anonymous at ../include/linux/key.h:245:2)
         0 |   struct keyring_index_key index_key
         0 |     unsigned long hash
         4 |     union keyring_index_key::(anonymous at ../include/linux/key.h:117:2) 
         4 |       struct keyring_index_key::(anonymous at ../include/linux/key.h:118:3) 
         4 |         u16 desc_len
         6 |         char[2] desc
         4 |       unsigned long x
         8 |     struct key_type * type
        12 |     struct key_tag * domain_tag
        16 |     const char * description
         0 |   struct key::(anonymous at ../include/linux/key.h:247:3) 
         0 |     unsigned long hash
         4 |     unsigned long len_desc
         8 |     struct key_type * type
        12 |     struct key_tag * domain_tag
        16 |     char * description
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct assoc_array
         0 |   struct assoc_array_ptr * root
         4 |   unsigned long nr_leaves_on_tree
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct key::(anonymous at ../include/linux/key.h:262:3)
         0 |   struct list_head name_link
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct assoc_array keys
         8 |     struct assoc_array_ptr * root
        12 |     unsigned long nr_leaves_on_tree
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union key::(anonymous at ../include/linux/key.h:260:2)
         0 |   union key_payload payload
         0 |     void * rcu_data0
         0 |     void *[4] data
         0 |   struct key::(anonymous at ../include/linux/key.h:262:3) 
         0 |     struct list_head name_link
         0 |       struct list_head * next
         4 |       struct list_head * prev
         8 |     struct assoc_array keys
         8 |       struct assoc_array_ptr * root
        12 |       unsigned long nr_leaves_on_tree
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct key
         0 |   struct refcount_struct usage
         0 |     atomic_t refs
         0 |       int counter
         4 |   key_serial_t serial
         8 |   union key::(anonymous at ../include/linux/key.h:198:2) 
         8 |     struct list_head graveyard_link
         8 |       struct list_head * next
        12 |       struct list_head * prev
         8 |     struct rb_node serial_node
         8 |       unsigned long __rb_parent_color
        12 |       struct rb_node * rb_right
        16 |       struct rb_node * rb_left
        20 |   struct rw_semaphore sem
        20 |     atomic_t count
        20 |       int counter
        24 |     atomic_t owner
        24 |       int counter
        28 |     struct raw_spinlock wait_lock
        28 |       arch_spinlock_t raw_lock
        28 |         volatile unsigned int slock
        32 |       unsigned int magic
        36 |       unsigned int owner_cpu
        40 |       void * owner
        44 |       struct lockdep_map dep_map
        44 |         struct lock_class_key * key
        48 |         struct lock_class *[2] class_cache
        56 |         const char * name
        60 |         u8 wait_type_outer
        61 |         u8 wait_type_inner
        62 |         u8 lock_type
        64 |         int cpu
        68 |         unsigned long ip
        72 |     struct list_head wait_list
        72 |       struct list_head * next
        76 |       struct list_head * prev
        80 |     void * magic
        84 |     struct lockdep_map dep_map
        84 |       struct lock_class_key * key
        88 |       struct lock_class *[2] class_cache
        96 |       const char * name
       100 |       u8 wait_type_outer
       101 |       u8 wait_type_inner
       102 |       u8 lock_type
       104 |       int cpu
       108 |       unsigned long ip
       112 |   struct key_user * user
       116 |   void * security
       120 |   union key::(anonymous at ../include/linux/key.h:208:2) 
       120 |     time64_t expiry
       120 |     time64_t revoked_at
       128 |   time64_t last_used_at
       136 |   kuid_t uid
       136 |     uid_t val
       140 |   kgid_t gid
       140 |     gid_t val
       144 |   key_perm_t perm
       148 |   unsigned short quotalen
       150 |   unsigned short datalen
       152 |   short state
       156 |   unsigned long flags
       160 |   union key::(anonymous at ../include/linux/key.h:245:2) 
       160 |     struct keyring_index_key index_key
       160 |       unsigned long hash
       164 |       union keyring_index_key::(anonymous at ../include/linux/key.h:117:2) 
       164 |         struct keyring_index_key::(anonymous at ../include/linux/key.h:118:3) 
       164 |           u16 desc_len
       166 |           char[2] desc
       164 |         unsigned long x
       168 |       struct key_type * type
       172 |       struct key_tag * domain_tag
       176 |       const char * description
       160 |     struct key::(anonymous at ../include/linux/key.h:247:3) 
       160 |       unsigned long hash
       164 |       unsigned long len_desc
       168 |       struct key_type * type
       172 |       struct key_tag * domain_tag
       176 |       char * description
       180 |   union key::(anonymous at ../include/linux/key.h:260:2) 
       180 |     union key_payload payload
       180 |       void * rcu_data0
       180 |       void *[4] data
       180 |     struct key::(anonymous at ../include/linux/key.h:262:3) 
       180 |       struct list_head name_link
       180 |         struct list_head * next
       184 |         struct list_head * prev
       188 |       struct assoc_array keys
       188 |         struct assoc_array_ptr * root
       192 |         unsigned long nr_leaves_on_tree
       196 |   struct key_restriction * restrict_link
           | [sizeof=200, align=8]

*** Dumping AST Record Layout
         0 | struct key_tag
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
         8 |   struct refcount_struct usage
         8 |     atomic_t refs
         8 |       int counter
        12 |   bool removed
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union cred::(anonymous at ../include/linux/cred.h:143:2)
         0 |   int non_rcu
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct cred
         0 |   atomic_t usage
         0 |     int counter
         4 |   kuid_t uid
         4 |     uid_t val
         8 |   kgid_t gid
         8 |     gid_t val
        12 |   kuid_t suid
        12 |     uid_t val
        16 |   kgid_t sgid
        16 |     gid_t val
        20 |   kuid_t euid
        20 |     uid_t val
        24 |   kgid_t egid
        24 |     gid_t val
        28 |   kuid_t fsuid
        28 |     uid_t val
        32 |   kgid_t fsgid
        32 |     gid_t val
        36 |   unsigned int securebits
        40 |   kernel_cap_t cap_inheritable
        40 |     u64 val
        48 |   kernel_cap_t cap_permitted
        48 |     u64 val
        56 |   kernel_cap_t cap_effective
        56 |     u64 val
        64 |   kernel_cap_t cap_bset
        64 |     u64 val
        72 |   kernel_cap_t cap_ambient
        72 |     u64 val
        80 |   unsigned char jit_keyring
        84 |   struct key * session_keyring
        88 |   struct key * process_keyring
        92 |   struct key * thread_keyring
        96 |   struct key * request_key_auth
       100 |   struct user_struct * user
       104 |   struct user_namespace * user_ns
       108 |   struct ucounts * ucounts
       112 |   struct group_info * group_info
       116 |   union cred::(anonymous at ../include/linux/cred.h:143:2) 
       116 |     int non_rcu
       116 |     struct callback_head rcu
       116 |       struct callback_head * next
       120 |       void (*)(struct callback_head *) func
           | [sizeof=128, align=8]

*** Dumping AST Record Layout
         0 | struct cpu_timer
         0 |   struct timerqueue_node node
         0 |     struct rb_node node
         0 |       unsigned long __rb_parent_color
         4 |       struct rb_node * rb_right
         8 |       struct rb_node * rb_left
        16 |     ktime_t expires
        24 |   struct timerqueue_head * head
        28 |   struct pid * pid
        32 |   struct list_head elist
        32 |     struct list_head * next
        36 |     struct list_head * prev
        40 |   int firing
        44 |   struct task_struct * handling
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct alarm
         0 |   struct timerqueue_node node
         0 |     struct rb_node node
         0 |       unsigned long __rb_parent_color
         4 |       struct rb_node * rb_right
         8 |       struct rb_node * rb_left
        16 |     ktime_t expires
        24 |   struct hrtimer timer
        24 |     struct timerqueue_node node
        24 |       struct rb_node node
        24 |         unsigned long __rb_parent_color
        28 |         struct rb_node * rb_right
        32 |         struct rb_node * rb_left
        40 |       ktime_t expires
        48 |     ktime_t _softexpires
        56 |     enum hrtimer_restart (*)(struct hrtimer *) function
        60 |     struct hrtimer_clock_base * base
        64 |     u8 state
        65 |     u8 is_rel
        66 |     u8 is_soft
        67 |     u8 is_hard
        72 |   enum alarmtimer_restart (*)(struct alarm *, ktime_t) function
        76 |   enum alarmtimer_type type
        80 |   int state
        84 |   void * data
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct k_itimer::(unnamed at ../include/linux/posix-timers.h:180:3)
         0 |   struct hrtimer timer
         0 |     struct timerqueue_node node
         0 |       struct rb_node node
         0 |         unsigned long __rb_parent_color
         4 |         struct rb_node * rb_right
         8 |         struct rb_node * rb_left
        16 |       ktime_t expires
        24 |     ktime_t _softexpires
        32 |     enum hrtimer_restart (*)(struct hrtimer *) function
        36 |     struct hrtimer_clock_base * base
        40 |     u8 state
        41 |     u8 is_rel
        42 |     u8 is_soft
        43 |     u8 is_hard
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct task_cputime_atomic
         0 |   atomic64_t utime
         0 |     s64 counter
         8 |   atomic64_t stime
         8 |     s64 counter
        16 |   atomic64_t sum_exec_runtime
        16 |     s64 counter
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct cpu_itimer
         0 |   u64 expires
         8 |   u64 incr
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct rlimit
         0 |   __kernel_ulong_t rlim_cur
         4 |   __kernel_ulong_t rlim_max
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct pid
         0 |   struct refcount_struct count
         0 |     atomic_t refs
         0 |       int counter
         4 |   unsigned int level
         8 |   struct spinlock lock
         8 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |       struct raw_spinlock rlock
         8 |         arch_spinlock_t raw_lock
         8 |           volatile unsigned int slock
        12 |         unsigned int magic
        16 |         unsigned int owner_cpu
        20 |         void * owner
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
         8 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |         u8[16] __padding
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
        52 |   struct dentry * stashed
        56 |   u64 ino
        64 |   struct hlist_head[4] tasks
        80 |   struct hlist_head inodes
        80 |     struct hlist_node * first
        84 |   struct wait_queue_head wait_pidfd
        84 |     struct spinlock lock
        84 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        84 |         struct raw_spinlock rlock
        84 |           arch_spinlock_t raw_lock
        84 |             volatile unsigned int slock
        88 |           unsigned int magic
        92 |           unsigned int owner_cpu
        96 |           void * owner
       100 |           struct lockdep_map dep_map
       100 |             struct lock_class_key * key
       104 |             struct lock_class *[2] class_cache
       112 |             const char * name
       116 |             u8 wait_type_outer
       117 |             u8 wait_type_inner
       118 |             u8 lock_type
       120 |             int cpu
       124 |             unsigned long ip
        84 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        84 |           u8[16] __padding
       100 |           struct lockdep_map dep_map
       100 |             struct lock_class_key * key
       104 |             struct lock_class *[2] class_cache
       112 |             const char * name
       116 |             u8 wait_type_outer
       117 |             u8 wait_type_inner
       118 |             u8 lock_type
       120 |             int cpu
       124 |             unsigned long ip
       128 |     struct list_head head
       128 |       struct list_head * next
       132 |       struct list_head * prev
       136 |   struct callback_head rcu
       136 |     struct callback_head * next
       140 |     void (*)(struct callback_head *) func
       144 |   struct upid[] numbers
           | [sizeof=144, align=8]

*** Dumping AST Record Layout
         0 | struct rcu_sync
         0 |   int gp_state
         4 |   int gp_count
         8 |   struct wait_queue_head gp_wait
         8 |     struct spinlock lock
         8 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |         struct raw_spinlock rlock
         8 |           arch_spinlock_t raw_lock
         8 |             volatile unsigned int slock
        12 |           unsigned int magic
        16 |           unsigned int owner_cpu
        20 |           void * owner
        24 |           struct lockdep_map dep_map
        24 |             struct lock_class_key * key
        28 |             struct lock_class *[2] class_cache
        36 |             const char * name
        40 |             u8 wait_type_outer
        41 |             u8 wait_type_inner
        42 |             u8 lock_type
        44 |             int cpu
        48 |             unsigned long ip
         8 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |           u8[16] __padding
        24 |           struct lockdep_map dep_map
        24 |             struct lock_class_key * key
        28 |             struct lock_class *[2] class_cache
        36 |             const char * name
        40 |             u8 wait_type_outer
        41 |             u8 wait_type_inner
        42 |             u8 lock_type
        44 |             int cpu
        48 |             unsigned long ip
        52 |     struct list_head head
        52 |       struct list_head * next
        56 |       struct list_head * prev
        60 |   struct callback_head cb_head
        60 |     struct callback_head * next
        64 |     void (*)(struct callback_head *) func
           | [sizeof=68, align=4]

*** Dumping AST Record Layout
         0 | struct rcuwait
         0 |   struct task_struct * task
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct percpu_rw_semaphore
         0 |   struct rcu_sync rss
         0 |     int gp_state
         4 |     int gp_count
         8 |     struct wait_queue_head gp_wait
         8 |       struct spinlock lock
         8 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |           struct raw_spinlock rlock
         8 |             arch_spinlock_t raw_lock
         8 |               volatile unsigned int slock
        12 |             unsigned int magic
        16 |             unsigned int owner_cpu
        20 |             void * owner
        24 |             struct lockdep_map dep_map
        24 |               struct lock_class_key * key
        28 |               struct lock_class *[2] class_cache
        36 |               const char * name
        40 |               u8 wait_type_outer
        41 |               u8 wait_type_inner
        42 |               u8 lock_type
        44 |               int cpu
        48 |               unsigned long ip
         8 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |             u8[16] __padding
        24 |             struct lockdep_map dep_map
        24 |               struct lock_class_key * key
        28 |               struct lock_class *[2] class_cache
        36 |               const char * name
        40 |               u8 wait_type_outer
        41 |               u8 wait_type_inner
        42 |               u8 lock_type
        44 |               int cpu
        48 |               unsigned long ip
        52 |       struct list_head head
        52 |         struct list_head * next
        56 |         struct list_head * prev
        60 |     struct callback_head cb_head
        60 |       struct callback_head * next
        64 |       void (*)(struct callback_head *) func
        68 |   unsigned int * read_count
        72 |   struct rcuwait writer
        72 |     struct task_struct * task
        76 |   struct wait_queue_head waiters
        76 |     struct spinlock lock
        76 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        76 |         struct raw_spinlock rlock
        76 |           arch_spinlock_t raw_lock
        76 |             volatile unsigned int slock
        80 |           unsigned int magic
        84 |           unsigned int owner_cpu
        88 |           void * owner
        92 |           struct lockdep_map dep_map
        92 |             struct lock_class_key * key
        96 |             struct lock_class *[2] class_cache
       104 |             const char * name
       108 |             u8 wait_type_outer
       109 |             u8 wait_type_inner
       110 |             u8 lock_type
       112 |             int cpu
       116 |             unsigned long ip
        76 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        76 |           u8[16] __padding
        92 |           struct lockdep_map dep_map
        92 |             struct lock_class_key * key
        96 |             struct lock_class *[2] class_cache
       104 |             const char * name
       108 |             u8 wait_type_outer
       109 |             u8 wait_type_inner
       110 |             u8 lock_type
       112 |             int cpu
       116 |             unsigned long ip
       120 |     struct list_head head
       120 |       struct list_head * next
       124 |       struct list_head * prev
       128 |   atomic_t block
       128 |     int counter
       132 |   struct lockdep_map dep_map
       132 |     struct lock_class_key * key
       136 |     struct lock_class *[2] class_cache
       144 |     const char * name
       148 |     u8 wait_type_outer
       149 |     u8 wait_type_inner
       150 |     u8 lock_type
       152 |     int cpu
       156 |     unsigned long ip
           | [sizeof=160, align=4]

*** Dumping AST Record Layout
         0 | guid_t
         0 |   __u8[16] b
           | [sizeof=16, align=1]

*** Dumping AST Record Layout
         0 | uuid_t
         0 |   __u8[16] b
           | [sizeof=16, align=1]

*** Dumping AST Record Layout
         0 | vfsuid_t
         0 |   uid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | vfsgid_t
         0 |   gid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct percpu_ref
         0 |   unsigned long percpu_count_ptr
         4 |   struct percpu_ref_data * data
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct file_dedupe_range_info
         0 |   __s64 dest_fd
         8 |   __u64 dest_offset
        16 |   __u64 bytes_deduped
        24 |   __s32 status
        28 |   __u32 reserved
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | kprojid_t
         0 |   projid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union kqid::(anonymous at ../include/linux/quota.h:69:2)
         0 |   kuid_t uid
         0 |     uid_t val
         0 |   kgid_t gid
         0 |     gid_t val
         0 |   kprojid_t projid
         0 |     projid_t val
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct kqid
         0 |   union kqid::(anonymous at ../include/linux/quota.h:69:2) 
         0 |     kuid_t uid
         0 |       uid_t val
         0 |     kgid_t gid
         0 |       gid_t val
         0 |     kprojid_t projid
         0 |       projid_t val
         4 |   enum quota_type type
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct dqstats
         0 |   unsigned long[8] stat
        32 |   struct percpu_counter[8] counter
           | [sizeof=96, align=8]

*** Dumping AST Record Layout
         0 | struct qc_type_state
         0 |   unsigned int flags
         4 |   unsigned int spc_timelimit
         8 |   unsigned int ino_timelimit
        12 |   unsigned int rt_spc_timelimit
        16 |   unsigned int spc_warnlimit
        20 |   unsigned int ino_warnlimit
        24 |   unsigned int rt_spc_warnlimit
        32 |   unsigned long long ino
        40 |   blkcnt_t blocks
        48 |   blkcnt_t nextents
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct mem_dqinfo
         0 |   struct quota_format_type * dqi_format
         4 |   int dqi_fmt_id
         8 |   struct list_head dqi_dirty_list
         8 |     struct list_head * next
        12 |     struct list_head * prev
        16 |   unsigned long dqi_flags
        20 |   unsigned int dqi_bgrace
        24 |   unsigned int dqi_igrace
        32 |   qsize_t dqi_max_spc_limit
        40 |   qsize_t dqi_max_ino_limit
        48 |   void * dqi_priv
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | union file::(anonymous at ../include/linux/fs.h:996:2)
         0 |   struct callback_head f_task_work
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
         0 |   struct llist_node f_llist
         0 |     struct llist_node * next
         0 |   unsigned int f_iocb_flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct fown_struct
         0 |   rwlock_t lock
         0 |     arch_rwlock_t raw_lock
         0 |     unsigned int magic
         4 |     unsigned int owner_cpu
         8 |     void * owner
        12 |     struct lockdep_map dep_map
        12 |       struct lock_class_key * key
        16 |       struct lock_class *[2] class_cache
        24 |       const char * name
        28 |       u8 wait_type_outer
        29 |       u8 wait_type_inner
        30 |       u8 lock_type
        32 |       int cpu
        36 |       unsigned long ip
        40 |   struct pid * pid
        44 |   enum pid_type pid_type
        48 |   kuid_t uid
        48 |     uid_t val
        52 |   kuid_t euid
        52 |     uid_t val
        56 |   int signum
           | [sizeof=60, align=4]

*** Dumping AST Record Layout
         0 | struct file_ra_state
         0 |   unsigned long start
         4 |   unsigned int size
         8 |   unsigned int async_size
        12 |   unsigned int ra_pages
        16 |   unsigned int mmap_miss
        24 |   loff_t prev_pos
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct file
         0 |   union file::(anonymous at ../include/linux/fs.h:996:2) 
         0 |     struct callback_head f_task_work
         0 |       struct callback_head * next
         4 |       void (*)(struct callback_head *) func
         0 |     struct llist_node f_llist
         0 |       struct llist_node * next
         0 |     unsigned int f_iocb_flags
         8 |   struct spinlock f_lock
         8 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |       struct raw_spinlock rlock
         8 |         arch_spinlock_t raw_lock
         8 |           volatile unsigned int slock
        12 |         unsigned int magic
        16 |         unsigned int owner_cpu
        20 |         void * owner
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
         8 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |         u8[16] __padding
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
        52 |   fmode_t f_mode
        56 |   atomic_t f_count
        56 |     int counter
        60 |   struct mutex f_pos_lock
        60 |     atomic_t owner
        60 |       int counter
        64 |     struct raw_spinlock wait_lock
        64 |       arch_spinlock_t raw_lock
        64 |         volatile unsigned int slock
        68 |       unsigned int magic
        72 |       unsigned int owner_cpu
        76 |       void * owner
        80 |       struct lockdep_map dep_map
        80 |         struct lock_class_key * key
        84 |         struct lock_class *[2] class_cache
        92 |         const char * name
        96 |         u8 wait_type_outer
        97 |         u8 wait_type_inner
        98 |         u8 lock_type
       100 |         int cpu
       104 |         unsigned long ip
       108 |     struct list_head wait_list
       108 |       struct list_head * next
       112 |       struct list_head * prev
       116 |     void * magic
       120 |     struct lockdep_map dep_map
       120 |       struct lock_class_key * key
       124 |       struct lock_class *[2] class_cache
       132 |       const char * name
       136 |       u8 wait_type_outer
       137 |       u8 wait_type_inner
       138 |       u8 lock_type
       140 |       int cpu
       144 |       unsigned long ip
       152 |   loff_t f_pos
       160 |   unsigned int f_flags
       164 |   struct fown_struct f_owner
       164 |     rwlock_t lock
       164 |       arch_rwlock_t raw_lock
       164 |       unsigned int magic
       168 |       unsigned int owner_cpu
       172 |       void * owner
       176 |       struct lockdep_map dep_map
       176 |         struct lock_class_key * key
       180 |         struct lock_class *[2] class_cache
       188 |         const char * name
       192 |         u8 wait_type_outer
       193 |         u8 wait_type_inner
       194 |         u8 lock_type
       196 |         int cpu
       200 |         unsigned long ip
       204 |     struct pid * pid
       208 |     enum pid_type pid_type
       212 |     kuid_t uid
       212 |       uid_t val
       216 |     kuid_t euid
       216 |       uid_t val
       220 |     int signum
       224 |   const struct cred * f_cred
       232 |   struct file_ra_state f_ra
       232 |     unsigned long start
       236 |     unsigned int size
       240 |     unsigned int async_size
       244 |     unsigned int ra_pages
       248 |     unsigned int mmap_miss
       256 |     loff_t prev_pos
       264 |   struct path f_path
       264 |     struct vfsmount * mnt
       268 |     struct dentry * dentry
       272 |   struct inode * f_inode
       276 |   const struct file_operations * f_op
       280 |   u64 f_version
       288 |   void * private_data
       292 |   struct hlist_head * f_ep
       296 |   struct address_space * f_mapping
       300 |   errseq_t f_wb_err
       304 |   errseq_t f_sb_err
           | [sizeof=312, align=8]

*** Dumping AST Record Layout
         0 | union inode::(anonymous at ../include/linux/fs.h:661:2)
         0 |   const unsigned int i_nlink
         0 |   unsigned int __i_nlink
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union inode::(anonymous at ../include/linux/fs.h:704:2)
         0 |   struct hlist_head i_dentry
         0 |     struct hlist_node * first
         0 |   struct callback_head i_rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union inode::(anonymous at ../include/linux/fs.h:716:2)
         0 |   const struct file_operations * i_fop
         0 |   void (*)(struct inode *) free_inode
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct address_space
         0 |   struct inode * host
         4 |   struct xarray i_pages
         4 |     struct spinlock xa_lock
         4 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         4 |         struct raw_spinlock rlock
         4 |           arch_spinlock_t raw_lock
         4 |             volatile unsigned int slock
         8 |           unsigned int magic
        12 |           unsigned int owner_cpu
        16 |           void * owner
        20 |           struct lockdep_map dep_map
        20 |             struct lock_class_key * key
        24 |             struct lock_class *[2] class_cache
        32 |             const char * name
        36 |             u8 wait_type_outer
        37 |             u8 wait_type_inner
        38 |             u8 lock_type
        40 |             int cpu
        44 |             unsigned long ip
         4 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         4 |           u8[16] __padding
        20 |           struct lockdep_map dep_map
        20 |             struct lock_class_key * key
        24 |             struct lock_class *[2] class_cache
        32 |             const char * name
        36 |             u8 wait_type_outer
        37 |             u8 wait_type_inner
        38 |             u8 lock_type
        40 |             int cpu
        44 |             unsigned long ip
        48 |     gfp_t xa_flags
        52 |     void * xa_head
        56 |   struct rw_semaphore invalidate_lock
        56 |     atomic_t count
        56 |       int counter
        60 |     atomic_t owner
        60 |       int counter
        64 |     struct raw_spinlock wait_lock
        64 |       arch_spinlock_t raw_lock
        64 |         volatile unsigned int slock
        68 |       unsigned int magic
        72 |       unsigned int owner_cpu
        76 |       void * owner
        80 |       struct lockdep_map dep_map
        80 |         struct lock_class_key * key
        84 |         struct lock_class *[2] class_cache
        92 |         const char * name
        96 |         u8 wait_type_outer
        97 |         u8 wait_type_inner
        98 |         u8 lock_type
       100 |         int cpu
       104 |         unsigned long ip
       108 |     struct list_head wait_list
       108 |       struct list_head * next
       112 |       struct list_head * prev
       116 |     void * magic
       120 |     struct lockdep_map dep_map
       120 |       struct lock_class_key * key
       124 |       struct lock_class *[2] class_cache
       132 |       const char * name
       136 |       u8 wait_type_outer
       137 |       u8 wait_type_inner
       138 |       u8 lock_type
       140 |       int cpu
       144 |       unsigned long ip
       148 |   gfp_t gfp_mask
       152 |   atomic_t i_mmap_writable
       152 |     int counter
       156 |   struct rb_root_cached i_mmap
       156 |     struct rb_root rb_root
       156 |       struct rb_node * rb_node
       160 |     struct rb_node * rb_leftmost
       164 |   unsigned long nrpages
       168 |   unsigned long writeback_index
       172 |   const struct address_space_operations * a_ops
       176 |   unsigned long flags
       180 |   errseq_t wb_err
       184 |   struct spinlock i_private_lock
       184 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       184 |       struct raw_spinlock rlock
       184 |         arch_spinlock_t raw_lock
       184 |           volatile unsigned int slock
       188 |         unsigned int magic
       192 |         unsigned int owner_cpu
       196 |         void * owner
       200 |         struct lockdep_map dep_map
       200 |           struct lock_class_key * key
       204 |           struct lock_class *[2] class_cache
       212 |           const char * name
       216 |           u8 wait_type_outer
       217 |           u8 wait_type_inner
       218 |           u8 lock_type
       220 |           int cpu
       224 |           unsigned long ip
       184 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       184 |         u8[16] __padding
       200 |         struct lockdep_map dep_map
       200 |           struct lock_class_key * key
       204 |           struct lock_class *[2] class_cache
       212 |           const char * name
       216 |           u8 wait_type_outer
       217 |           u8 wait_type_inner
       218 |           u8 lock_type
       220 |           int cpu
       224 |           unsigned long ip
       228 |   struct list_head i_private_list
       228 |     struct list_head * next
       232 |     struct list_head * prev
       236 |   struct rw_semaphore i_mmap_rwsem
       236 |     atomic_t count
       236 |       int counter
       240 |     atomic_t owner
       240 |       int counter
       244 |     struct raw_spinlock wait_lock
       244 |       arch_spinlock_t raw_lock
       244 |         volatile unsigned int slock
       248 |       unsigned int magic
       252 |       unsigned int owner_cpu
       256 |       void * owner
       260 |       struct lockdep_map dep_map
       260 |         struct lock_class_key * key
       264 |         struct lock_class *[2] class_cache
       272 |         const char * name
       276 |         u8 wait_type_outer
       277 |         u8 wait_type_inner
       278 |         u8 lock_type
       280 |         int cpu
       284 |         unsigned long ip
       288 |     struct list_head wait_list
       288 |       struct list_head * next
       292 |       struct list_head * prev
       296 |     void * magic
       300 |     struct lockdep_map dep_map
       300 |       struct lock_class_key * key
       304 |       struct lock_class *[2] class_cache
       312 |       const char * name
       316 |       u8 wait_type_outer
       317 |       u8 wait_type_inner
       318 |       u8 lock_type
       320 |       int cpu
       324 |       unsigned long ip
       328 |   void * i_private_data
           | [sizeof=332, align=4]

*** Dumping AST Record Layout
         0 | union inode::(anonymous at ../include/linux/fs.h:723:2)
         0 |   struct pipe_inode_info * i_pipe
         0 |   struct cdev * i_cdev
         0 |   char * i_link
         0 |   unsigned int i_dir_seq
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct inode
         0 |   umode_t i_mode
         2 |   unsigned short i_opflags
         4 |   kuid_t i_uid
         4 |     uid_t val
         8 |   kgid_t i_gid
         8 |     gid_t val
        12 |   unsigned int i_flags
        16 |   struct posix_acl * i_acl
        20 |   struct posix_acl * i_default_acl
        24 |   const struct inode_operations * i_op
        28 |   struct super_block * i_sb
        32 |   struct address_space * i_mapping
        36 |   unsigned long i_ino
        40 |   union inode::(anonymous at ../include/linux/fs.h:661:2) 
        40 |     const unsigned int i_nlink
        40 |     unsigned int __i_nlink
        44 |   dev_t i_rdev
        48 |   loff_t i_size
        56 |   time64_t i_atime_sec
        64 |   time64_t i_mtime_sec
        72 |   time64_t i_ctime_sec
        80 |   u32 i_atime_nsec
        84 |   u32 i_mtime_nsec
        88 |   u32 i_ctime_nsec
        92 |   u32 i_generation
        96 |   struct spinlock i_lock
        96 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        96 |       struct raw_spinlock rlock
        96 |         arch_spinlock_t raw_lock
        96 |           volatile unsigned int slock
       100 |         unsigned int magic
       104 |         unsigned int owner_cpu
       108 |         void * owner
       112 |         struct lockdep_map dep_map
       112 |           struct lock_class_key * key
       116 |           struct lock_class *[2] class_cache
       124 |           const char * name
       128 |           u8 wait_type_outer
       129 |           u8 wait_type_inner
       130 |           u8 lock_type
       132 |           int cpu
       136 |           unsigned long ip
        96 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        96 |         u8[16] __padding
       112 |         struct lockdep_map dep_map
       112 |           struct lock_class_key * key
       116 |           struct lock_class *[2] class_cache
       124 |           const char * name
       128 |           u8 wait_type_outer
       129 |           u8 wait_type_inner
       130 |           u8 lock_type
       132 |           int cpu
       136 |           unsigned long ip
       140 |   unsigned short i_bytes
       142 |   u8 i_blkbits
       143 |   enum rw_hint i_write_hint
       144 |   blkcnt_t i_blocks
       152 |   unsigned long i_state
       156 |   struct rw_semaphore i_rwsem
       156 |     atomic_t count
       156 |       int counter
       160 |     atomic_t owner
       160 |       int counter
       164 |     struct raw_spinlock wait_lock
       164 |       arch_spinlock_t raw_lock
       164 |         volatile unsigned int slock
       168 |       unsigned int magic
       172 |       unsigned int owner_cpu
       176 |       void * owner
       180 |       struct lockdep_map dep_map
       180 |         struct lock_class_key * key
       184 |         struct lock_class *[2] class_cache
       192 |         const char * name
       196 |         u8 wait_type_outer
       197 |         u8 wait_type_inner
       198 |         u8 lock_type
       200 |         int cpu
       204 |         unsigned long ip
       208 |     struct list_head wait_list
       208 |       struct list_head * next
       212 |       struct list_head * prev
       216 |     void * magic
       220 |     struct lockdep_map dep_map
       220 |       struct lock_class_key * key
       224 |       struct lock_class *[2] class_cache
       232 |       const char * name
       236 |       u8 wait_type_outer
       237 |       u8 wait_type_inner
       238 |       u8 lock_type
       240 |       int cpu
       244 |       unsigned long ip
       248 |   unsigned long dirtied_when
       252 |   unsigned long dirtied_time_when
       256 |   struct hlist_node i_hash
       256 |     struct hlist_node * next
       260 |     struct hlist_node ** pprev
       264 |   struct list_head i_io_list
       264 |     struct list_head * next
       268 |     struct list_head * prev
       272 |   struct list_head i_lru
       272 |     struct list_head * next
       276 |     struct list_head * prev
       280 |   struct list_head i_sb_list
       280 |     struct list_head * next
       284 |     struct list_head * prev
       288 |   struct list_head i_wb_list
       288 |     struct list_head * next
       292 |     struct list_head * prev
       296 |   union inode::(anonymous at ../include/linux/fs.h:704:2) 
       296 |     struct hlist_head i_dentry
       296 |       struct hlist_node * first
       296 |     struct callback_head i_rcu
       296 |       struct callback_head * next
       300 |       void (*)(struct callback_head *) func
       304 |   atomic64_t i_version
       304 |     s64 counter
       312 |   atomic64_t i_sequence
       312 |     s64 counter
       320 |   atomic_t i_count
       320 |     int counter
       324 |   atomic_t i_dio_count
       324 |     int counter
       328 |   atomic_t i_writecount
       328 |     int counter
       332 |   atomic_t i_readcount
       332 |     int counter
       336 |   union inode::(anonymous at ../include/linux/fs.h:716:2) 
       336 |     const struct file_operations * i_fop
       336 |     void (*)(struct inode *) free_inode
       340 |   struct file_lock_context * i_flctx
       344 |   struct address_space i_data
       344 |     struct inode * host
       348 |     struct xarray i_pages
       348 |       struct spinlock xa_lock
       348 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       348 |           struct raw_spinlock rlock
       348 |             arch_spinlock_t raw_lock
       348 |               volatile unsigned int slock
       352 |             unsigned int magic
       356 |             unsigned int owner_cpu
       360 |             void * owner
       364 |             struct lockdep_map dep_map
       364 |               struct lock_class_key * key
       368 |               struct lock_class *[2] class_cache
       376 |               const char * name
       380 |               u8 wait_type_outer
       381 |               u8 wait_type_inner
       382 |               u8 lock_type
       384 |               int cpu
       388 |               unsigned long ip
       348 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       348 |             u8[16] __padding
       364 |             struct lockdep_map dep_map
       364 |               struct lock_class_key * key
       368 |               struct lock_class *[2] class_cache
       376 |               const char * name
       380 |               u8 wait_type_outer
       381 |               u8 wait_type_inner
       382 |               u8 lock_type
       384 |               int cpu
       388 |               unsigned long ip
       392 |       gfp_t xa_flags
       396 |       void * xa_head
       400 |     struct rw_semaphore invalidate_lock
       400 |       atomic_t count
       400 |         int counter
       404 |       atomic_t owner
       404 |         int counter
       408 |       struct raw_spinlock wait_lock
       408 |         arch_spinlock_t raw_lock
       408 |           volatile unsigned int slock
       412 |         unsigned int magic
       416 |         unsigned int owner_cpu
       420 |         void * owner
       424 |         struct lockdep_map dep_map
       424 |           struct lock_class_key * key
       428 |           struct lock_class *[2] class_cache
       436 |           const char * name
       440 |           u8 wait_type_outer
       441 |           u8 wait_type_inner
       442 |           u8 lock_type
       444 |           int cpu
       448 |           unsigned long ip
       452 |       struct list_head wait_list
       452 |         struct list_head * next
       456 |         struct list_head * prev
       460 |       void * magic
       464 |       struct lockdep_map dep_map
       464 |         struct lock_class_key * key
       468 |         struct lock_class *[2] class_cache
       476 |         const char * name
       480 |         u8 wait_type_outer
       481 |         u8 wait_type_inner
       482 |         u8 lock_type
       484 |         int cpu
       488 |         unsigned long ip
       492 |     gfp_t gfp_mask
       496 |     atomic_t i_mmap_writable
       496 |       int counter
       500 |     struct rb_root_cached i_mmap
       500 |       struct rb_root rb_root
       500 |         struct rb_node * rb_node
       504 |       struct rb_node * rb_leftmost
       508 |     unsigned long nrpages
       512 |     unsigned long writeback_index
       516 |     const struct address_space_operations * a_ops
       520 |     unsigned long flags
       524 |     errseq_t wb_err
       528 |     struct spinlock i_private_lock
       528 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       528 |         struct raw_spinlock rlock
       528 |           arch_spinlock_t raw_lock
       528 |             volatile unsigned int slock
       532 |           unsigned int magic
       536 |           unsigned int owner_cpu
       540 |           void * owner
       544 |           struct lockdep_map dep_map
       544 |             struct lock_class_key * key
       548 |             struct lock_class *[2] class_cache
       556 |             const char * name
       560 |             u8 wait_type_outer
       561 |             u8 wait_type_inner
       562 |             u8 lock_type
       564 |             int cpu
       568 |             unsigned long ip
       528 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       528 |           u8[16] __padding
       544 |           struct lockdep_map dep_map
       544 |             struct lock_class_key * key
       548 |             struct lock_class *[2] class_cache
       556 |             const char * name
       560 |             u8 wait_type_outer
       561 |             u8 wait_type_inner
       562 |             u8 lock_type
       564 |             int cpu
       568 |             unsigned long ip
       572 |     struct list_head i_private_list
       572 |       struct list_head * next
       576 |       struct list_head * prev
       580 |     struct rw_semaphore i_mmap_rwsem
       580 |       atomic_t count
       580 |         int counter
       584 |       atomic_t owner
       584 |         int counter
       588 |       struct raw_spinlock wait_lock
       588 |         arch_spinlock_t raw_lock
       588 |           volatile unsigned int slock
       592 |         unsigned int magic
       596 |         unsigned int owner_cpu
       600 |         void * owner
       604 |         struct lockdep_map dep_map
       604 |           struct lock_class_key * key
       608 |           struct lock_class *[2] class_cache
       616 |           const char * name
       620 |           u8 wait_type_outer
       621 |           u8 wait_type_inner
       622 |           u8 lock_type
       624 |           int cpu
       628 |           unsigned long ip
       632 |       struct list_head wait_list
       632 |         struct list_head * next
       636 |         struct list_head * prev
       640 |       void * magic
       644 |       struct lockdep_map dep_map
       644 |         struct lock_class_key * key
       648 |         struct lock_class *[2] class_cache
       656 |         const char * name
       660 |         u8 wait_type_outer
       661 |         u8 wait_type_inner
       662 |         u8 lock_type
       664 |         int cpu
       668 |         unsigned long ip
       672 |     void * i_private_data
       676 |   struct list_head i_devices
       676 |     struct list_head * next
       680 |     struct list_head * prev
       684 |   union inode::(anonymous at ../include/linux/fs.h:723:2) 
       684 |     struct pipe_inode_info * i_pipe
       684 |     struct cdev * i_cdev
       684 |     char * i_link
       684 |     unsigned int i_dir_seq
       688 |   __u32 i_fsnotify_mask
       692 |   struct fsnotify_mark_connector * i_fsnotify_marks
       696 |   struct fscrypt_inode_info * i_crypt_info
       700 |   void * i_private
           | [sizeof=704, align=8]

*** Dumping AST Record Layout
         0 | struct quota_info
         0 |   unsigned int flags
         4 |   struct rw_semaphore dqio_sem
         4 |     atomic_t count
         4 |       int counter
         8 |     atomic_t owner
         8 |       int counter
        12 |     struct raw_spinlock wait_lock
        12 |       arch_spinlock_t raw_lock
        12 |         volatile unsigned int slock
        16 |       unsigned int magic
        20 |       unsigned int owner_cpu
        24 |       void * owner
        28 |       struct lockdep_map dep_map
        28 |         struct lock_class_key * key
        32 |         struct lock_class *[2] class_cache
        40 |         const char * name
        44 |         u8 wait_type_outer
        45 |         u8 wait_type_inner
        46 |         u8 lock_type
        48 |         int cpu
        52 |         unsigned long ip
        56 |     struct list_head wait_list
        56 |       struct list_head * next
        60 |       struct list_head * prev
        64 |     void * magic
        68 |     struct lockdep_map dep_map
        68 |       struct lock_class_key * key
        72 |       struct lock_class *[2] class_cache
        80 |       const char * name
        84 |       u8 wait_type_outer
        85 |       u8 wait_type_inner
        86 |       u8 lock_type
        88 |       int cpu
        92 |       unsigned long ip
        96 |   struct inode *[3] files
       112 |   struct mem_dqinfo[3] info
       280 |   const struct quota_format_ops *[3] ops
           | [sizeof=296, align=8]

*** Dumping AST Record Layout
         0 | struct sb_writers
         0 |   unsigned short frozen
         4 |   int freeze_kcount
         8 |   int freeze_ucount
        12 |   struct percpu_rw_semaphore[3] rw_sem
           | [sizeof=492, align=4]

*** Dumping AST Record Layout
         0 | struct super_block
         0 |   struct list_head s_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   dev_t s_dev
        12 |   unsigned char s_blocksize_bits
        16 |   unsigned long s_blocksize
        24 |   loff_t s_maxbytes
        32 |   struct file_system_type * s_type
        36 |   const struct super_operations * s_op
        40 |   const struct dquot_operations * dq_op
        44 |   const struct quotactl_ops * s_qcop
        48 |   const struct export_operations * s_export_op
        52 |   unsigned long s_flags
        56 |   unsigned long s_iflags
        60 |   unsigned long s_magic
        64 |   struct dentry * s_root
        68 |   struct rw_semaphore s_umount
        68 |     atomic_t count
        68 |       int counter
        72 |     atomic_t owner
        72 |       int counter
        76 |     struct raw_spinlock wait_lock
        76 |       arch_spinlock_t raw_lock
        76 |         volatile unsigned int slock
        80 |       unsigned int magic
        84 |       unsigned int owner_cpu
        88 |       void * owner
        92 |       struct lockdep_map dep_map
        92 |         struct lock_class_key * key
        96 |         struct lock_class *[2] class_cache
       104 |         const char * name
       108 |         u8 wait_type_outer
       109 |         u8 wait_type_inner
       110 |         u8 lock_type
       112 |         int cpu
       116 |         unsigned long ip
       120 |     struct list_head wait_list
       120 |       struct list_head * next
       124 |       struct list_head * prev
       128 |     void * magic
       132 |     struct lockdep_map dep_map
       132 |       struct lock_class_key * key
       136 |       struct lock_class *[2] class_cache
       144 |       const char * name
       148 |       u8 wait_type_outer
       149 |       u8 wait_type_inner
       150 |       u8 lock_type
       152 |       int cpu
       156 |       unsigned long ip
       160 |   int s_count
       164 |   atomic_t s_active
       164 |     int counter
       168 |   const struct xattr_handler *const * s_xattr
       172 |   const struct fscrypt_operations * s_cop
       176 |   struct fscrypt_keyring * s_master_keys
       180 |   struct unicode_map * s_encoding
       184 |   __u16 s_encoding_flags
       188 |   struct hlist_bl_head s_roots
       188 |     struct hlist_bl_node * first
       192 |   struct list_head s_mounts
       192 |     struct list_head * next
       196 |     struct list_head * prev
       200 |   struct block_device * s_bdev
       204 |   struct file * s_bdev_file
       208 |   struct backing_dev_info * s_bdi
       212 |   struct mtd_info * s_mtd
       216 |   struct hlist_node s_instances
       216 |     struct hlist_node * next
       220 |     struct hlist_node ** pprev
       224 |   unsigned int s_quota_types
       232 |   struct quota_info s_dquot
       232 |     unsigned int flags
       236 |     struct rw_semaphore dqio_sem
       236 |       atomic_t count
       236 |         int counter
       240 |       atomic_t owner
       240 |         int counter
       244 |       struct raw_spinlock wait_lock
       244 |         arch_spinlock_t raw_lock
       244 |           volatile unsigned int slock
       248 |         unsigned int magic
       252 |         unsigned int owner_cpu
       256 |         void * owner
       260 |         struct lockdep_map dep_map
       260 |           struct lock_class_key * key
       264 |           struct lock_class *[2] class_cache
       272 |           const char * name
       276 |           u8 wait_type_outer
       277 |           u8 wait_type_inner
       278 |           u8 lock_type
       280 |           int cpu
       284 |           unsigned long ip
       288 |       struct list_head wait_list
       288 |         struct list_head * next
       292 |         struct list_head * prev
       296 |       void * magic
       300 |       struct lockdep_map dep_map
       300 |         struct lock_class_key * key
       304 |         struct lock_class *[2] class_cache
       312 |         const char * name
       316 |         u8 wait_type_outer
       317 |         u8 wait_type_inner
       318 |         u8 lock_type
       320 |         int cpu
       324 |         unsigned long ip
       328 |     struct inode *[3] files
       344 |     struct mem_dqinfo[3] info
       512 |     const struct quota_format_ops *[3] ops
       528 |   struct sb_writers s_writers
       528 |     unsigned short frozen
       532 |     int freeze_kcount
       536 |     int freeze_ucount
       540 |     struct percpu_rw_semaphore[3] rw_sem
      1020 |   void * s_fs_info
      1024 |   u32 s_time_gran
      1032 |   time64_t s_time_min
      1040 |   time64_t s_time_max
      1048 |   __u32 s_fsnotify_mask
      1052 |   struct fsnotify_sb_info * s_fsnotify_info
      1056 |   char[32] s_id
      1088 |   uuid_t s_uuid
      1088 |     __u8[16] b
      1104 |   u8 s_uuid_len
      1105 |   char[37] s_sysfs_name
      1144 |   unsigned int s_max_links
      1148 |   struct mutex s_vfs_rename_mutex
      1148 |     atomic_t owner
      1148 |       int counter
      1152 |     struct raw_spinlock wait_lock
      1152 |       arch_spinlock_t raw_lock
      1152 |         volatile unsigned int slock
      1156 |       unsigned int magic
      1160 |       unsigned int owner_cpu
      1164 |       void * owner
      1168 |       struct lockdep_map dep_map
      1168 |         struct lock_class_key * key
      1172 |         struct lock_class *[2] class_cache
      1180 |         const char * name
      1184 |         u8 wait_type_outer
      1185 |         u8 wait_type_inner
      1186 |         u8 lock_type
      1188 |         int cpu
      1192 |         unsigned long ip
      1196 |     struct list_head wait_list
      1196 |       struct list_head * next
      1200 |       struct list_head * prev
      1204 |     void * magic
      1208 |     struct lockdep_map dep_map
      1208 |       struct lock_class_key * key
      1212 |       struct lock_class *[2] class_cache
      1220 |       const char * name
      1224 |       u8 wait_type_outer
      1225 |       u8 wait_type_inner
      1226 |       u8 lock_type
      1228 |       int cpu
      1232 |       unsigned long ip
      1236 |   const char * s_subtype
      1240 |   const struct dentry_operations * s_d_op
      1244 |   struct shrinker * s_shrink
      1248 |   atomic_t s_remove_count
      1248 |     int counter
      1252 |   int s_readonly_remount
      1256 |   errseq_t s_wb_err
      1260 |   struct workqueue_struct * s_dio_done_wq
      1264 |   struct hlist_head s_pins
      1264 |     struct hlist_node * first
      1268 |   struct user_namespace * s_user_ns
      1272 |   struct list_lru s_dentry_lru
      1272 |     struct list_lru_node * node
      1276 |   struct list_lru s_inode_lru
      1276 |     struct list_lru_node * node
      1280 |   struct callback_head rcu
      1280 |     struct callback_head * next
      1284 |     void (*)(struct callback_head *) func
      1288 |   struct work_struct destroy_work
      1288 |     atomic_t data
      1288 |       int counter
      1292 |     struct list_head entry
      1292 |       struct list_head * next
      1296 |       struct list_head * prev
      1300 |     work_func_t func
      1304 |     struct lockdep_map lockdep_map
      1304 |       struct lock_class_key * key
      1308 |       struct lock_class *[2] class_cache
      1316 |       const char * name
      1320 |       u8 wait_type_outer
      1321 |       u8 wait_type_inner
      1322 |       u8 lock_type
      1324 |       int cpu
      1328 |       unsigned long ip
      1332 |   struct mutex s_sync_lock
      1332 |     atomic_t owner
      1332 |       int counter
      1336 |     struct raw_spinlock wait_lock
      1336 |       arch_spinlock_t raw_lock
      1336 |         volatile unsigned int slock
      1340 |       unsigned int magic
      1344 |       unsigned int owner_cpu
      1348 |       void * owner
      1352 |       struct lockdep_map dep_map
      1352 |         struct lock_class_key * key
      1356 |         struct lock_class *[2] class_cache
      1364 |         const char * name
      1368 |         u8 wait_type_outer
      1369 |         u8 wait_type_inner
      1370 |         u8 lock_type
      1372 |         int cpu
      1376 |         unsigned long ip
      1380 |     struct list_head wait_list
      1380 |       struct list_head * next
      1384 |       struct list_head * prev
      1388 |     void * magic
      1392 |     struct lockdep_map dep_map
      1392 |       struct lock_class_key * key
      1396 |       struct lock_class *[2] class_cache
      1404 |       const char * name
      1408 |       u8 wait_type_outer
      1409 |       u8 wait_type_inner
      1410 |       u8 lock_type
      1412 |       int cpu
      1416 |       unsigned long ip
      1420 |   int s_stack_depth
      1424 |   struct spinlock s_inode_list_lock
      1424 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1424 |       struct raw_spinlock rlock
      1424 |         arch_spinlock_t raw_lock
      1424 |           volatile unsigned int slock
      1428 |         unsigned int magic
      1432 |         unsigned int owner_cpu
      1436 |         void * owner
      1440 |         struct lockdep_map dep_map
      1440 |           struct lock_class_key * key
      1444 |           struct lock_class *[2] class_cache
      1452 |           const char * name
      1456 |           u8 wait_type_outer
      1457 |           u8 wait_type_inner
      1458 |           u8 lock_type
      1460 |           int cpu
      1464 |           unsigned long ip
      1424 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1424 |         u8[16] __padding
      1440 |         struct lockdep_map dep_map
      1440 |           struct lock_class_key * key
      1444 |           struct lock_class *[2] class_cache
      1452 |           const char * name
      1456 |           u8 wait_type_outer
      1457 |           u8 wait_type_inner
      1458 |           u8 lock_type
      1460 |           int cpu
      1464 |           unsigned long ip
      1468 |   struct list_head s_inodes
      1468 |     struct list_head * next
      1472 |     struct list_head * prev
      1476 |   struct spinlock s_inode_wblist_lock
      1476 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1476 |       struct raw_spinlock rlock
      1476 |         arch_spinlock_t raw_lock
      1476 |           volatile unsigned int slock
      1480 |         unsigned int magic
      1484 |         unsigned int owner_cpu
      1488 |         void * owner
      1492 |         struct lockdep_map dep_map
      1492 |           struct lock_class_key * key
      1496 |           struct lock_class *[2] class_cache
      1504 |           const char * name
      1508 |           u8 wait_type_outer
      1509 |           u8 wait_type_inner
      1510 |           u8 lock_type
      1512 |           int cpu
      1516 |           unsigned long ip
      1476 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1476 |         u8[16] __padding
      1492 |         struct lockdep_map dep_map
      1492 |           struct lock_class_key * key
      1496 |           struct lock_class *[2] class_cache
      1504 |           const char * name
      1508 |           u8 wait_type_outer
      1509 |           u8 wait_type_inner
      1510 |           u8 lock_type
      1512 |           int cpu
      1516 |           unsigned long ip
      1520 |   struct list_head s_inodes_wb
      1520 |     struct list_head * next
      1524 |     struct list_head * prev
           | [sizeof=1528, align=8]

*** Dumping AST Record Layout
         0 | pgprot_t
         0 |   unsigned long pgprot
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union vm_area_struct::(anonymous at ../include/linux/mm_types.h:685:2)
         0 |   const vm_flags_t vm_flags
         0 |   vm_flags_t __vm_flags
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct vm_area_struct::(unnamed at ../include/linux/mm_types.h:717:2)
         0 |   struct rb_node rb
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
        12 |   unsigned long rb_subtree_last
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct vm_userfaultfd_ctx
         0 |   struct userfaultfd_ctx * ctx
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct vm_area_struct
         0 |   union vm_area_struct::(anonymous at ../include/linux/mm_types.h:667:2) 
         0 |     struct vm_area_struct::(anonymous at ../include/linux/mm_types.h:668:3) 
         0 |       unsigned long vm_start
         4 |       unsigned long vm_end
         8 |   struct mm_struct * vm_mm
        12 |   pgprot_t vm_page_prot
        12 |     unsigned long pgprot
        16 |   union vm_area_struct::(anonymous at ../include/linux/mm_types.h:685:2) 
        16 |     const vm_flags_t vm_flags
        16 |     vm_flags_t __vm_flags
        20 |   struct vm_area_struct::(unnamed at ../include/linux/mm_types.h:717:2) shared
        20 |     struct rb_node rb
        20 |       unsigned long __rb_parent_color
        24 |       struct rb_node * rb_right
        28 |       struct rb_node * rb_left
        32 |     unsigned long rb_subtree_last
        36 |   struct list_head anon_vma_chain
        36 |     struct list_head * next
        40 |     struct list_head * prev
        44 |   struct anon_vma * anon_vma
        48 |   const struct vm_operations_struct * vm_ops
        52 |   unsigned long vm_pgoff
        56 |   struct file * vm_file
        60 |   void * vm_private_data
        64 |   struct vm_userfaultfd_ctx vm_userfaultfd_ctx
        64 |     struct userfaultfd_ctx * ctx
           | [sizeof=68, align=4]

*** Dumping AST Record Layout
         0 | struct __va_list_tag
         0 |   void * __current_saved_reg_area_pointer
         4 |   void * __saved_reg_area_end_pointer
         8 |   void * __overflow_area_pointer
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct filename
         0 |   const char * name
         4 |   const char * uptr
         8 |   atomic_t refcnt
         8 |     int counter
        12 |   struct audit_names * aname
        16 |   const char[] iname
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct vfsmount
         0 |   struct dentry * mnt_root
         4 |   struct super_block * mnt_sb
         8 |   int mnt_flags
        12 |   struct mnt_idmap * mnt_idmap
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct file_operations
         0 |   struct module * owner
         4 |   fop_flags_t fop_flags
         8 |   loff_t (*)(struct file *, loff_t, int) llseek
        12 |   ssize_t (*)(struct file *, char *, size_t, loff_t *) read
        16 |   ssize_t (*)(struct file *, const char *, size_t, loff_t *) write
        20 |   ssize_t (*)(struct kiocb *, struct iov_iter *) read_iter
        24 |   ssize_t (*)(struct kiocb *, struct iov_iter *) write_iter
        28 |   int (*)(struct kiocb *, struct io_comp_batch *, unsigned int) iopoll
        32 |   int (*)(struct file *, struct dir_context *) iterate_shared
        36 |   __poll_t (*)(struct file *, struct poll_table_struct *) poll
        40 |   long (*)(struct file *, unsigned int, unsigned long) unlocked_ioctl
        44 |   long (*)(struct file *, unsigned int, unsigned long) compat_ioctl
        48 |   int (*)(struct file *, struct vm_area_struct *) mmap
        52 |   int (*)(struct inode *, struct file *) open
        56 |   int (*)(struct file *, fl_owner_t) flush
        60 |   int (*)(struct inode *, struct file *) release
        64 |   int (*)(struct file *, loff_t, loff_t, int) fsync
        68 |   int (*)(int, struct file *, int) fasync
        72 |   int (*)(struct file *, int, struct file_lock *) lock
        76 |   unsigned long (*)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long) get_unmapped_area
        80 |   int (*)(int) check_flags
        84 |   int (*)(struct file *, int, struct file_lock *) flock
        88 |   ssize_t (*)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int) splice_write
        92 |   ssize_t (*)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int) splice_read
        96 |   void (*)(struct file *) splice_eof
       100 |   int (*)(struct file *, int, struct file_lease **, void **) setlease
       104 |   long (*)(struct file *, int, loff_t, loff_t) fallocate
       108 |   void (*)(struct seq_file *, struct file *) show_fdinfo
       112 |   ssize_t (*)(struct file *, loff_t, struct file *, loff_t, size_t, unsigned int) copy_file_range
       116 |   loff_t (*)(struct file *, loff_t, struct file *, loff_t, loff_t, unsigned int) remap_file_range
       120 |   int (*)(struct file *, loff_t, loff_t, int) fadvise
       124 |   int (*)(struct io_uring_cmd *, unsigned int) uring_cmd
       128 |   int (*)(struct io_uring_cmd *, struct io_comp_batch *, unsigned int) uring_cmd_iopoll
           | [sizeof=132, align=4]

*** Dumping AST Record Layout
         0 | union kiocb::(anonymous at ../include/linux/fs.h:373:2)
         0 |   struct wait_page_queue * ki_waitq
         0 |   ssize_t (*)(void *) dio_complete
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct kiocb
         0 |   struct file * ki_filp
         8 |   loff_t ki_pos
        16 |   void (*)(struct kiocb *, long) ki_complete
        20 |   void * private
        24 |   int ki_flags
        28 |   u16 ki_ioprio
        32 |   union kiocb::(anonymous at ../include/linux/fs.h:373:2) 
        32 |     struct wait_page_queue * ki_waitq
        32 |     ssize_t (*)(void *) dio_complete
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct kstat
         0 |   u32 result_mask
         4 |   umode_t mode
         8 |   unsigned int nlink
        12 |   uint32_t blksize
        16 |   u64 attributes
        24 |   u64 attributes_mask
        32 |   u64 ino
        40 |   dev_t dev
        44 |   dev_t rdev
        48 |   kuid_t uid
        48 |     uid_t val
        52 |   kgid_t gid
        52 |     gid_t val
        56 |   loff_t size
        64 |   struct timespec64 atime
        64 |     time64_t tv_sec
        72 |     long tv_nsec
        80 |   struct timespec64 mtime
        80 |     time64_t tv_sec
        88 |     long tv_nsec
        96 |   struct timespec64 ctime
        96 |     time64_t tv_sec
       104 |     long tv_nsec
       112 |   struct timespec64 btime
       112 |     time64_t tv_sec
       120 |     long tv_nsec
       128 |   u64 blocks
       136 |   u64 mnt_id
       144 |   u32 dio_mem_align
       148 |   u32 dio_offset_align
       152 |   u64 change_cookie
       160 |   u64 subvol
       168 |   u32 atomic_write_unit_min
       172 |   u32 atomic_write_unit_max
       176 |   u32 atomic_write_segments_max
           | [sizeof=184, align=8]

*** Dumping AST Record Layout
         0 | struct dir_context
         0 |   filldir_t actor
         8 |   loff_t pos
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct compat_ipc64_perm
         0 |   compat_key_t key
         4 |   __compat_uid32_t uid
         8 |   __compat_gid32_t gid
        12 |   __compat_uid32_t cuid
        16 |   __compat_gid32_t cgid
        20 |   compat_mode_t mode
        24 |   unsigned char[0] __pad1
        24 |   compat_ushort_t seq
        26 |   compat_ushort_t __pad2
        28 |   compat_ulong_t unused1
        32 |   compat_ulong_t unused2
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct old_timeval32
         0 |   old_time32_t tv_sec
         4 |   s32 tv_usec
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union compat_sigval
         0 |   compat_int_t sival_int
         0 |   compat_uptr_t sival_ptr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union compat_ifreq::(unnamed at ../include/linux/compat.h:363:2)
         0 |   char[16] ifrn_name
           | [sizeof=16, align=1]

*** Dumping AST Record Layout
         0 | struct compat_robust_list
         0 |   compat_uptr_t next
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct tracepoint
         0 |   const char * name
         4 |   struct static_key key
         4 |     atomic_t enabled
         4 |       int counter
         8 |   struct static_call_key * static_call_key
        12 |   void * static_call_tramp
        16 |   void * iterator
        20 |   void * probestub
        24 |   int (*)(void) regfunc
        28 |   void (*)(void) unregfunc
        32 |   struct tracepoint_func * funcs
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | pgd_t
         0 |   unsigned long pgd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | p4d_t
         0 |   pgd_t pgd
         0 |     unsigned long pgd
           | [sizeof=4, align=4]

In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:46:
In file included from ../include/rdma/ib_verbs.h:15:
In file included from ../include/linux/ethtool.h:18:
In file included from ../include/linux/if_ether.h:19:
In file included from ../include/linux/skbuff.h:17:
In file included from ../include/linux/bvec.h:10:
In file included from ../include/linux/highmem.h:12:
In file included from ../include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from ../include/asm-generic/hardirq.h:17:
In file included from ../include/linux/irq.h:20:
In file included from ../include/linux/io.h:14:
In file included from ../arch/hexagon/include/asm/io.h:328:
../include/asm-generic/io.h:548:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  548 |         val = __raw_readb(PCI_IOBASE + addr);
      |                           ~~~~~~~~~~ ^
../include/asm-generic/io.h:561:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  561 |         val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
      |                                                         ~~~~~~~~~~ ^
../include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
   37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
      |                                                   ^
In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:46:
In file included from ../include/rdma/ib_verbs.h:15:
In file included from ../include/linux/ethtool.h:18:
In file included from ../include/linux/if_ether.h:19:
In file included from ../include/linux/skbuff.h:17:
In file included from ../include/linux/bvec.h:10:
In file included from ../include/linux/highmem.h:12:
In file included from ../include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from ../include/asm-generic/hardirq.h:17:
In file included from ../include/linux/irq.h:20:
In file included from ../include/linux/io.h:14:
In file included from ../arch/hexagon/include/asm/io.h:328:
../include/asm-generic/io.h:574:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  574 |         val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
      |                                                         ~~~~~~~~~~ ^
../include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
   35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
      |                                                   ^
In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:46:
In file included from ../include/rdma/ib_verbs.h:15:
In file included from ../include/linux/ethtool.h:18:
In file included from ../include/linux/if_ether.h:19:
In file included from ../include/linux/skbuff.h:17:
In file included from ../include/linux/bvec.h:10:
In file included from ../include/linux/highmem.h:12:
In file included from ../include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from ../include/asm-generic/hardirq.h:17:
In file included from ../include/linux/irq.h:20:
In file included from ../include/linux/io.h:14:
In file included from ../arch/hexagon/include/asm/io.h:328:
../include/asm-generic/io.h:585:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  585 |         __raw_writeb(value, PCI_IOBASE + addr);
      |                             ~~~~~~~~~~ ^
../include/asm-generic/io.h:595:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  595 |         __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
      |                                                       ~~~~~~~~~~ ^
../include/asm-generic/io.h:605:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
  605 |         __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
      |                                                       ~~~~~~~~~~ ^

*** Dumping AST Record Layout
         0 | pud_t
         0 |   p4d_t p4d
         0 |     pgd_t pgd
         0 |       unsigned long pgd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | pte_t
         0 |   unsigned long pte
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | pmd_t
         0 |   pud_t pud
         0 |     p4d_t p4d
         0 |       pgd_t pgd
         0 |         unsigned long pgd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct range
         0 |   u64 start
         8 |   u64 end
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct dev_pagemap::(unnamed at ../include/linux/memremap.h:139:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct vmem_altmap
         0 |   unsigned long base_pfn
         4 |   const unsigned long end_pfn
         8 |   const unsigned long reserve
        12 |   unsigned long free
        16 |   unsigned long align
        20 |   unsigned long alloc
        24 |   bool inaccessible
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct vm_fault::(anonymous at ../include/linux/mm.h:534:8)
         0 |   struct vm_area_struct * vma
         4 |   gfp_t gfp_mask
         8 |   unsigned long pgoff
        12 |   unsigned long address
        16 |   unsigned long real_address
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct vm_event_state
         0 |   unsigned long[63] event
           | [sizeof=252, align=4]

*** Dumping AST Record Layout
         0 | struct zap_details
         0 |   struct folio * single_folio
         4 |   bool even_cows
         8 |   zap_flags_t zap_flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct vma_iterator
         0 |   struct ma_state mas
         0 |     struct maple_tree * tree
         4 |     unsigned long index
         8 |     unsigned long last
        12 |     struct maple_enode * node
        16 |     unsigned long min
        20 |     unsigned long max
        24 |     struct maple_alloc * alloc
        28 |     enum maple_status status
        32 |     unsigned char depth
        33 |     unsigned char offset
        34 |     unsigned char mas_flags
        35 |     unsigned char end
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct anon_vma_name
         0 |   struct kref kref
         0 |     struct refcount_struct refcount
         0 |       atomic_t refs
         0 |         int counter
         4 |   char[] name
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct static_key_false
         0 |   struct static_key key
         0 |     atomic_t enabled
         0 |       int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct static_key_true
         0 |   struct static_key key
         0 |     atomic_t enabled
         0 |       int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct irq_common_data
         0 |   unsigned int state_use_accessors
         4 |   void * handler_data
         8 |   struct msi_desc * msi_desc
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct irq_data
         0 |   u32 mask
         4 |   unsigned int irq
         8 |   irq_hw_number_t hwirq
        12 |   struct irq_common_data * common
        16 |   struct irq_chip * chip
        20 |   struct irq_domain * domain
        24 |   struct irq_data * parent_data
        28 |   void * chip_data
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct irq_desc
         0 |   struct irq_common_data irq_common_data
         0 |     unsigned int state_use_accessors
         4 |     void * handler_data
         8 |     struct msi_desc * msi_desc
        12 |   struct irq_data irq_data
        12 |     u32 mask
        16 |     unsigned int irq
        20 |     irq_hw_number_t hwirq
        24 |     struct irq_common_data * common
        28 |     struct irq_chip * chip
        32 |     struct irq_domain * domain
        36 |     struct irq_data * parent_data
        40 |     void * chip_data
        44 |   struct irqstat * kstat_irqs
        48 |   irq_flow_handler_t handle_irq
        52 |   struct irqaction * action
        56 |   unsigned int status_use_accessors
        60 |   unsigned int core_internal_state__do_not_mess_with_it
        64 |   unsigned int depth
        68 |   unsigned int wake_depth
        72 |   unsigned int tot_count
        76 |   unsigned int irq_count
        80 |   unsigned long last_unhandled
        84 |   unsigned int irqs_unhandled
        88 |   atomic_t threads_handled
        88 |     int counter
        92 |   int threads_handled_last
        96 |   struct raw_spinlock lock
        96 |     arch_spinlock_t raw_lock
        96 |       volatile unsigned int slock
       100 |     unsigned int magic
       104 |     unsigned int owner_cpu
       108 |     void * owner
       112 |     struct lockdep_map dep_map
       112 |       struct lock_class_key * key
       116 |       struct lock_class *[2] class_cache
       124 |       const char * name
       128 |       u8 wait_type_outer
       129 |       u8 wait_type_inner
       130 |       u8 lock_type
       132 |       int cpu
       136 |       unsigned long ip
       140 |   struct cpumask * percpu_enabled
       144 |   const struct cpumask * percpu_affinity
       148 |   unsigned long threads_oneshot
       152 |   atomic_t threads_active
       152 |     int counter
       156 |   struct wait_queue_head wait_for_threads
       156 |     struct spinlock lock
       156 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       156 |         struct raw_spinlock rlock
       156 |           arch_spinlock_t raw_lock
       156 |             volatile unsigned int slock
       160 |           unsigned int magic
       164 |           unsigned int owner_cpu
       168 |           void * owner
       172 |           struct lockdep_map dep_map
       172 |             struct lock_class_key * key
       176 |             struct lock_class *[2] class_cache
       184 |             const char * name
       188 |             u8 wait_type_outer
       189 |             u8 wait_type_inner
       190 |             u8 lock_type
       192 |             int cpu
       196 |             unsigned long ip
       156 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       156 |           u8[16] __padding
       172 |           struct lockdep_map dep_map
       172 |             struct lock_class_key * key
       176 |             struct lock_class *[2] class_cache
       184 |             const char * name
       188 |             u8 wait_type_outer
       189 |             u8 wait_type_inner
       190 |             u8 lock_type
       192 |             int cpu
       196 |             unsigned long ip
       200 |     struct list_head head
       200 |       struct list_head * next
       204 |       struct list_head * prev
       208 |   struct proc_dir_entry * dir
       212 |   struct mutex request_mutex
       212 |     atomic_t owner
       212 |       int counter
       216 |     struct raw_spinlock wait_lock
       216 |       arch_spinlock_t raw_lock
       216 |         volatile unsigned int slock
       220 |       unsigned int magic
       224 |       unsigned int owner_cpu
       228 |       void * owner
       232 |       struct lockdep_map dep_map
       232 |         struct lock_class_key * key
       236 |         struct lock_class *[2] class_cache
       244 |         const char * name
       248 |         u8 wait_type_outer
       249 |         u8 wait_type_inner
       250 |         u8 lock_type
       252 |         int cpu
       256 |         unsigned long ip
       260 |     struct list_head wait_list
       260 |       struct list_head * next
       264 |       struct list_head * prev
       268 |     void * magic
       272 |     struct lockdep_map dep_map
       272 |       struct lock_class_key * key
       276 |       struct lock_class *[2] class_cache
       284 |       const char * name
       288 |       u8 wait_type_outer
       289 |       u8 wait_type_inner
       290 |       u8 lock_type
       292 |       int cpu
       296 |       unsigned long ip
       300 |   int parent_irq
       304 |   struct module * owner
       308 |   const char * name
           | [sizeof=312, align=4]

*** Dumping AST Record Layout
         0 | struct irq_chip
         0 |   const char * name
         4 |   unsigned int (*)(struct irq_data *) irq_startup
         8 |   void (*)(struct irq_data *) irq_shutdown
        12 |   void (*)(struct irq_data *) irq_enable
        16 |   void (*)(struct irq_data *) irq_disable
        20 |   void (*)(struct irq_data *) irq_ack
        24 |   void (*)(struct irq_data *) irq_mask
        28 |   void (*)(struct irq_data *) irq_mask_ack
        32 |   void (*)(struct irq_data *) irq_unmask
        36 |   void (*)(struct irq_data *) irq_eoi
        40 |   int (*)(struct irq_data *, const struct cpumask *, bool) irq_set_affinity
        44 |   int (*)(struct irq_data *) irq_retrigger
        48 |   int (*)(struct irq_data *, unsigned int) irq_set_type
        52 |   int (*)(struct irq_data *, unsigned int) irq_set_wake
        56 |   void (*)(struct irq_data *) irq_bus_lock
        60 |   void (*)(struct irq_data *) irq_bus_sync_unlock
        64 |   void (*)(struct irq_data *) irq_suspend
        68 |   void (*)(struct irq_data *) irq_resume
        72 |   void (*)(struct irq_data *) irq_pm_shutdown
        76 |   void (*)(struct irq_data *) irq_calc_mask
        80 |   void (*)(struct irq_data *, struct seq_file *) irq_print_chip
        84 |   int (*)(struct irq_data *) irq_request_resources
        88 |   void (*)(struct irq_data *) irq_release_resources
        92 |   void (*)(struct irq_data *, struct msi_msg *) irq_compose_msi_msg
        96 |   void (*)(struct irq_data *, struct msi_msg *) irq_write_msi_msg
       100 |   int (*)(struct irq_data *, enum irqchip_irq_state, bool *) irq_get_irqchip_state
       104 |   int (*)(struct irq_data *, enum irqchip_irq_state, bool) irq_set_irqchip_state
       108 |   int (*)(struct irq_data *, void *) irq_set_vcpu_affinity
       112 |   void (*)(struct irq_data *, unsigned int) ipi_send_single
       116 |   void (*)(struct irq_data *, const struct cpumask *) ipi_send_mask
       120 |   int (*)(struct irq_data *) irq_nmi_setup
       124 |   void (*)(struct irq_data *) irq_nmi_teardown
       128 |   unsigned long flags
           | [sizeof=132, align=4]

*** Dumping AST Record Layout
         0 | struct irq_chip_regs
         0 |   unsigned long enable
         4 |   unsigned long disable
         8 |   unsigned long mask
        12 |   unsigned long ack
        16 |   unsigned long eoi
        20 |   unsigned long type
        24 |   unsigned long polarity
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct irq_chip_type
         0 |   struct irq_chip chip
         0 |     const char * name
         4 |     unsigned int (*)(struct irq_data *) irq_startup
         8 |     void (*)(struct irq_data *) irq_shutdown
        12 |     void (*)(struct irq_data *) irq_enable
        16 |     void (*)(struct irq_data *) irq_disable
        20 |     void (*)(struct irq_data *) irq_ack
        24 |     void (*)(struct irq_data *) irq_mask
        28 |     void (*)(struct irq_data *) irq_mask_ack
        32 |     void (*)(struct irq_data *) irq_unmask
        36 |     void (*)(struct irq_data *) irq_eoi
        40 |     int (*)(struct irq_data *, const struct cpumask *, bool) irq_set_affinity
        44 |     int (*)(struct irq_data *) irq_retrigger
        48 |     int (*)(struct irq_data *, unsigned int) irq_set_type
        52 |     int (*)(struct irq_data *, unsigned int) irq_set_wake
        56 |     void (*)(struct irq_data *) irq_bus_lock
        60 |     void (*)(struct irq_data *) irq_bus_sync_unlock
        64 |     void (*)(struct irq_data *) irq_suspend
        68 |     void (*)(struct irq_data *) irq_resume
        72 |     void (*)(struct irq_data *) irq_pm_shutdown
        76 |     void (*)(struct irq_data *) irq_calc_mask
        80 |     void (*)(struct irq_data *, struct seq_file *) irq_print_chip
        84 |     int (*)(struct irq_data *) irq_request_resources
        88 |     void (*)(struct irq_data *) irq_release_resources
        92 |     void (*)(struct irq_data *, struct msi_msg *) irq_compose_msi_msg
        96 |     void (*)(struct irq_data *, struct msi_msg *) irq_write_msi_msg
       100 |     int (*)(struct irq_data *, enum irqchip_irq_state, bool *) irq_get_irqchip_state
       104 |     int (*)(struct irq_data *, enum irqchip_irq_state, bool) irq_set_irqchip_state
       108 |     int (*)(struct irq_data *, void *) irq_set_vcpu_affinity
       112 |     void (*)(struct irq_data *, unsigned int) ipi_send_single
       116 |     void (*)(struct irq_data *, const struct cpumask *) ipi_send_mask
       120 |     int (*)(struct irq_data *) irq_nmi_setup
       124 |     void (*)(struct irq_data *) irq_nmi_teardown
       128 |     unsigned long flags
       132 |   struct irq_chip_regs regs
       132 |     unsigned long enable
       136 |     unsigned long disable
       140 |     unsigned long mask
       144 |     unsigned long ack
       148 |     unsigned long eoi
       152 |     unsigned long type
       156 |     unsigned long polarity
       160 |   irq_flow_handler_t handler
       164 |   u32 type
       168 |   u32 mask_cache_priv
       172 |   u32 * mask_cache
           | [sizeof=176, align=4]

*** Dumping AST Record Layout
         0 | struct irq_chip_generic
         0 |   struct raw_spinlock lock
         0 |     arch_spinlock_t raw_lock
         0 |       volatile unsigned int slock
         4 |     unsigned int magic
         8 |     unsigned int owner_cpu
        12 |     void * owner
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   void * reg_base
        48 |   u32 (*)(void *) reg_readl
        52 |   void (*)(u32, void *) reg_writel
        56 |   void (*)(struct irq_chip_generic *) suspend
        60 |   void (*)(struct irq_chip_generic *) resume
        64 |   unsigned int irq_base
        68 |   unsigned int irq_cnt
        72 |   u32 mask_cache
        76 |   u32 type_cache
        80 |   u32 polarity_cache
        84 |   u32 wake_enabled
        88 |   u32 wake_active
        92 |   unsigned int num_ct
        96 |   void * private
       100 |   unsigned long installed
       104 |   unsigned long unused
       108 |   struct irq_domain * domain
       112 |   struct list_head list
       112 |     struct list_head * next
       116 |     struct list_head * prev
       120 |   struct irq_chip_type[] chip_types
           | [sizeof=120, align=4]

*** Dumping AST Record Layout
         0 | struct bio_vec
         0 |   struct page * bv_page
         4 |   unsigned int bv_len
         8 |   unsigned int bv_offset
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct sg_table
         0 |   struct scatterlist * sgl
         4 |   unsigned int nents
         8 |   unsigned int orig_nents
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct scatterlist
         0 |   unsigned long page_link
         4 |   unsigned int offset
         8 |   unsigned int length
        12 |   dma_addr_t dma_address
        16 |   unsigned int dma_length
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct sg_page_iter
         0 |   struct scatterlist * sg
         4 |   unsigned int sg_pgoffset
         8 |   unsigned int __nents
        12 |   int __pg_advance
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2)
         0 |   __u8[16] u6_addr8
         0 |   __be16[8] u6_addr16
         0 |   __be32[4] u6_addr32
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct in6_addr
         0 |   union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |     __u8[16] u6_addr8
         0 |     __be16[8] u6_addr16
         0 |     __be32[4] u6_addr32
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | siphash_key_t
         0 |   u64[2] key
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | hsiphash_key_t
         0 |   unsigned long[2] key
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct tc_ratespec
         0 |   unsigned char cell_log
         1 |   __u8 linklayer
         2 |   unsigned short overhead
         4 |   short cell_align
         6 |   unsigned short mpu
         8 |   __u32 rate
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct tc_sfq_qopt
         0 |   unsigned int quantum
         4 |   int perturb_period
         8 |   __u32 limit
        12 |   unsigned int divisor
        16 |   unsigned int flows
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct tc_fq_codel_qd_stats
         0 |   __u32 maxpacket
         4 |   __u32 drop_overlimit
         8 |   __u32 ecn_mark
        12 |   __u32 new_flow_count
        16 |   __u32 new_flows_len
        20 |   __u32 old_flows_len
        24 |   __u32 ce_mark
        28 |   __u32 memory_usage
        32 |   __u32 drop_overmemory
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct tc_u32_key
         0 |   __be32 mask
         4 |   __be32 val
         8 |   int off
        12 |   int offmask
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:70:3)
    0:0-11 |   u16 vlan_id
     1:4-4 |   u16 vlan_dei
     1:5-7 |   u16 vlan_priority
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | union flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:69:2)
         0 |   struct flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:70:3) 
    0:0-11 |     u16 vlan_id
     1:4-4 |     u16 vlan_dei
     1:5-7 |     u16 vlan_priority
         0 |   __be16 vlan_tci
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | struct flow_dissector_mpls_lse
     0:0-7 |   u32 mpls_ttl
     1:0-0 |   u32 mpls_bos
     1:1-3 |   u32 mpls_tc
    1:4-23 |   u32 mpls_label
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_ipv4_addrs
         0 |   __be32 src
         4 |   __be32 dst
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_ipv6_addrs
         0 |   struct in6_addr src
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
        16 |   struct in6_addr dst
        16 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |       __u8[16] u6_addr8
        16 |       __be16[8] u6_addr16
        16 |       __be32[4] u6_addr32
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_tipc
         0 |   __be32 key
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union flow_dissector_key_addrs::(anonymous at ../include/net/flow_dissector.h:157:2)
         0 |   struct flow_dissector_key_ipv4_addrs v4addrs
         0 |     __be32 src
         4 |     __be32 dst
         0 |   struct flow_dissector_key_ipv6_addrs v6addrs
         0 |     struct in6_addr src
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |     struct in6_addr dst
        16 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |         __u8[16] u6_addr8
        16 |         __be16[8] u6_addr16
        16 |         __be32[4] u6_addr32
         0 |   struct flow_dissector_key_tipc tipckey
         0 |     __be32 key
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3)
         0 |   __be16 src
         2 |   __be16 dst
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2)
         0 |   __be32 ports
         0 |   struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         0 |     __be16 src
         2 |     __be16 dst
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_ports
         0 |   union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         0 |     __be32 ports
         0 |     struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         0 |       __be16 src
         2 |       __be16 dst
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_ports_range::(anonymous at ../include/net/flow_dissector.h:205:3)
         0 |   struct flow_dissector_key_ports tp_min
         0 |     union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         0 |       __be32 ports
         0 |       struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         0 |         __be16 src
         2 |         __be16 dst
         4 |   struct flow_dissector_key_ports tp_max
         4 |     union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         4 |       __be32 ports
         4 |       struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         4 |         __be16 src
         6 |         __be16 dst
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union flow_dissector_key_ports_range::(anonymous at ../include/net/flow_dissector.h:203:2)
         0 |   struct flow_dissector_key_ports tp
         0 |     union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         0 |       __be32 ports
         0 |       struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         0 |         __be16 src
         2 |         __be16 dst
         0 |   struct flow_dissector_key_ports_range::(anonymous at ../include/net/flow_dissector.h:205:3) 
         0 |     struct flow_dissector_key_ports tp_min
         0 |       union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         0 |         __be32 ports
         0 |         struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         0 |           __be16 src
         2 |           __be16 dst
         4 |     struct flow_dissector_key_ports tp_max
         4 |       union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
         4 |         __be32 ports
         4 |         struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
         4 |           __be16 src
         6 |           __be16 dst
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_icmp::(anonymous at ../include/net/flow_dissector.h:219:2)
         0 |   u8 type
         1 |   u8 code
           | [sizeof=2, align=1]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_control
         0 |   u16 thoff
         2 |   u16 addr_type
         4 |   u32 flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_basic
         0 |   __be16 n_proto
         2 |   u8 ip_proto
         3 |   u8 padding
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct tpacket_stats
         0 |   unsigned int tp_packets
         4 |   unsigned int tp_drops
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct tpacket_hdr_variant1
         0 |   __u32 tp_rxhash
         4 |   __u32 tp_vlan_tci
         8 |   __u16 tp_vlan_tpid
        10 |   __u16 tp_padding
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union tpacket_bd_ts::(anonymous at ../include/uapi/linux/if_packet.h:187:2)
         0 |   unsigned int ts_usec
         0 |   unsigned int ts_nsec
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct tpacket_bd_ts
         0 |   unsigned int ts_sec
         4 |   union tpacket_bd_ts::(anonymous at ../include/uapi/linux/if_packet.h:187:2) 
         4 |     unsigned int ts_usec
         4 |     unsigned int ts_nsec
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct tpacket_hdr_v1
         0 |   __u32 block_status
         4 |   __u32 num_pkts
         8 |   __u32 offset_to_first_pkt
        12 |   __u32 blk_len
        16 |   __u64 seq_num
        24 |   struct tpacket_bd_ts ts_first_pkt
        24 |     unsigned int ts_sec
        28 |     union tpacket_bd_ts::(anonymous at ../include/uapi/linux/if_packet.h:187:2) 
        28 |       unsigned int ts_usec
        28 |       unsigned int ts_nsec
        32 |   struct tpacket_bd_ts ts_last_pkt
        32 |     unsigned int ts_sec
        36 |     union tpacket_bd_ts::(anonymous at ../include/uapi/linux/if_packet.h:187:2) 
        36 |       unsigned int ts_usec
        36 |       unsigned int ts_nsec
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct tpacket_req
         0 |   unsigned int tp_block_size
         4 |   unsigned int tp_block_nr
         8 |   unsigned int tp_frame_size
        12 |   unsigned int tp_frame_nr
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct flowi_uli::(unnamed at ../include/net/flow.h:48:2)
         0 |   __be16 dport
         2 |   __be16 sport
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct flowi_tunnel
         0 |   __be64 tun_id
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct flowi_common
         0 |   int flowic_oif
         4 |   int flowic_iif
         8 |   int flowic_l3mdev
        12 |   __u32 flowic_mark
        16 |   __u8 flowic_tos
        17 |   __u8 flowic_scope
        18 |   __u8 flowic_proto
        19 |   __u8 flowic_flags
        20 |   __u32 flowic_secid
        24 |   kuid_t flowic_uid
        24 |     uid_t val
        28 |   __u32 flowic_multipath_hash
        32 |   struct flowi_tunnel flowic_tun_key
        32 |     __be64 tun_id
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct flowi_uli::(unnamed at ../include/net/flow.h:53:2)
         0 |   __u8 type
         1 |   __u8 code
           | [sizeof=2, align=1]

*** Dumping AST Record Layout
         0 | struct flowi_uli::(unnamed at ../include/net/flow.h:60:2)
         0 |   __u8 type
           | [sizeof=1, align=1]

*** Dumping AST Record Layout
         0 | union flowi_uli
         0 |   struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
         0 |     __be16 dport
         2 |     __be16 sport
         0 |   struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
         0 |     __u8 type
         1 |     __u8 code
         0 |   __be32 gre_key
         0 |   struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
         0 |     __u8 type
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flowi4
         0 |   struct flowi_common __fl_common
         0 |     int flowic_oif
         4 |     int flowic_iif
         8 |     int flowic_l3mdev
        12 |     __u32 flowic_mark
        16 |     __u8 flowic_tos
        17 |     __u8 flowic_scope
        18 |     __u8 flowic_proto
        19 |     __u8 flowic_flags
        20 |     __u32 flowic_secid
        24 |     kuid_t flowic_uid
        24 |       uid_t val
        28 |     __u32 flowic_multipath_hash
        32 |     struct flowi_tunnel flowic_tun_key
        32 |       __be64 tun_id
        40 |   __be32 saddr
        44 |   __be32 daddr
        48 |   union flowi_uli uli
        48 |     struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        48 |       __be16 dport
        50 |       __be16 sport
        48 |     struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        48 |       __u8 type
        49 |       __u8 code
        48 |     __be32 gre_key
        48 |     struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        48 |       __u8 type
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct flowi6
         0 |   struct flowi_common __fl_common
         0 |     int flowic_oif
         4 |     int flowic_iif
         8 |     int flowic_l3mdev
        12 |     __u32 flowic_mark
        16 |     __u8 flowic_tos
        17 |     __u8 flowic_scope
        18 |     __u8 flowic_proto
        19 |     __u8 flowic_flags
        20 |     __u32 flowic_secid
        24 |     kuid_t flowic_uid
        24 |       uid_t val
        28 |     __u32 flowic_multipath_hash
        32 |     struct flowi_tunnel flowic_tun_key
        32 |       __be64 tun_id
        40 |   struct in6_addr daddr
        40 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |       __u8[16] u6_addr8
        40 |       __be16[8] u6_addr16
        40 |       __be32[4] u6_addr32
        56 |   struct in6_addr saddr
        56 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |       __u8[16] u6_addr8
        56 |       __be16[8] u6_addr16
        56 |       __be32[4] u6_addr32
        72 |   __be32 flowlabel
        76 |   union flowi_uli uli
        76 |     struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        76 |       __be16 dport
        78 |       __be16 sport
        76 |     struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        76 |       __u8 type
        77 |       __u8 code
        76 |     __be32 gre_key
        76 |     struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        76 |       __u8 type
        80 |   __u32 mp_hash
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | union flowi::(unnamed at ../include/net/flow.h:155:2)
         0 |   struct flowi_common __fl_common
         0 |     int flowic_oif
         4 |     int flowic_iif
         8 |     int flowic_l3mdev
        12 |     __u32 flowic_mark
        16 |     __u8 flowic_tos
        17 |     __u8 flowic_scope
        18 |     __u8 flowic_proto
        19 |     __u8 flowic_flags
        20 |     __u32 flowic_secid
        24 |     kuid_t flowic_uid
        24 |       uid_t val
        28 |     __u32 flowic_multipath_hash
        32 |     struct flowi_tunnel flowic_tun_key
        32 |       __be64 tun_id
         0 |   struct flowi4 ip4
         0 |     struct flowi_common __fl_common
         0 |       int flowic_oif
         4 |       int flowic_iif
         8 |       int flowic_l3mdev
        12 |       __u32 flowic_mark
        16 |       __u8 flowic_tos
        17 |       __u8 flowic_scope
        18 |       __u8 flowic_proto
        19 |       __u8 flowic_flags
        20 |       __u32 flowic_secid
        24 |       kuid_t flowic_uid
        24 |         uid_t val
        28 |       __u32 flowic_multipath_hash
        32 |       struct flowi_tunnel flowic_tun_key
        32 |         __be64 tun_id
        40 |     __be32 saddr
        44 |     __be32 daddr
        48 |     union flowi_uli uli
        48 |       struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        48 |         __be16 dport
        50 |         __be16 sport
        48 |       struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        48 |         __u8 type
        49 |         __u8 code
        48 |       __be32 gre_key
        48 |       struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        48 |         __u8 type
         0 |   struct flowi6 ip6
         0 |     struct flowi_common __fl_common
         0 |       int flowic_oif
         4 |       int flowic_iif
         8 |       int flowic_l3mdev
        12 |       __u32 flowic_mark
        16 |       __u8 flowic_tos
        17 |       __u8 flowic_scope
        18 |       __u8 flowic_proto
        19 |       __u8 flowic_flags
        20 |       __u32 flowic_secid
        24 |       kuid_t flowic_uid
        24 |         uid_t val
        28 |       __u32 flowic_multipath_hash
        32 |       struct flowi_tunnel flowic_tun_key
        32 |         __be64 tun_id
        40 |     struct in6_addr daddr
        40 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |         __u8[16] u6_addr8
        40 |         __be16[8] u6_addr16
        40 |         __be32[4] u6_addr32
        56 |     struct in6_addr saddr
        56 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |         __u8[16] u6_addr8
        56 |         __be16[8] u6_addr16
        56 |         __be32[4] u6_addr32
        72 |     __be32 flowlabel
        76 |     union flowi_uli uli
        76 |       struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        76 |         __be16 dport
        78 |         __be16 sport
        76 |       struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        76 |         __u8 type
        77 |         __u8 code
        76 |       __be32 gre_key
        76 |       struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        76 |         __u8 type
        80 |     __u32 mp_hash
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct flowi
         0 |   union flowi::(unnamed at ../include/net/flow.h:155:2) u
         0 |     struct flowi_common __fl_common
         0 |       int flowic_oif
         4 |       int flowic_iif
         8 |       int flowic_l3mdev
        12 |       __u32 flowic_mark
        16 |       __u8 flowic_tos
        17 |       __u8 flowic_scope
        18 |       __u8 flowic_proto
        19 |       __u8 flowic_flags
        20 |       __u32 flowic_secid
        24 |       kuid_t flowic_uid
        24 |         uid_t val
        28 |       __u32 flowic_multipath_hash
        32 |       struct flowi_tunnel flowic_tun_key
        32 |         __be64 tun_id
         0 |     struct flowi4 ip4
         0 |       struct flowi_common __fl_common
         0 |         int flowic_oif
         4 |         int flowic_iif
         8 |         int flowic_l3mdev
        12 |         __u32 flowic_mark
        16 |         __u8 flowic_tos
        17 |         __u8 flowic_scope
        18 |         __u8 flowic_proto
        19 |         __u8 flowic_flags
        20 |         __u32 flowic_secid
        24 |         kuid_t flowic_uid
        24 |           uid_t val
        28 |         __u32 flowic_multipath_hash
        32 |         struct flowi_tunnel flowic_tun_key
        32 |           __be64 tun_id
        40 |       __be32 saddr
        44 |       __be32 daddr
        48 |       union flowi_uli uli
        48 |         struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        48 |           __be16 dport
        50 |           __be16 sport
        48 |         struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        48 |           __u8 type
        49 |           __u8 code
        48 |         __be32 gre_key
        48 |         struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        48 |           __u8 type
         0 |     struct flowi6 ip6
         0 |       struct flowi_common __fl_common
         0 |         int flowic_oif
         4 |         int flowic_iif
         8 |         int flowic_l3mdev
        12 |         __u32 flowic_mark
        16 |         __u8 flowic_tos
        17 |         __u8 flowic_scope
        18 |         __u8 flowic_proto
        19 |         __u8 flowic_flags
        20 |         __u32 flowic_secid
        24 |         kuid_t flowic_uid
        24 |           uid_t val
        28 |         __u32 flowic_multipath_hash
        32 |         struct flowi_tunnel flowic_tun_key
        32 |           __be64 tun_id
        40 |       struct in6_addr daddr
        40 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |           __u8[16] u6_addr8
        40 |           __be16[8] u6_addr16
        40 |           __be32[4] u6_addr32
        56 |       struct in6_addr saddr
        56 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |           __u8[16] u6_addr8
        56 |           __be16[8] u6_addr16
        56 |           __be32[4] u6_addr32
        72 |       __be32 flowlabel
        76 |       union flowi_uli uli
        76 |         struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        76 |           __be16 dport
        78 |           __be16 sport
        76 |         struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        76 |           __u8 type
        77 |           __u8 code
        76 |         __be32 gre_key
        76 |         struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        76 |           __u8 type
        80 |       __u32 mp_hash
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | union tc_skb_ext::(anonymous at ../include/linux/skbuff.h:323:2)
         0 |   u64 act_miss_cookie
         0 |   __u32 chain
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2)
         0 |   struct sk_buff * next
         4 |   struct sk_buff * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff_list
         0 |   struct sk_buff * next
         4 |   struct sk_buff * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2)
         0 |   struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
         0 |     struct sk_buff * next
         4 |     struct sk_buff * prev
         0 |   struct sk_buff_list list
         0 |     struct sk_buff * next
         4 |     struct sk_buff * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union skb_shared_hwtstamps::(anonymous at ../include/linux/skbuff.h:463:2)
         0 |   ktime_t hwtstamp
         0 |   void * netdev_data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ubuf_info_msgzc::(anonymous at ../include/linux/skbuff.h:553:3)
         0 |   unsigned long desc
         4 |   void * ctx
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ubuf_info
         0 |   const struct ubuf_info_ops * ops
         4 |   struct refcount_struct refcnt
         4 |     atomic_t refs
         4 |       int counter
         8 |   u8 flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct skb_shared_hwtstamps
         0 |   union skb_shared_hwtstamps::(anonymous at ../include/linux/skbuff.h:463:2) 
         0 |     ktime_t hwtstamp
         0 |     void * netdev_data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct skb_frag
         0 |   netmem_ref netmem
         4 |   unsigned int len
         8 |   unsigned int offset
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4)
         0 |   struct net_device * dev
         0 |   unsigned long dev_scratch
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:866:3)
         0 |   struct sk_buff * next
         4 |   struct sk_buff * prev
         8 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4) 
         8 |     struct net_device * dev
         8 |     unsigned long dev_scratch
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:900:3)
         0 |   unsigned long _skb_refdst
         4 |   void (*)(struct sk_buff *) destructor
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __u16 csum_start
         2 |   __u16 csum_offset
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __wsum csum
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         0 |     __u16 csum_start
         2 |     __u16 csum_offset
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __be16 vlan_proto
         2 |   __u16 vlan_tci
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   u32 vlan_all
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         0 |     __be16 vlan_proto
         2 |     __u16 vlan_tci
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   unsigned int napi_id
         0 |   unsigned int sender_cpu
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __u32 mark
         0 |   __u32 reserved_tailroom
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __be16 inner_protocol
         0 |   __u8 inner_ipproto
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __u8[0] __pkt_type_offset
     0:0-2 |   __u8 pkt_type
     0:3-3 |   __u8 ignore_df
     0:4-4 |   __u8 dst_pending_confirm
     0:5-6 |   __u8 ip_summed
     0:7-7 |   __u8 ooo_okay
         1 |   __u8[0] __mono_tc_offset
     1:0-1 |   __u8 tstamp_type
     1:2-2 |   __u8 tc_at_ingress
     1:3-3 |   __u8 tc_skip_classify
     1:4-4 |   __u8 remcsum_offload
     1:5-5 |   __u8 csum_complete_sw
     1:6-7 |   __u8 csum_level
     2:0-0 |   __u8 inner_protocol_type
     2:1-1 |   __u8 l4_hash
     2:2-2 |   __u8 sw_hash
     2:3-3 |   __u8 wifi_acked_valid
     2:4-4 |   __u8 wifi_acked
     2:5-5 |   __u8 no_fcs
     2:6-6 |   __u8 encapsulation
     2:7-7 |   __u8 encap_hdr_csum
     3:0-0 |   __u8 csum_valid
     3:1-2 |   __u8 ndisc_nodetype
     3:3-3 |   __u8 redirected
     3:4-4 |   __u8 decrypted
     3:5-5 |   __u8 slow_gro
     3:6-6 |   __u8 csum_not_inet
         4 |   __u16 tc_index
         6 |   u16 alloc_cpu
         8 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |     __wsum csum
         8 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |       __u16 csum_start
        10 |       __u16 csum_offset
        12 |   __u32 priority
        16 |   int skb_iif
        20 |   __u32 hash
        24 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |     u32 vlan_all
        24 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |       __be16 vlan_proto
        26 |       __u16 vlan_tci
        28 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        28 |     unsigned int napi_id
        28 |     unsigned int sender_cpu
        32 |   __u32 secmark
        36 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        36 |     __u32 mark
        36 |     __u32 reserved_tailroom
        40 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        40 |     __be16 inner_protocol
        40 |     __u8 inner_ipproto
        42 |   __u16 inner_transport_header
        44 |   __u16 inner_network_header
        46 |   __u16 inner_mac_header
        48 |   __be16 protocol
        50 |   __u16 transport_header
        52 |   __u16 network_header
        54 |   __u16 mac_header
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:865:2)
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:866:3) 
         0 |     struct sk_buff * next
         4 |     struct sk_buff * prev
         8 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4) 
         8 |       struct net_device * dev
         8 |       unsigned long dev_scratch
         0 |   struct rb_node rbnode
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   struct llist_node ll_node
         0 |     struct llist_node * next
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:887:2)
         0 |   ktime_t tstamp
         0 |   u64 skb_mstamp_ns
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:899:2)
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:900:3) 
         0 |     unsigned long _skb_refdst
         4 |     void (*)(struct sk_buff *) destructor
         0 |   struct list_head tcp_tsorted_anchor
         0 |     struct list_head * next
         4 |     struct list_head * prev
         0 |   unsigned long _sk_redir
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __u16 csum_start
         2 |   __u16 csum_offset
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __wsum csum
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         0 |     __u16 csum_start
         2 |     __u16 csum_offset
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __be16 vlan_proto
         2 |   __u16 vlan_tci
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   u32 vlan_all
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         0 |     __be16 vlan_proto
         2 |     __u16 vlan_tci
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   unsigned int napi_id
         0 |   unsigned int sender_cpu
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __u32 mark
         0 |   __u32 reserved_tailroom
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   __be16 inner_protocol
         0 |   __u8 inner_ipproto
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | struct sk_buff::(unnamed at ../include/linux/skbuff.h:948:2)
         0 |   __u8[0] __pkt_type_offset
     0:0-2 |   __u8 pkt_type
     0:3-3 |   __u8 ignore_df
     0:4-4 |   __u8 dst_pending_confirm
     0:5-6 |   __u8 ip_summed
     0:7-7 |   __u8 ooo_okay
         1 |   __u8[0] __mono_tc_offset
     1:0-1 |   __u8 tstamp_type
     1:2-2 |   __u8 tc_at_ingress
     1:3-3 |   __u8 tc_skip_classify
     1:4-4 |   __u8 remcsum_offload
     1:5-5 |   __u8 csum_complete_sw
     1:6-7 |   __u8 csum_level
     2:0-0 |   __u8 inner_protocol_type
     2:1-1 |   __u8 l4_hash
     2:2-2 |   __u8 sw_hash
     2:3-3 |   __u8 wifi_acked_valid
     2:4-4 |   __u8 wifi_acked
     2:5-5 |   __u8 no_fcs
     2:6-6 |   __u8 encapsulation
     2:7-7 |   __u8 encap_hdr_csum
     3:0-0 |   __u8 csum_valid
     3:1-2 |   __u8 ndisc_nodetype
     3:3-3 |   __u8 redirected
     3:4-4 |   __u8 decrypted
     3:5-5 |   __u8 slow_gro
     3:6-6 |   __u8 csum_not_inet
         4 |   __u16 tc_index
         6 |   u16 alloc_cpu
         8 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |     __wsum csum
         8 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |       __u16 csum_start
        10 |       __u16 csum_offset
        12 |   __u32 priority
        16 |   int skb_iif
        20 |   __u32 hash
        24 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |     u32 vlan_all
        24 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |       __be16 vlan_proto
        26 |       __u16 vlan_tci
        28 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        28 |     unsigned int napi_id
        28 |     unsigned int sender_cpu
        32 |   __u32 secmark
        36 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        36 |     __u32 mark
        36 |     __u32 reserved_tailroom
        40 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        40 |     __be16 inner_protocol
        40 |     __u8 inner_ipproto
        42 |   __u16 inner_transport_header
        44 |   __u16 inner_network_header
        46 |   __u16 inner_mac_header
        48 |   __be16 protocol
        50 |   __u16 transport_header
        52 |   __u16 network_header
        54 |   __u16 mac_header
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2)
         0 |   struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         0 |     __u8[0] __pkt_type_offset
     0:0-2 |     __u8 pkt_type
     0:3-3 |     __u8 ignore_df
     0:4-4 |     __u8 dst_pending_confirm
     0:5-6 |     __u8 ip_summed
     0:7-7 |     __u8 ooo_okay
         1 |     __u8[0] __mono_tc_offset
     1:0-1 |     __u8 tstamp_type
     1:2-2 |     __u8 tc_at_ingress
     1:3-3 |     __u8 tc_skip_classify
     1:4-4 |     __u8 remcsum_offload
     1:5-5 |     __u8 csum_complete_sw
     1:6-7 |     __u8 csum_level
     2:0-0 |     __u8 inner_protocol_type
     2:1-1 |     __u8 l4_hash
     2:2-2 |     __u8 sw_hash
     2:3-3 |     __u8 wifi_acked_valid
     2:4-4 |     __u8 wifi_acked
     2:5-5 |     __u8 no_fcs
     2:6-6 |     __u8 encapsulation
     2:7-7 |     __u8 encap_hdr_csum
     3:0-0 |     __u8 csum_valid
     3:1-2 |     __u8 ndisc_nodetype
     3:3-3 |     __u8 redirected
     3:4-4 |     __u8 decrypted
     3:5-5 |     __u8 slow_gro
     3:6-6 |     __u8 csum_not_inet
         4 |     __u16 tc_index
         6 |     u16 alloc_cpu
         8 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |       __wsum csum
         8 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |         __u16 csum_start
        10 |         __u16 csum_offset
        12 |     __u32 priority
        16 |     int skb_iif
        20 |     __u32 hash
        24 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |       u32 vlan_all
        24 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |         __be16 vlan_proto
        26 |         __u16 vlan_tci
        28 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        28 |       unsigned int napi_id
        28 |       unsigned int sender_cpu
        32 |     __u32 secmark
        36 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        36 |       __u32 mark
        36 |       __u32 reserved_tailroom
        40 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        40 |       __be16 inner_protocol
        40 |       __u8 inner_ipproto
        42 |     __u16 inner_transport_header
        44 |     __u16 inner_network_header
        46 |     __u16 inner_mac_header
        48 |     __be16 protocol
        50 |     __u16 transport_header
        52 |     __u16 network_header
        54 |     __u16 mac_header
         0 |   struct sk_buff::(unnamed at ../include/linux/skbuff.h:948:2) headers
         0 |     __u8[0] __pkt_type_offset
     0:0-2 |     __u8 pkt_type
     0:3-3 |     __u8 ignore_df
     0:4-4 |     __u8 dst_pending_confirm
     0:5-6 |     __u8 ip_summed
     0:7-7 |     __u8 ooo_okay
         1 |     __u8[0] __mono_tc_offset
     1:0-1 |     __u8 tstamp_type
     1:2-2 |     __u8 tc_at_ingress
     1:3-3 |     __u8 tc_skip_classify
     1:4-4 |     __u8 remcsum_offload
     1:5-5 |     __u8 csum_complete_sw
     1:6-7 |     __u8 csum_level
     2:0-0 |     __u8 inner_protocol_type
     2:1-1 |     __u8 l4_hash
     2:2-2 |     __u8 sw_hash
     2:3-3 |     __u8 wifi_acked_valid
     2:4-4 |     __u8 wifi_acked
     2:5-5 |     __u8 no_fcs
     2:6-6 |     __u8 encapsulation
     2:7-7 |     __u8 encap_hdr_csum
     3:0-0 |     __u8 csum_valid
     3:1-2 |     __u8 ndisc_nodetype
     3:3-3 |     __u8 redirected
     3:4-4 |     __u8 decrypted
     3:5-5 |     __u8 slow_gro
     3:6-6 |     __u8 csum_not_inet
         4 |     __u16 tc_index
         6 |     u16 alloc_cpu
         8 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |       __wsum csum
         8 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
         8 |         __u16 csum_start
        10 |         __u16 csum_offset
        12 |     __u32 priority
        16 |     int skb_iif
        20 |     __u32 hash
        24 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |       u32 vlan_all
        24 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        24 |         __be16 vlan_proto
        26 |         __u16 vlan_tci
        28 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        28 |       unsigned int napi_id
        28 |       unsigned int sender_cpu
        32 |     __u32 secmark
        36 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        36 |       __u32 mark
        36 |       __u32 reserved_tailroom
        40 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        40 |       __be16 inner_protocol
        40 |       __u8 inner_ipproto
        42 |     __u16 inner_transport_header
        44 |     __u16 inner_network_header
        46 |     __u16 inner_mac_header
        48 |     __be16 protocol
        50 |     __u16 transport_header
        52 |     __u16 network_header
        54 |     __u16 mac_header
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct sk_buff
         0 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:865:2) 
         0 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:866:3) 
         0 |       struct sk_buff * next
         4 |       struct sk_buff * prev
         8 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4) 
         8 |         struct net_device * dev
         8 |         unsigned long dev_scratch
         0 |     struct rb_node rbnode
         0 |       unsigned long __rb_parent_color
         4 |       struct rb_node * rb_right
         8 |       struct rb_node * rb_left
         0 |     struct list_head list
         0 |       struct list_head * next
         4 |       struct list_head * prev
         0 |     struct llist_node ll_node
         0 |       struct llist_node * next
        12 |   struct sock * sk
        16 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:887:2) 
        16 |     ktime_t tstamp
        16 |     u64 skb_mstamp_ns
        24 |   char[48] cb
        72 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:899:2) 
        72 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:900:3) 
        72 |       unsigned long _skb_refdst
        76 |       void (*)(struct sk_buff *) destructor
        72 |     struct list_head tcp_tsorted_anchor
        72 |       struct list_head * next
        76 |       struct list_head * prev
        72 |     unsigned long _sk_redir
        80 |   unsigned int len
        84 |   unsigned int data_len
        88 |   __u16 mac_len
        90 |   __u16 hdr_len
        92 |   __u16 queue_mapping
        94 |   __u8[0] __cloned_offset
    94:0-0 |   __u8 cloned
    94:1-1 |   __u8 nohdr
    94:2-3 |   __u8 fclone
    94:4-4 |   __u8 peeked
    94:5-5 |   __u8 head_frag
    94:6-6 |   __u8 pfmemalloc
    94:7-7 |   __u8 pp_recycle
        95 |   __u8 active_extensions
        96 |   union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        96 |     struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        96 |       __u8[0] __pkt_type_offset
    96:0-2 |       __u8 pkt_type
    96:3-3 |       __u8 ignore_df
    96:4-4 |       __u8 dst_pending_confirm
    96:5-6 |       __u8 ip_summed
    96:7-7 |       __u8 ooo_okay
        97 |       __u8[0] __mono_tc_offset
    97:0-1 |       __u8 tstamp_type
    97:2-2 |       __u8 tc_at_ingress
    97:3-3 |       __u8 tc_skip_classify
    97:4-4 |       __u8 remcsum_offload
    97:5-5 |       __u8 csum_complete_sw
    97:6-7 |       __u8 csum_level
    98:0-0 |       __u8 inner_protocol_type
    98:1-1 |       __u8 l4_hash
    98:2-2 |       __u8 sw_hash
    98:3-3 |       __u8 wifi_acked_valid
    98:4-4 |       __u8 wifi_acked
    98:5-5 |       __u8 no_fcs
    98:6-6 |       __u8 encapsulation
    98:7-7 |       __u8 encap_hdr_csum
    99:0-0 |       __u8 csum_valid
    99:1-2 |       __u8 ndisc_nodetype
    99:3-3 |       __u8 redirected
    99:4-4 |       __u8 decrypted
    99:5-5 |       __u8 slow_gro
    99:6-6 |       __u8 csum_not_inet
       100 |       __u16 tc_index
       102 |       u16 alloc_cpu
       104 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |         __wsum csum
       104 |         struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |           __u16 csum_start
       106 |           __u16 csum_offset
       108 |       __u32 priority
       112 |       int skb_iif
       116 |       __u32 hash
       120 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |         u32 vlan_all
       120 |         struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |           __be16 vlan_proto
       122 |           __u16 vlan_tci
       124 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       124 |         unsigned int napi_id
       124 |         unsigned int sender_cpu
       128 |       __u32 secmark
       132 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       132 |         __u32 mark
       132 |         __u32 reserved_tailroom
       136 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       136 |         __be16 inner_protocol
       136 |         __u8 inner_ipproto
       138 |       __u16 inner_transport_header
       140 |       __u16 inner_network_header
       142 |       __u16 inner_mac_header
       144 |       __be16 protocol
       146 |       __u16 transport_header
       148 |       __u16 network_header
       150 |       __u16 mac_header
        96 |     struct sk_buff::(unnamed at ../include/linux/skbuff.h:948:2) headers
        96 |       __u8[0] __pkt_type_offset
    96:0-2 |       __u8 pkt_type
    96:3-3 |       __u8 ignore_df
    96:4-4 |       __u8 dst_pending_confirm
    96:5-6 |       __u8 ip_summed
    96:7-7 |       __u8 ooo_okay
        97 |       __u8[0] __mono_tc_offset
    97:0-1 |       __u8 tstamp_type
    97:2-2 |       __u8 tc_at_ingress
    97:3-3 |       __u8 tc_skip_classify
    97:4-4 |       __u8 remcsum_offload
    97:5-5 |       __u8 csum_complete_sw
    97:6-7 |       __u8 csum_level
    98:0-0 |       __u8 inner_protocol_type
    98:1-1 |       __u8 l4_hash
    98:2-2 |       __u8 sw_hash
    98:3-3 |       __u8 wifi_acked_valid
    98:4-4 |       __u8 wifi_acked
    98:5-5 |       __u8 no_fcs
    98:6-6 |       __u8 encapsulation
    98:7-7 |       __u8 encap_hdr_csum
    99:0-0 |       __u8 csum_valid
    99:1-2 |       __u8 ndisc_nodetype
    99:3-3 |       __u8 redirected
    99:4-4 |       __u8 decrypted
    99:5-5 |       __u8 slow_gro
    99:6-6 |       __u8 csum_not_inet
       100 |       __u16 tc_index
       102 |       u16 alloc_cpu
       104 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |         __wsum csum
       104 |         struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |           __u16 csum_start
       106 |           __u16 csum_offset
       108 |       __u32 priority
       112 |       int skb_iif
       116 |       __u32 hash
       120 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |         u32 vlan_all
       120 |         struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |           __be16 vlan_proto
       122 |           __u16 vlan_tci
       124 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       124 |         unsigned int napi_id
       124 |         unsigned int sender_cpu
       128 |       __u32 secmark
       132 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       132 |         __u32 mark
       132 |         __u32 reserved_tailroom
       136 |       union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       136 |         __be16 inner_protocol
       136 |         __u8 inner_ipproto
       138 |       __u16 inner_transport_header
       140 |       __u16 inner_network_header
       142 |       __u16 inner_mac_header
       144 |       __be16 protocol
       146 |       __u16 transport_header
       148 |       __u16 network_header
       150 |       __u16 mac_header
       152 |   sk_buff_data_t tail
       156 |   sk_buff_data_t end
       160 |   unsigned char * head
       164 |   unsigned char * data
       168 |   unsigned int truesize
       172 |   struct refcount_struct users
       172 |     atomic_t refs
       172 |       int counter
       176 |   struct skb_ext * extensions
           | [sizeof=184, align=8]

*** Dumping AST Record Layout
         0 | struct sk_buff_fclones
         0 |   struct sk_buff skb1
         0 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:865:2) 
         0 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:866:3) 
         0 |         struct sk_buff * next
         4 |         struct sk_buff * prev
         8 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4) 
         8 |           struct net_device * dev
         8 |           unsigned long dev_scratch
         0 |       struct rb_node rbnode
         0 |         unsigned long __rb_parent_color
         4 |         struct rb_node * rb_right
         8 |         struct rb_node * rb_left
         0 |       struct list_head list
         0 |         struct list_head * next
         4 |         struct list_head * prev
         0 |       struct llist_node ll_node
         0 |         struct llist_node * next
        12 |     struct sock * sk
        16 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:887:2) 
        16 |       ktime_t tstamp
        16 |       u64 skb_mstamp_ns
        24 |     char[48] cb
        72 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:899:2) 
        72 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:900:3) 
        72 |         unsigned long _skb_refdst
        76 |         void (*)(struct sk_buff *) destructor
        72 |       struct list_head tcp_tsorted_anchor
        72 |         struct list_head * next
        76 |         struct list_head * prev
        72 |       unsigned long _sk_redir
        80 |     unsigned int len
        84 |     unsigned int data_len
        88 |     __u16 mac_len
        90 |     __u16 hdr_len
        92 |     __u16 queue_mapping
        94 |     __u8[0] __cloned_offset
    94:0-0 |     __u8 cloned
    94:1-1 |     __u8 nohdr
    94:2-3 |     __u8 fclone
    94:4-4 |     __u8 peeked
    94:5-5 |     __u8 head_frag
    94:6-6 |     __u8 pfmemalloc
    94:7-7 |     __u8 pp_recycle
        95 |     __u8 active_extensions
        96 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        96 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
        96 |         __u8[0] __pkt_type_offset
    96:0-2 |         __u8 pkt_type
    96:3-3 |         __u8 ignore_df
    96:4-4 |         __u8 dst_pending_confirm
    96:5-6 |         __u8 ip_summed
    96:7-7 |         __u8 ooo_okay
        97 |         __u8[0] __mono_tc_offset
    97:0-1 |         __u8 tstamp_type
    97:2-2 |         __u8 tc_at_ingress
    97:3-3 |         __u8 tc_skip_classify
    97:4-4 |         __u8 remcsum_offload
    97:5-5 |         __u8 csum_complete_sw
    97:6-7 |         __u8 csum_level
    98:0-0 |         __u8 inner_protocol_type
    98:1-1 |         __u8 l4_hash
    98:2-2 |         __u8 sw_hash
    98:3-3 |         __u8 wifi_acked_valid
    98:4-4 |         __u8 wifi_acked
    98:5-5 |         __u8 no_fcs
    98:6-6 |         __u8 encapsulation
    98:7-7 |         __u8 encap_hdr_csum
    99:0-0 |         __u8 csum_valid
    99:1-2 |         __u8 ndisc_nodetype
    99:3-3 |         __u8 redirected
    99:4-4 |         __u8 decrypted
    99:5-5 |         __u8 slow_gro
    99:6-6 |         __u8 csum_not_inet
       100 |         __u16 tc_index
       102 |         u16 alloc_cpu
       104 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |           __wsum csum
       104 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |             __u16 csum_start
       106 |             __u16 csum_offset
       108 |         __u32 priority
       112 |         int skb_iif
       116 |         __u32 hash
       120 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |           u32 vlan_all
       120 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |             __be16 vlan_proto
       122 |             __u16 vlan_tci
       124 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       124 |           unsigned int napi_id
       124 |           unsigned int sender_cpu
       128 |         __u32 secmark
       132 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       132 |           __u32 mark
       132 |           __u32 reserved_tailroom
       136 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       136 |           __be16 inner_protocol
       136 |           __u8 inner_ipproto
       138 |         __u16 inner_transport_header
       140 |         __u16 inner_network_header
       142 |         __u16 inner_mac_header
       144 |         __be16 protocol
       146 |         __u16 transport_header
       148 |         __u16 network_header
       150 |         __u16 mac_header
        96 |       struct sk_buff::(unnamed at ../include/linux/skbuff.h:948:2) headers
        96 |         __u8[0] __pkt_type_offset
    96:0-2 |         __u8 pkt_type
    96:3-3 |         __u8 ignore_df
    96:4-4 |         __u8 dst_pending_confirm
    96:5-6 |         __u8 ip_summed
    96:7-7 |         __u8 ooo_okay
        97 |         __u8[0] __mono_tc_offset
    97:0-1 |         __u8 tstamp_type
    97:2-2 |         __u8 tc_at_ingress
    97:3-3 |         __u8 tc_skip_classify
    97:4-4 |         __u8 remcsum_offload
    97:5-5 |         __u8 csum_complete_sw
    97:6-7 |         __u8 csum_level
    98:0-0 |         __u8 inner_protocol_type
    98:1-1 |         __u8 l4_hash
    98:2-2 |         __u8 sw_hash
    98:3-3 |         __u8 wifi_acked_valid
    98:4-4 |         __u8 wifi_acked
    98:5-5 |         __u8 no_fcs
    98:6-6 |         __u8 encapsulation
    98:7-7 |         __u8 encap_hdr_csum
    99:0-0 |         __u8 csum_valid
    99:1-2 |         __u8 ndisc_nodetype
    99:3-3 |         __u8 redirected
    99:4-4 |         __u8 decrypted
    99:5-5 |         __u8 slow_gro
    99:6-6 |         __u8 csum_not_inet
       100 |         __u16 tc_index
       102 |         u16 alloc_cpu
       104 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |           __wsum csum
       104 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       104 |             __u16 csum_start
       106 |             __u16 csum_offset
       108 |         __u32 priority
       112 |         int skb_iif
       116 |         __u32 hash
       120 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |           u32 vlan_all
       120 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       120 |             __be16 vlan_proto
       122 |             __u16 vlan_tci
       124 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       124 |           unsigned int napi_id
       124 |           unsigned int sender_cpu
       128 |         __u32 secmark
       132 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       132 |           __u32 mark
       132 |           __u32 reserved_tailroom
       136 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       136 |           __be16 inner_protocol
       136 |           __u8 inner_ipproto
       138 |         __u16 inner_transport_header
       140 |         __u16 inner_network_header
       142 |         __u16 inner_mac_header
       144 |         __be16 protocol
       146 |         __u16 transport_header
       148 |         __u16 network_header
       150 |         __u16 mac_header
       152 |     sk_buff_data_t tail
       156 |     sk_buff_data_t end
       160 |     unsigned char * head
       164 |     unsigned char * data
       168 |     unsigned int truesize
       172 |     struct refcount_struct users
       172 |       atomic_t refs
       172 |         int counter
       176 |     struct skb_ext * extensions
       184 |   struct sk_buff skb2
       184 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:865:2) 
       184 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:866:3) 
       184 |         struct sk_buff * next
       188 |         struct sk_buff * prev
       192 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:871:4) 
       192 |           struct net_device * dev
       192 |           unsigned long dev_scratch
       184 |       struct rb_node rbnode
       184 |         unsigned long __rb_parent_color
       188 |         struct rb_node * rb_right
       192 |         struct rb_node * rb_left
       184 |       struct list_head list
       184 |         struct list_head * next
       188 |         struct list_head * prev
       184 |       struct llist_node ll_node
       184 |         struct llist_node * next
       196 |     struct sock * sk
       200 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:887:2) 
       200 |       ktime_t tstamp
       200 |       u64 skb_mstamp_ns
       208 |     char[48] cb
       256 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:899:2) 
       256 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:900:3) 
       256 |         unsigned long _skb_refdst
       260 |         void (*)(struct sk_buff *) destructor
       256 |       struct list_head tcp_tsorted_anchor
       256 |         struct list_head * next
       260 |         struct list_head * prev
       256 |       unsigned long _sk_redir
       264 |     unsigned int len
       268 |     unsigned int data_len
       272 |     __u16 mac_len
       274 |     __u16 hdr_len
       276 |     __u16 queue_mapping
       278 |     __u8[0] __cloned_offset
   278:0-0 |     __u8 cloned
   278:1-1 |     __u8 nohdr
   278:2-3 |     __u8 fclone
   278:4-4 |     __u8 peeked
   278:5-5 |     __u8 head_frag
   278:6-6 |     __u8 pfmemalloc
   278:7-7 |     __u8 pp_recycle
       279 |     __u8 active_extensions
       280 |     union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       280 |       struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       280 |         __u8[0] __pkt_type_offset
   280:0-2 |         __u8 pkt_type
   280:3-3 |         __u8 ignore_df
   280:4-4 |         __u8 dst_pending_confirm
   280:5-6 |         __u8 ip_summed
   280:7-7 |         __u8 ooo_okay
       281 |         __u8[0] __mono_tc_offset
   281:0-1 |         __u8 tstamp_type
   281:2-2 |         __u8 tc_at_ingress
   281:3-3 |         __u8 tc_skip_classify
   281:4-4 |         __u8 remcsum_offload
   281:5-5 |         __u8 csum_complete_sw
   281:6-7 |         __u8 csum_level
   282:0-0 |         __u8 inner_protocol_type
   282:1-1 |         __u8 l4_hash
   282:2-2 |         __u8 sw_hash
   282:3-3 |         __u8 wifi_acked_valid
   282:4-4 |         __u8 wifi_acked
   282:5-5 |         __u8 no_fcs
   282:6-6 |         __u8 encapsulation
   282:7-7 |         __u8 encap_hdr_csum
   283:0-0 |         __u8 csum_valid
   283:1-2 |         __u8 ndisc_nodetype
   283:3-3 |         __u8 redirected
   283:4-4 |         __u8 decrypted
   283:5-5 |         __u8 slow_gro
   283:6-6 |         __u8 csum_not_inet
       284 |         __u16 tc_index
       286 |         u16 alloc_cpu
       288 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       288 |           __wsum csum
       288 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       288 |             __u16 csum_start
       290 |             __u16 csum_offset
       292 |         __u32 priority
       296 |         int skb_iif
       300 |         __u32 hash
       304 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       304 |           u32 vlan_all
       304 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       304 |             __be16 vlan_proto
       306 |             __u16 vlan_tci
       308 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       308 |           unsigned int napi_id
       308 |           unsigned int sender_cpu
       312 |         __u32 secmark
       316 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       316 |           __u32 mark
       316 |           __u32 reserved_tailroom
       320 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       320 |           __be16 inner_protocol
       320 |           __u8 inner_ipproto
       322 |         __u16 inner_transport_header
       324 |         __u16 inner_network_header
       326 |         __u16 inner_mac_header
       328 |         __be16 protocol
       330 |         __u16 transport_header
       332 |         __u16 network_header
       334 |         __u16 mac_header
       280 |       struct sk_buff::(unnamed at ../include/linux/skbuff.h:948:2) headers
       280 |         __u8[0] __pkt_type_offset
   280:0-2 |         __u8 pkt_type
   280:3-3 |         __u8 ignore_df
   280:4-4 |         __u8 dst_pending_confirm
   280:5-6 |         __u8 ip_summed
   280:7-7 |         __u8 ooo_okay
       281 |         __u8[0] __mono_tc_offset
   281:0-1 |         __u8 tstamp_type
   281:2-2 |         __u8 tc_at_ingress
   281:3-3 |         __u8 tc_skip_classify
   281:4-4 |         __u8 remcsum_offload
   281:5-5 |         __u8 csum_complete_sw
   281:6-7 |         __u8 csum_level
   282:0-0 |         __u8 inner_protocol_type
   282:1-1 |         __u8 l4_hash
   282:2-2 |         __u8 sw_hash
   282:3-3 |         __u8 wifi_acked_valid
   282:4-4 |         __u8 wifi_acked
   282:5-5 |         __u8 no_fcs
   282:6-6 |         __u8 encapsulation
   282:7-7 |         __u8 encap_hdr_csum
   283:0-0 |         __u8 csum_valid
   283:1-2 |         __u8 ndisc_nodetype
   283:3-3 |         __u8 redirected
   283:4-4 |         __u8 decrypted
   283:5-5 |         __u8 slow_gro
   283:6-6 |         __u8 csum_not_inet
       284 |         __u16 tc_index
       286 |         u16 alloc_cpu
       288 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       288 |           __wsum csum
       288 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       288 |             __u16 csum_start
       290 |             __u16 csum_offset
       292 |         __u32 priority
       296 |         int skb_iif
       300 |         __u32 hash
       304 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       304 |           u32 vlan_all
       304 |           struct sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       304 |             __be16 vlan_proto
       306 |             __u16 vlan_tci
       308 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       308 |           unsigned int napi_id
       308 |           unsigned int sender_cpu
       312 |         __u32 secmark
       316 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       316 |           __u32 mark
       316 |           __u32 reserved_tailroom
       320 |         union sk_buff::(anonymous at ../include/linux/skbuff.h:948:2) 
       320 |           __be16 inner_protocol
       320 |           __u8 inner_ipproto
       322 |         __u16 inner_transport_header
       324 |         __u16 inner_network_header
       326 |         __u16 inner_mac_header
       328 |         __be16 protocol
       330 |         __u16 transport_header
       332 |         __u16 network_header
       334 |         __u16 mac_header
       336 |     sk_buff_data_t tail
       340 |     sk_buff_data_t end
       344 |     unsigned char * head
       348 |     unsigned char * data
       352 |     unsigned int truesize
       356 |     struct refcount_struct users
       356 |       atomic_t refs
       356 |         int counter
       360 |     struct skb_ext * extensions
       368 |   struct refcount_struct fclone_ref
       368 |     atomic_t refs
       368 |       int counter
           | [sizeof=376, align=8]

*** Dumping AST Record Layout
         0 | struct flow_dissector
         0 |   unsigned long long used_keys
         8 |   unsigned short[33] offset
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_tags
         0 |   u32 flow_label
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_vlan
         0 |   union flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:69:2) 
         0 |     struct flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:70:3) 
    0:0-11 |       u16 vlan_id
     1:4-4 |       u16 vlan_dei
     1:5-7 |       u16 vlan_priority
         0 |     __be16 vlan_tci
         2 |   __be16 vlan_tpid
         4 |   __be16 vlan_eth_type
         6 |   u16 padding
           | [sizeof=8, align=2]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_keyid
         0 |   __be32 keyid
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_icmp
         0 |   struct flow_dissector_key_icmp::(anonymous at ../include/net/flow_dissector.h:219:2) 
         0 |     u8 type
         1 |     u8 code
         2 |   u16 id
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct flow_dissector_key_addrs
         0 |   union flow_dissector_key_addrs::(anonymous at ../include/net/flow_dissector.h:157:2) 
         0 |     struct flow_dissector_key_ipv4_addrs v4addrs
         0 |       __be32 src
         4 |       __be32 dst
         0 |     struct flow_dissector_key_ipv6_addrs v6addrs
         0 |       struct in6_addr src
         0 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |           __u8[16] u6_addr8
         0 |           __be16[8] u6_addr16
         0 |           __be32[4] u6_addr32
        16 |       struct in6_addr dst
        16 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |           __u8[16] u6_addr8
        16 |           __be16[8] u6_addr16
        16 |           __be32[4] u6_addr32
         0 |     struct flow_dissector_key_tipc tipckey
         0 |       __be32 key
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct flow_keys
         0 |   struct flow_dissector_key_control control
         0 |     u16 thoff
         2 |     u16 addr_type
         4 |     u32 flags
         8 |   struct flow_dissector_key_basic basic
         8 |     __be16 n_proto
        10 |     u8 ip_proto
        11 |     u8 padding
        12 |   struct flow_dissector_key_tags tags
        12 |     u32 flow_label
        16 |   struct flow_dissector_key_vlan vlan
        16 |     union flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:69:2) 
        16 |       struct flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:70:3) 
   16:0-11 |         u16 vlan_id
    17:4-4 |         u16 vlan_dei
    17:5-7 |         u16 vlan_priority
        16 |       __be16 vlan_tci
        18 |     __be16 vlan_tpid
        20 |     __be16 vlan_eth_type
        22 |     u16 padding
        24 |   struct flow_dissector_key_vlan cvlan
        24 |     union flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:69:2) 
        24 |       struct flow_dissector_key_vlan::(anonymous at ../include/net/flow_dissector.h:70:3) 
   24:0-11 |         u16 vlan_id
    25:4-4 |         u16 vlan_dei
    25:5-7 |         u16 vlan_priority
        24 |       __be16 vlan_tci
        26 |     __be16 vlan_tpid
        28 |     __be16 vlan_eth_type
        30 |     u16 padding
        32 |   struct flow_dissector_key_keyid keyid
        32 |     __be32 keyid
        36 |   struct flow_dissector_key_ports ports
        36 |     union flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:187:2) 
        36 |       __be32 ports
        36 |       struct flow_dissector_key_ports::(anonymous at ../include/net/flow_dissector.h:189:3) 
        36 |         __be16 src
        38 |         __be16 dst
        40 |   struct flow_dissector_key_icmp icmp
        40 |     struct flow_dissector_key_icmp::(anonymous at ../include/net/flow_dissector.h:219:2) 
        40 |       u8 type
        41 |       u8 code
        42 |     u16 id
        44 |   struct flow_dissector_key_addrs addrs
        44 |     union flow_dissector_key_addrs::(anonymous at ../include/net/flow_dissector.h:157:2) 
        44 |       struct flow_dissector_key_ipv4_addrs v4addrs
        44 |         __be32 src
        48 |         __be32 dst
        44 |       struct flow_dissector_key_ipv6_addrs v6addrs
        44 |         struct in6_addr src
        44 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        44 |             __u8[16] u6_addr8
        44 |             __be16[8] u6_addr16
        44 |             __be32[4] u6_addr32
        60 |         struct in6_addr dst
        60 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        60 |             __u8[16] u6_addr8
        60 |             __be16[8] u6_addr16
        60 |             __be32[4] u6_addr32
        44 |       struct flow_dissector_key_tipc tipckey
        44 |         __be32 key
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct flow_keys_basic
         0 |   struct flow_dissector_key_control control
         0 |     u16 thoff
         2 |     u16 addr_type
         4 |     u32 flags
         8 |   struct flow_dissector_key_basic basic
         8 |     __be16 n_proto
        10 |     u8 ip_proto
        11 |     u8 padding
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union msghdr::(anonymous at ../include/linux/socket.h:69:2)
         0 |   void * msg_control
         0 |   void * msg_control_user
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct msghdr
         0 |   void * msg_name
         4 |   int msg_namelen
         8 |   int msg_inq
        16 |   struct iov_iter msg_iter
        16 |     u8 iter_type
        17 |     bool nofault
        18 |     bool data_source
        20 |     size_t iov_offset
        24 |     union iov_iter::(anonymous at ../include/linux/uio.h:56:2) 
        24 |       struct iovec __ubuf_iovec
        24 |         void * iov_base
        28 |         __kernel_size_t iov_len
        24 |       struct iov_iter::(anonymous at ../include/linux/uio.h:63:3) 
        24 |         union iov_iter::(anonymous at ../include/linux/uio.h:64:4) 
        24 |           const struct iovec * __iov
        24 |           const struct kvec * kvec
        24 |           const struct bio_vec * bvec
        24 |           struct xarray * xarray
        24 |           void * ubuf
        28 |         size_t count
        32 |     union iov_iter::(anonymous at ../include/linux/uio.h:75:2) 
        32 |       unsigned long nr_segs
        32 |       loff_t xarray_start
        40 |   union msghdr::(anonymous at ../include/linux/socket.h:69:2) 
        40 |     void * msg_control
        40 |     void * msg_control_user
    44:0-0 |   bool msg_control_is_user
    44:1-1 |   bool msg_get_inq
        48 |   unsigned int msg_flags
        52 |   __kernel_size_t msg_controllen
        56 |   struct kiocb * msg_iocb
        60 |   struct ubuf_info * msg_ubuf
        64 |   int (*)(struct sk_buff *, struct iov_iter *, size_t) sg_from_iter
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct sk_buff_head
         0 |   union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
         0 |     struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
         0 |       struct sk_buff * next
         4 |       struct sk_buff * prev
         0 |     struct sk_buff_list list
         0 |       struct sk_buff * next
         4 |       struct sk_buff * prev
         8 |   __u32 qlen
        12 |   struct spinlock lock
        12 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        12 |       struct raw_spinlock rlock
        12 |         arch_spinlock_t raw_lock
        12 |           volatile unsigned int slock
        16 |         unsigned int magic
        20 |         unsigned int owner_cpu
        24 |         void * owner
        28 |         struct lockdep_map dep_map
        28 |           struct lock_class_key * key
        32 |           struct lock_class *[2] class_cache
        40 |           const char * name
        44 |           u8 wait_type_outer
        45 |           u8 wait_type_inner
        46 |           u8 lock_type
        48 |           int cpu
        52 |           unsigned long ip
        12 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        12 |         u8[16] __padding
        28 |         struct lockdep_map dep_map
        28 |           struct lock_class_key * key
        32 |           struct lock_class *[2] class_cache
        40 |           const char * name
        44 |           u8 wait_type_outer
        45 |           u8 wait_type_inner
        46 |           u8 lock_type
        48 |           int cpu
        52 |           unsigned long ip
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct xsk_tx_metadata_compl
         0 |   __u64 * tx_timestamp
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union skb_shared_info::(anonymous at ../include/linux/skbuff.h:599:2)
         0 |   struct skb_shared_hwtstamps hwtstamps
         0 |     union skb_shared_hwtstamps::(anonymous at ../include/linux/skbuff.h:463:2) 
         0 |       ktime_t hwtstamp
         0 |       void * netdev_data
         0 |   struct xsk_tx_metadata_compl xsk_meta
         0 |     __u64 * tx_timestamp
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct skb_shared_info
         0 |   __u8 flags
         1 |   __u8 meta_len
         2 |   __u8 nr_frags
         3 |   __u8 tx_flags
         4 |   unsigned short gso_size
         6 |   unsigned short gso_segs
         8 |   struct sk_buff * frag_list
        16 |   union skb_shared_info::(anonymous at ../include/linux/skbuff.h:599:2) 
        16 |     struct skb_shared_hwtstamps hwtstamps
        16 |       union skb_shared_hwtstamps::(anonymous at ../include/linux/skbuff.h:463:2) 
        16 |         ktime_t hwtstamp
        16 |         void * netdev_data
        16 |     struct xsk_tx_metadata_compl xsk_meta
        16 |       __u64 * tx_timestamp
        24 |   unsigned int gso_type
        28 |   u32 tskey
        32 |   atomic_t dataref
        32 |     int counter
        36 |   unsigned int xdp_frags_size
        40 |   void * destructor_arg
        44 |   skb_frag_t[17] frags
           | [sizeof=248, align=8]

*** Dumping AST Record Layout
         0 | struct skb_ext
         0 |   struct refcount_struct refcnt
         0 |     atomic_t refs
         0 |       int counter
         4 |   u8[3] offset
         7 |   u8 chunks
         8 |   char[] data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union (anonymous at ../include/linux/sockptr.h:15:2)
         0 |   void * kernel
         0 |   void * user
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | sockptr_t
         0 |   union sockptr_t::(anonymous at ../include/linux/sockptr.h:15:2) 
         0 |     void * kernel
         0 |     void * user
     4:0-0 |   bool is_kernel
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct fd
         0 |   struct file * file
         4 |   unsigned int flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_lpm_trie_key_hdr
         0 |   __u32 prefixlen
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_lpm_trie_key_u8::(anonymous at ../include/uapi/linux/bpf.h:101:2)
         0 |   struct bpf_lpm_trie_key_hdr hdr
         0 |     __u32 prefixlen
         0 |   __u32 prefixlen
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_iter_link_info::(unnamed at ../include/uapi/linux/bpf.h:122:2)
         0 |   __u32 map_fd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1588:3)
         0 |   __u32 target_fd
         0 |   __u32 target_ifindex
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1628:3)
         0 |   __u32 start_id
         0 |   __u32 prog_id
         0 |   __u32 map_id
         0 |   __u32 btf_id
         0 |   __u32 link_id
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1646:3)
         0 |   __u32 target_fd
         0 |   __u32 target_ifindex
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1756:5)
         0 |   __u32 relative_fd
         0 |   __u32 relative_id
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1772:5)
         0 |   __u32 relative_fd
         0 |   __u32 relative_id
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1710:3)
         0 |   __u32 prog_fd
         0 |   __u32 map_fd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_attr::(anonymous at ../include/uapi/linux/bpf.h:1459:2)
         0 |   __u32 map_type
         4 |   __u32 key_size
         8 |   __u32 value_size
        12 |   __u32 max_entries
        16 |   __u32 map_flags
        20 |   __u32 inner_map_fd
        24 |   __u32 numa_node
        28 |   char[16] map_name
        44 |   __u32 map_ifindex
        48 |   __u32 btf_fd
        52 |   __u32 btf_key_type_id
        56 |   __u32 btf_value_type_id
        60 |   __u32 btf_vmlinux_value_type_id
        64 |   __u64 map_extra
        72 |   __s32 value_type_btf_obj_fd
        76 |   __s32 map_token_fd
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_sock_tuple::(unnamed at ../include/uapi/linux/bpf.h:6399:3)
         0 |   __be32 saddr
         4 |   __be32 daddr
         8 |   __be16 sport
        10 |   __be16 dport
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_sock_tuple::(unnamed at ../include/uapi/linux/bpf.h:6405:3)
         0 |   __be32[4] saddr
        16 |   __be32[4] daddr
        32 |   __be16 sport
        34 |   __be16 dport
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | union bpf_sock_tuple::(anonymous at ../include/uapi/linux/bpf.h:6398:2)
         0 |   struct bpf_sock_tuple::(unnamed at ../include/uapi/linux/bpf.h:6399:3) ipv4
         0 |     __be32 saddr
         4 |     __be32 daddr
         8 |     __be16 sport
        10 |     __be16 dport
         0 |   struct bpf_sock_tuple::(unnamed at ../include/uapi/linux/bpf.h:6405:3) ipv6
         0 |     __be32[4] saddr
        16 |     __be32[4] daddr
        32 |     __be16 sport
        34 |     __be16 dport
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | union sk_msg_md::(anonymous at ../include/uapi/linux/bpf.h:6495:2)
         0 |   void * data
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union sk_reuseport_md::(anonymous at ../include/uapi/linux/bpf.h:6515:2)
         0 |   void * data
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_link_info::(unnamed at ../include/uapi/linux/bpf.h:6648:5)
         0 |   __u32 map_id
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_link_info::(unnamed at ../include/uapi/linux/bpf.h:6653:5)
         0 |   __u64 cgroup_id
         8 |   __u32 order
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_link_info::(unnamed at ../include/uapi/linux/bpf.h:6700:5)
         0 |   __u64 file_name
         8 |   __u32 name_len
        12 |   __u32 offset
        16 |   __u64 cookie
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_link_info::(unnamed at ../include/uapi/linux/bpf.h:6626:3)
         0 |   __u64 tp_name
         8 |   __u32 tp_name_len
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_fib_lookup::(anonymous at ../include/uapi/linux/bpf.h:7208:3)
         0 |   __be16 h_vlan_proto
         2 |   __be16 h_vlan_TCI
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct bpf_fib_lookup::(anonymous at ../include/uapi/linux/bpf.h:7222:3)
         0 |   __u32 mark
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_flow_keys::(anonymous at ../include/uapi/linux/bpf.h:7283:3)
         0 |   __be32 ipv4_src
         4 |   __be32 ipv4_dst
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union bpf_sockopt::(anonymous at ../include/uapi/linux/bpf.h:7357:2)
         0 |   struct bpf_sock * sk
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union bpf_sk_lookup::(anonymous at ../include/uapi/linux/bpf.h:7375:3)
         0 |   struct bpf_sock * sk
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union bpf_sk_lookup::(anonymous at ../include/uapi/linux/bpf.h:7374:2)
         0 |   union bpf_sk_lookup::(anonymous at ../include/uapi/linux/bpf.h:7375:3) 
         0 |     struct bpf_sock * sk
    0:0-63 |     __u64 
         0 |   __u64 cookie
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct btf_id_set8::(unnamed at ../include/linux/btf_ids.h:19:2)
         0 |   u32 id
         4 |   u32 flags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct btf_struct_meta
         0 |   u32 btf_id
         4 |   struct btf_record * record
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union btf_type::(anonymous at ../include/uapi/linux/btf.h:49:2)
         0 |   __u32 size
         0 |   __u32 type
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct btf_type
         0 |   __u32 name_off
         4 |   __u32 info
         8 |   union btf_type::(anonymous at ../include/uapi/linux/btf.h:49:2) 
         8 |     __u32 size
         8 |     __u32 type
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct btf_member
         0 |   __u32 name_off
         4 |   __u32 type
         8 |   __u32 offset
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct btf_field_desc
         0 |   int t_off_cnt
         4 |   int[2] t_offs
        12 |   int m_sz
        16 |   int m_off_cnt
        20 |   int[1] m_offs
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | class_cpus_read_lock_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct seq_file
         0 |   char * buf
         4 |   size_t size
         8 |   size_t from
        12 |   size_t count
        16 |   size_t pad_until
        24 |   loff_t index
        32 |   loff_t read_pos
        40 |   struct mutex lock
        40 |     atomic_t owner
        40 |       int counter
        44 |     struct raw_spinlock wait_lock
        44 |       arch_spinlock_t raw_lock
        44 |         volatile unsigned int slock
        48 |       unsigned int magic
        52 |       unsigned int owner_cpu
        56 |       void * owner
        60 |       struct lockdep_map dep_map
        60 |         struct lock_class_key * key
        64 |         struct lock_class *[2] class_cache
        72 |         const char * name
        76 |         u8 wait_type_outer
        77 |         u8 wait_type_inner
        78 |         u8 lock_type
        80 |         int cpu
        84 |         unsigned long ip
        88 |     struct list_head wait_list
        88 |       struct list_head * next
        92 |       struct list_head * prev
        96 |     void * magic
       100 |     struct lockdep_map dep_map
       100 |       struct lock_class_key * key
       104 |       struct lock_class *[2] class_cache
       112 |       const char * name
       116 |       u8 wait_type_outer
       117 |       u8 wait_type_inner
       118 |       u8 lock_type
       120 |       int cpu
       124 |       unsigned long ip
       128 |   const struct seq_operations * op
       132 |   int poll_event
       136 |   const struct file * file
       140 |   void * private
           | [sizeof=144, align=8]

*** Dumping AST Record Layout
         0 | struct nsproxy
         0 |   struct refcount_struct count
         0 |     atomic_t refs
         0 |       int counter
         4 |   struct uts_namespace * uts_ns
         8 |   struct ipc_namespace * ipc_ns
        12 |   struct mnt_namespace * mnt_ns
        16 |   struct pid_namespace * pid_ns_for_children
        20 |   struct net * net_ns
        24 |   struct time_namespace * time_ns
        28 |   struct time_namespace * time_ns_for_children
        32 |   struct cgroup_namespace * cgroup_ns
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct uid_gid_extent
         0 |   u32 first
         4 |   u32 lower_first
         8 |   u32 count
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3)
         0 |   struct uid_gid_extent * forward
         4 |   struct uid_gid_extent * reverse
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union uid_gid_map::(anonymous at ../include/linux/user_namespace.h:25:2)
         0 |   struct uid_gid_extent[5] extent
         0 |   struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3) 
         0 |     struct uid_gid_extent * forward
         4 |     struct uid_gid_extent * reverse
           | [sizeof=60, align=4]

*** Dumping AST Record Layout
         0 | struct uid_gid_map
         0 |   u32 nr_extents
         4 |   union uid_gid_map::(anonymous at ../include/linux/user_namespace.h:25:2) 
         4 |     struct uid_gid_extent[5] extent
         4 |     struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3) 
         4 |       struct uid_gid_extent * forward
         8 |       struct uid_gid_extent * reverse
           | [sizeof=64, align=4]

*** Dumping AST Record Layout
         0 | class_disable_irq_t
         0 |   int * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union tasklet_struct::(anonymous at ../include/linux/interrupt.h:655:2)
         0 |   void (*)(unsigned long) func
         0 |   void (*)(struct tasklet_struct *) callback
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct tasklet_struct
         0 |   struct tasklet_struct * next
         4 |   unsigned long state
         8 |   atomic_t count
         8 |     int counter
        12 |   bool use_callback
        16 |   union tasklet_struct::(anonymous at ../include/linux/interrupt.h:655:2) 
        16 |     void (*)(unsigned long) func
        16 |     void (*)(struct tasklet_struct *) callback
        20 |   unsigned long data
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct kernel_stat
         0 |   unsigned long irqs_sum
         4 |   unsigned int[10] softirqs
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct kernel_cpustat
         0 |   u64[10] cpustat
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct u64_stats_sync
         0 |   struct seqcount seq
         0 |     unsigned int sequence
         4 |     struct lockdep_map dep_map
         4 |       struct lock_class_key * key
         8 |       struct lock_class *[2] class_cache
        16 |       const char * name
        20 |       u8 wait_type_outer
        21 |       u8 wait_type_inner
        22 |       u8 lock_type
        24 |       int cpu
        28 |       unsigned long ip
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct kthread_work
         0 |   struct list_head node
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   kthread_work_func_t func
        12 |   struct kthread_worker * worker
        16 |   int canceling
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ns_common
         0 |   struct dentry * stashed
         4 |   const struct proc_ns_operations * ops
         8 |   unsigned int inum
        12 |   struct refcount_struct count
        12 |     atomic_t refs
        12 |       int counter
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct cgroup_namespace
         0 |   struct ns_common ns
         0 |     struct dentry * stashed
         4 |     const struct proc_ns_operations * ops
         8 |     unsigned int inum
        12 |     struct refcount_struct count
        12 |       atomic_t refs
        12 |         int counter
        16 |   struct user_namespace * user_ns
        20 |   struct ucounts * ucounts
        24 |   struct css_set * root_cset
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct page_counter
         0 |   atomic_t usage
         0 |     int counter
         4 |   unsigned long emin
         8 |   atomic_t min_usage
         8 |     int counter
        12 |   atomic_t children_min_usage
        12 |     int counter
        16 |   unsigned long elow
        20 |   atomic_t low_usage
        20 |     int counter
        24 |   atomic_t children_low_usage
        24 |     int counter
        28 |   unsigned long watermark
        32 |   unsigned long failcnt
        36 |   unsigned long min
        40 |   unsigned long low
        44 |   unsigned long high
        48 |   unsigned long max
        52 |   struct page_counter * parent
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct fprop_global
         0 |   struct percpu_counter events
         0 |     s64 count
         8 |   unsigned int period
        12 |   struct seqcount sequence
        12 |     unsigned int sequence
        16 |     struct lockdep_map dep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct fprop_local_percpu
         0 |   struct percpu_counter events
         0 |     s64 count
         8 |   unsigned int period
        12 |   struct raw_spinlock lock
        12 |     arch_spinlock_t raw_lock
        12 |       volatile unsigned int slock
        16 |     unsigned int magic
        20 |     unsigned int owner_cpu
        24 |     void * owner
        28 |     struct lockdep_map dep_map
        28 |       struct lock_class_key * key
        32 |       struct lock_class *[2] class_cache
        40 |       const char * name
        44 |       u8 wait_type_outer
        45 |       u8 wait_type_inner
        46 |       u8 lock_type
        48 |       int cpu
        52 |       unsigned long ip
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct folio_batch
         0 |   unsigned char nr
         1 |   unsigned char i
         2 |   bool percpu_pvec_drained
         4 |   struct folio *[31] folios
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct btf_field_kptr
         0 |   struct btf * btf
         4 |   struct module * module
         8 |   btf_dtor_kfunc_t dtor
        12 |   u32 btf_id
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct btf_field_graph_root
         0 |   struct btf * btf
         4 |   u32 value_btf_id
         8 |   u32 node_offset
        12 |   struct btf_record * value_rec
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union btf_field::(anonymous at ../include/linux/bpf.h:231:2)
         0 |   struct btf_field_kptr kptr
         0 |     struct btf * btf
         4 |     struct module * module
         8 |     btf_dtor_kfunc_t dtor
        12 |     u32 btf_id
         0 |   struct btf_field_graph_root graph_root
         0 |     struct btf * btf
         4 |     u32 value_btf_id
         8 |     u32 node_offset
        12 |     struct btf_record * value_rec
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct btf_field
         0 |   u32 offset
         4 |   u32 size
         8 |   enum btf_field_type type
        12 |   union btf_field::(anonymous at ../include/linux/bpf.h:231:2) 
        12 |     struct btf_field_kptr kptr
        12 |       struct btf * btf
        16 |       struct module * module
        20 |       btf_dtor_kfunc_t dtor
        24 |       u32 btf_id
        12 |     struct btf_field_graph_root graph_root
        12 |       struct btf * btf
        16 |       u32 value_btf_id
        20 |       u32 node_offset
        24 |       struct btf_record * value_rec
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct btf_record
         0 |   u32 cnt
         4 |   u32 field_mask
         8 |   int spin_lock_off
        12 |   int timer_off
        16 |   int wq_off
        20 |   int refcount_off
        24 |   struct btf_field[] fields
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | union bpf_map::(anonymous at ../include/linux/bpf.h:286:2)
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_map::(unnamed at ../include/linux/bpf.h:296:2)
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   enum bpf_prog_type type
        48 |   bool jited
        49 |   bool xdp_has_frags
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_map
         0 |   const struct bpf_map_ops * ops
         4 |   struct bpf_map * inner_map_meta
         8 |   enum bpf_map_type map_type
        12 |   u32 key_size
        16 |   u32 value_size
        20 |   u32 max_entries
        24 |   u64 map_extra
        32 |   u32 map_flags
        36 |   u32 id
        40 |   struct btf_record * record
        44 |   int numa_node
        48 |   u32 btf_key_type_id
        52 |   u32 btf_value_type_id
        56 |   u32 btf_vmlinux_value_type_id
        60 |   struct btf * btf
        64 |   char[16] name
        80 |   struct mutex freeze_mutex
        80 |     atomic_t owner
        80 |       int counter
        84 |     struct raw_spinlock wait_lock
        84 |       arch_spinlock_t raw_lock
        84 |         volatile unsigned int slock
        88 |       unsigned int magic
        92 |       unsigned int owner_cpu
        96 |       void * owner
       100 |       struct lockdep_map dep_map
       100 |         struct lock_class_key * key
       104 |         struct lock_class *[2] class_cache
       112 |         const char * name
       116 |         u8 wait_type_outer
       117 |         u8 wait_type_inner
       118 |         u8 lock_type
       120 |         int cpu
       124 |         unsigned long ip
       128 |     struct list_head wait_list
       128 |       struct list_head * next
       132 |       struct list_head * prev
       136 |     void * magic
       140 |     struct lockdep_map dep_map
       140 |       struct lock_class_key * key
       144 |       struct lock_class *[2] class_cache
       152 |       const char * name
       156 |       u8 wait_type_outer
       157 |       u8 wait_type_inner
       158 |       u8 lock_type
       160 |       int cpu
       164 |       unsigned long ip
       168 |   atomic64_t refcnt
       168 |     s64 counter
       176 |   atomic64_t usercnt
       176 |     s64 counter
       184 |   union bpf_map::(anonymous at ../include/linux/bpf.h:286:2) 
       184 |     struct work_struct work
       184 |       atomic_t data
       184 |         int counter
       188 |       struct list_head entry
       188 |         struct list_head * next
       192 |         struct list_head * prev
       196 |       work_func_t func
       200 |       struct lockdep_map lockdep_map
       200 |         struct lock_class_key * key
       204 |         struct lock_class *[2] class_cache
       212 |         const char * name
       216 |         u8 wait_type_outer
       217 |         u8 wait_type_inner
       218 |         u8 lock_type
       220 |         int cpu
       224 |         unsigned long ip
       184 |     struct callback_head rcu
       184 |       struct callback_head * next
       188 |       void (*)(struct callback_head *) func
       232 |   atomic64_t writecnt
       232 |     s64 counter
       240 |   struct bpf_map::(unnamed at ../include/linux/bpf.h:296:2) owner
       240 |     struct spinlock lock
       240 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       240 |         struct raw_spinlock rlock
       240 |           arch_spinlock_t raw_lock
       240 |             volatile unsigned int slock
       244 |           unsigned int magic
       248 |           unsigned int owner_cpu
       252 |           void * owner
       256 |           struct lockdep_map dep_map
       256 |             struct lock_class_key * key
       260 |             struct lock_class *[2] class_cache
       268 |             const char * name
       272 |             u8 wait_type_outer
       273 |             u8 wait_type_inner
       274 |             u8 lock_type
       276 |             int cpu
       280 |             unsigned long ip
       240 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       240 |           u8[16] __padding
       256 |           struct lockdep_map dep_map
       256 |             struct lock_class_key * key
       260 |             struct lock_class *[2] class_cache
       268 |             const char * name
       272 |             u8 wait_type_outer
       273 |             u8 wait_type_inner
       274 |             u8 lock_type
       276 |             int cpu
       280 |             unsigned long ip
       284 |     enum bpf_prog_type type
       288 |     bool jited
       289 |     bool xdp_has_frags
       292 |   bool bypass_spec_v1
       293 |   bool frozen
       294 |   bool free_after_mult_rcu_gp
       295 |   bool free_after_rcu_gp
       296 |   atomic64_t sleepable_refcnt
       296 |     s64 counter
       304 |   s64 * elem_count
           | [sizeof=312, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_offloaded_map
         0 |   struct bpf_map map
         0 |     const struct bpf_map_ops * ops
         4 |     struct bpf_map * inner_map_meta
         8 |     enum bpf_map_type map_type
        12 |     u32 key_size
        16 |     u32 value_size
        20 |     u32 max_entries
        24 |     u64 map_extra
        32 |     u32 map_flags
        36 |     u32 id
        40 |     struct btf_record * record
        44 |     int numa_node
        48 |     u32 btf_key_type_id
        52 |     u32 btf_value_type_id
        56 |     u32 btf_vmlinux_value_type_id
        60 |     struct btf * btf
        64 |     char[16] name
        80 |     struct mutex freeze_mutex
        80 |       atomic_t owner
        80 |         int counter
        84 |       struct raw_spinlock wait_lock
        84 |         arch_spinlock_t raw_lock
        84 |           volatile unsigned int slock
        88 |         unsigned int magic
        92 |         unsigned int owner_cpu
        96 |         void * owner
       100 |         struct lockdep_map dep_map
       100 |           struct lock_class_key * key
       104 |           struct lock_class *[2] class_cache
       112 |           const char * name
       116 |           u8 wait_type_outer
       117 |           u8 wait_type_inner
       118 |           u8 lock_type
       120 |           int cpu
       124 |           unsigned long ip
       128 |       struct list_head wait_list
       128 |         struct list_head * next
       132 |         struct list_head * prev
       136 |       void * magic
       140 |       struct lockdep_map dep_map
       140 |         struct lock_class_key * key
       144 |         struct lock_class *[2] class_cache
       152 |         const char * name
       156 |         u8 wait_type_outer
       157 |         u8 wait_type_inner
       158 |         u8 lock_type
       160 |         int cpu
       164 |         unsigned long ip
       168 |     atomic64_t refcnt
       168 |       s64 counter
       176 |     atomic64_t usercnt
       176 |       s64 counter
       184 |     union bpf_map::(anonymous at ../include/linux/bpf.h:286:2) 
       184 |       struct work_struct work
       184 |         atomic_t data
       184 |           int counter
       188 |         struct list_head entry
       188 |           struct list_head * next
       192 |           struct list_head * prev
       196 |         work_func_t func
       200 |         struct lockdep_map lockdep_map
       200 |           struct lock_class_key * key
       204 |           struct lock_class *[2] class_cache
       212 |           const char * name
       216 |           u8 wait_type_outer
       217 |           u8 wait_type_inner
       218 |           u8 lock_type
       220 |           int cpu
       224 |           unsigned long ip
       184 |       struct callback_head rcu
       184 |         struct callback_head * next
       188 |         void (*)(struct callback_head *) func
       232 |     atomic64_t writecnt
       232 |       s64 counter
       240 |     struct bpf_map::(unnamed at ../include/linux/bpf.h:296:2) owner
       240 |       struct spinlock lock
       240 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       240 |           struct raw_spinlock rlock
       240 |             arch_spinlock_t raw_lock
       240 |               volatile unsigned int slock
       244 |             unsigned int magic
       248 |             unsigned int owner_cpu
       252 |             void * owner
       256 |             struct lockdep_map dep_map
       256 |               struct lock_class_key * key
       260 |               struct lock_class *[2] class_cache
       268 |               const char * name
       272 |               u8 wait_type_outer
       273 |               u8 wait_type_inner
       274 |               u8 lock_type
       276 |               int cpu
       280 |               unsigned long ip
       240 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       240 |             u8[16] __padding
       256 |             struct lockdep_map dep_map
       256 |               struct lock_class_key * key
       260 |               struct lock_class *[2] class_cache
       268 |               const char * name
       272 |               u8 wait_type_outer
       273 |               u8 wait_type_inner
       274 |               u8 lock_type
       276 |               int cpu
       280 |               unsigned long ip
       284 |       enum bpf_prog_type type
       288 |       bool jited
       289 |       bool xdp_has_frags
       292 |     bool bypass_spec_v1
       293 |     bool frozen
       294 |     bool free_after_mult_rcu_gp
       295 |     bool free_after_rcu_gp
       296 |     atomic64_t sleepable_refcnt
       296 |       s64 counter
       304 |     s64 * elem_count
       312 |   struct net_device * netdev
       316 |   const struct bpf_map_dev_ops * dev_ops
       320 |   void * dev_priv
       324 |   struct list_head offloads
       324 |     struct list_head * next
       328 |     struct list_head * prev
           | [sizeof=336, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_func_proto::(anonymous at ../include/linux/bpf.h:812:3)
         0 |   enum bpf_arg_type arg1_type
         4 |   enum bpf_arg_type arg2_type
         8 |   enum bpf_arg_type arg3_type
        12 |   enum bpf_arg_type arg4_type
        16 |   enum bpf_arg_type arg5_type
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_func_proto::(anonymous at ../include/linux/bpf.h:822:3)
         0 |   u32 * arg1_btf_id
         4 |   u32 * arg2_btf_id
         8 |   u32 * arg3_btf_id
        12 |   u32 * arg4_btf_id
        16 |   u32 * arg5_btf_id
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_insn
         0 |   __u8 code
     1:0-3 |   __u8 dst_reg
     1:4-7 |   __u8 src_reg
         2 |   __s16 off
         4 |   __s32 imm
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct btf_func_model
         0 |   u8 ret_size
         1 |   u8 ret_flags
         2 |   u8 nr_args
         3 |   u8[12] arg_size
        15 |   u8[12] arg_flags
           | [sizeof=27, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_dispatcher_prog
         0 |   struct bpf_prog * prog
         4 |   struct refcount_struct users
         4 |     atomic_t refs
         4 |       int counter
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_jit_poke_descriptor::(unnamed at ../include/linux/bpf.h:1413:3)
         0 |   struct bpf_map * map
         4 |   u32 key
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct sock_filter
         0 |   __u16 code
         2 |   __u8 jt
         3 |   __u8 jf
         4 |   __u32 k
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_prog::(unnamed at ../include/linux/bpf.h:1568:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_prog::(unnamed at ../include/linux/bpf.h:1569:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_prog::(anonymous at ../include/linux/bpf.h:1568:3)
         0 |   struct bpf_prog::(unnamed at ../include/linux/bpf.h:1568:3) __empty_insns
         0 |   struct sock_filter[] insns
           | [sizeof=0, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_prog::(anonymous at ../include/linux/bpf.h:1569:3)
         0 |   struct bpf_prog::(unnamed at ../include/linux/bpf.h:1569:3) __empty_insnsi
         0 |   struct bpf_insn[] insnsi
           | [sizeof=0, align=4]

*** Dumping AST Record Layout
         0 | union bpf_link::(anonymous at ../include/linux/bpf.h:1590:2)
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_link
         0 |   atomic64_t refcnt
         0 |     s64 counter
         8 |   u32 id
        12 |   enum bpf_link_type type
        16 |   const struct bpf_link_ops * ops
        20 |   struct bpf_prog * prog
        24 |   union bpf_link::(anonymous at ../include/linux/bpf.h:1590:2) 
        24 |     struct callback_head rcu
        24 |       struct callback_head * next
        28 |       void (*)(struct callback_head *) func
        24 |     struct work_struct work
        24 |       atomic_t data
        24 |         int counter
        28 |       struct list_head entry
        28 |         struct list_head * next
        32 |         struct list_head * prev
        36 |       work_func_t func
        40 |       struct lockdep_map lockdep_map
        40 |         struct lock_class_key * key
        44 |         struct lock_class *[2] class_cache
        52 |         const char * name
        56 |         u8 wait_type_outer
        57 |         u8 wait_type_inner
        58 |         u8 lock_type
        60 |         int cpu
        64 |         unsigned long ip
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_tramp_link
         0 |   struct bpf_link link
         0 |     atomic64_t refcnt
         0 |       s64 counter
         8 |     u32 id
        12 |     enum bpf_link_type type
        16 |     const struct bpf_link_ops * ops
        20 |     struct bpf_prog * prog
        24 |     union bpf_link::(anonymous at ../include/linux/bpf.h:1590:2) 
        24 |       struct callback_head rcu
        24 |         struct callback_head * next
        28 |         void (*)(struct callback_head *) func
        24 |       struct work_struct work
        24 |         atomic_t data
        24 |           int counter
        28 |         struct list_head entry
        28 |           struct list_head * next
        32 |           struct list_head * prev
        36 |         work_func_t func
        40 |         struct lockdep_map lockdep_map
        40 |           struct lock_class_key * key
        44 |           struct lock_class *[2] class_cache
        52 |           const char * name
        56 |           u8 wait_type_outer
        57 |           u8 wait_type_inner
        58 |           u8 lock_type
        60 |           int cpu
        64 |           unsigned long ip
        72 |   struct hlist_node tramp_hlist
        72 |     struct hlist_node * next
        76 |     struct hlist_node ** pprev
        80 |   u64 cookie
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_array::(unnamed at ../include/linux/bpf.h:1895:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_array::(unnamed at ../include/linux/bpf.h:1896:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_array::(unnamed at ../include/linux/bpf.h:1897:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_array::(anonymous at ../include/linux/bpf.h:1895:3)
         0 |   struct bpf_array::(unnamed at ../include/linux/bpf.h:1895:3) __empty_value
         0 |   char[] value
           | [sizeof=0, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_array::(anonymous at ../include/linux/bpf.h:1896:3)
         0 |   struct bpf_array::(unnamed at ../include/linux/bpf.h:1896:3) __empty_ptrs
         0 |   void *[] ptrs
           | [sizeof=0, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_array::(anonymous at ../include/linux/bpf.h:1897:3)
         0 |   struct bpf_array::(unnamed at ../include/linux/bpf.h:1897:3) __empty_pptrs
         0 |   void *[] pptrs
           | [sizeof=0, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_prog_array::(anonymous at ../include/linux/bpf.h:1997:2)
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union bpf_prog_array_item::(anonymous at ../include/linux/bpf.h:1990:2)
         0 |   struct bpf_cgroup_storage *[2] cgroup_storage
         0 |   u64 bpf_cookie
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_prog_array_item
         0 |   struct bpf_prog * prog
         8 |   union bpf_prog_array_item::(anonymous at ../include/linux/bpf.h:1990:2) 
         8 |     struct bpf_cgroup_storage *[2] cgroup_storage
         8 |     u64 bpf_cookie
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_prog_array_hdr
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union bpf_prog_array::(anonymous at ../include/linux/bpf.h:1997:2)
         0 |   struct bpf_prog_array::(anonymous at ../include/linux/bpf.h:1997:2) 
         0 |     struct callback_head rcu
         0 |       struct callback_head * next
         4 |       void (*)(struct callback_head *) func
         0 |   struct bpf_prog_array_hdr hdr
         0 |     struct callback_head rcu
         0 |       struct callback_head * next
         4 |       void (*)(struct callback_head *) func
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_run_ctx
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct bpf_trace_run_ctx
         0 |   struct bpf_run_ctx run_ctx
         0 |   u64 bpf_cookie
         8 |   bool is_uprobe
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | union bpf_prog::(anonymous at ../include/linux/bpf.h:1567:2)
         0 |   struct bpf_prog::(anonymous at ../include/linux/bpf.h:1568:3) 
         0 |     struct bpf_prog::(unnamed at ../include/linux/bpf.h:1568:3) __empty_insns
         0 |     struct sock_filter[] insns
         0 |   struct bpf_prog::(anonymous at ../include/linux/bpf.h:1569:3) 
         0 |     struct bpf_prog::(unnamed at ../include/linux/bpf.h:1569:3) __empty_insnsi
         0 |     struct bpf_insn[] insnsi
           | [sizeof=0, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_prog
         0 |   u16 pages
     2:0-0 |   u16 jited
     2:1-1 |   u16 jit_requested
     2:2-2 |   u16 gpl_compatible
     2:3-3 |   u16 cb_access
     2:4-4 |   u16 dst_needed
     2:5-5 |   u16 blinding_requested
     2:6-6 |   u16 blinded
     2:7-7 |   u16 is_func
     3:0-0 |   u16 kprobe_override
     3:1-1 |   u16 has_callchain_buf
     3:2-2 |   u16 enforce_expected_attach_type
     3:3-3 |   u16 call_get_stack
     3:4-4 |   u16 call_get_func_ip
     3:5-5 |   u16 tstamp_type_access
     3:6-6 |   u16 sleepable
         4 |   enum bpf_prog_type type
         8 |   enum bpf_attach_type expected_attach_type
        12 |   u32 len
        16 |   u32 jited_len
        20 |   u8[8] tag
        28 |   struct bpf_prog_stats * stats
        32 |   int * active
        36 |   unsigned int (*)(const void *, const struct bpf_insn *) bpf_func
        40 |   struct bpf_prog_aux * aux
        44 |   struct sock_fprog_kern * orig_prog
        48 |   union bpf_prog::(anonymous at ../include/linux/bpf.h:1567:2) 
        48 |     struct bpf_prog::(anonymous at ../include/linux/bpf.h:1568:3) 
        48 |       struct bpf_prog::(unnamed at ../include/linux/bpf.h:1568:3) __empty_insns
        48 |       struct sock_filter[] insns
        48 |     struct bpf_prog::(anonymous at ../include/linux/bpf.h:1569:3) 
        48 |       struct bpf_prog::(unnamed at ../include/linux/bpf.h:1569:3) __empty_insnsi
        48 |       struct bpf_insn[] insnsi
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_token
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        48 |   atomic64_t refcnt
        48 |     s64 counter
        56 |   struct user_namespace * userns
        64 |   u64 allowed_cmds
        72 |   u64 allowed_maps
        80 |   u64 allowed_progs
        88 |   u64 allowed_attachs
           | [sizeof=96, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_ctx_arg_aux
         0 |   u32 offset
         4 |   enum bpf_reg_type reg_type
         8 |   struct btf * btf
        12 |   u32 btf_id
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union bpf_iter_meta::(anonymous at ../include/linux/bpf.h:2437:2)
         0 |   struct seq_file * seq
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union bpf_iter__bpf_map_elem::(anonymous at ../include/linux/bpf.h:2443:2)
         0 |   struct bpf_iter_meta * meta
    0:0-63 |   __u64 
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct bpf_insn_access_aux::(anonymous at ../include/linux/bpf.h:924:3)
         0 |   struct btf * btf
         4 |   u32 btf_id
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union bpf_insn_access_aux::(anonymous at ../include/linux/bpf.h:922:2)
         0 |   int ctx_field_size
         0 |   struct bpf_insn_access_aux::(anonymous at ../include/linux/bpf.h:924:3) 
         0 |     struct btf * btf
         4 |     u32 btf_id
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct bpf_insn_access_aux
         0 |   enum bpf_reg_type reg_type
         4 |   union bpf_insn_access_aux::(anonymous at ../include/linux/bpf.h:922:2) 
         4 |     int ctx_field_size
         4 |     struct bpf_insn_access_aux::(anonymous at ../include/linux/bpf.h:924:3) 
         4 |       struct btf * btf
         8 |       u32 btf_id
        12 |   struct bpf_verifier_log * log
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct user_namespace
         0 |   struct uid_gid_map uid_map
         0 |     u32 nr_extents
         4 |     union uid_gid_map::(anonymous at ../include/linux/user_namespace.h:25:2) 
         4 |       struct uid_gid_extent[5] extent
         4 |       struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3) 
         4 |         struct uid_gid_extent * forward
         8 |         struct uid_gid_extent * reverse
        64 |   struct uid_gid_map gid_map
        64 |     u32 nr_extents
        68 |     union uid_gid_map::(anonymous at ../include/linux/user_namespace.h:25:2) 
        68 |       struct uid_gid_extent[5] extent
        68 |       struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3) 
        68 |         struct uid_gid_extent * forward
        72 |         struct uid_gid_extent * reverse
       128 |   struct uid_gid_map projid_map
       128 |     u32 nr_extents
       132 |     union uid_gid_map::(anonymous at ../include/linux/user_namespace.h:25:2) 
       132 |       struct uid_gid_extent[5] extent
       132 |       struct uid_gid_map::(anonymous at ../include/linux/user_namespace.h:27:3) 
       132 |         struct uid_gid_extent * forward
       136 |         struct uid_gid_extent * reverse
       192 |   struct user_namespace * parent
       196 |   int level
       200 |   kuid_t owner
       200 |     uid_t val
       204 |   kgid_t group
       204 |     gid_t val
       208 |   struct ns_common ns
       208 |     struct dentry * stashed
       212 |     const struct proc_ns_operations * ops
       216 |     unsigned int inum
       220 |     struct refcount_struct count
       220 |       atomic_t refs
       220 |         int counter
       224 |   unsigned long flags
       228 |   bool parent_could_setfcap
       232 |   struct list_head keyring_name_list
       232 |     struct list_head * next
       236 |     struct list_head * prev
       240 |   struct key * user_keyring_register
       244 |   struct rw_semaphore keyring_sem
       244 |     atomic_t count
       244 |       int counter
       248 |     atomic_t owner
       248 |       int counter
       252 |     struct raw_spinlock wait_lock
       252 |       arch_spinlock_t raw_lock
       252 |         volatile unsigned int slock
       256 |       unsigned int magic
       260 |       unsigned int owner_cpu
       264 |       void * owner
       268 |       struct lockdep_map dep_map
       268 |         struct lock_class_key * key
       272 |         struct lock_class *[2] class_cache
       280 |         const char * name
       284 |         u8 wait_type_outer
       285 |         u8 wait_type_inner
       286 |         u8 lock_type
       288 |         int cpu
       292 |         unsigned long ip
       296 |     struct list_head wait_list
       296 |       struct list_head * next
       300 |       struct list_head * prev
       304 |     void * magic
       308 |     struct lockdep_map dep_map
       308 |       struct lock_class_key * key
       312 |       struct lock_class *[2] class_cache
       320 |       const char * name
       324 |       u8 wait_type_outer
       325 |       u8 wait_type_inner
       326 |       u8 lock_type
       328 |       int cpu
       332 |       unsigned long ip
       336 |   struct key * persistent_keyring_register
       340 |   struct work_struct work
       340 |     atomic_t data
       340 |       int counter
       344 |     struct list_head entry
       344 |       struct list_head * next
       348 |       struct list_head * prev
       352 |     work_func_t func
       356 |     struct lockdep_map lockdep_map
       356 |       struct lock_class_key * key
       360 |       struct lock_class *[2] class_cache
       368 |       const char * name
       372 |       u8 wait_type_outer
       373 |       u8 wait_type_inner
       374 |       u8 lock_type
       376 |       int cpu
       380 |       unsigned long ip
       384 |   struct ctl_table_set set
       384 |     int (*)(struct ctl_table_set *) is_seen
       388 |     struct ctl_dir dir
       388 |       struct ctl_table_header header
       388 |         union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2) 
       388 |           struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
       388 |             struct ctl_table * ctl_table
       392 |             int ctl_table_size
       396 |             int used
       400 |             int count
       404 |             int nreg
       388 |           struct callback_head rcu
       388 |             struct callback_head * next
       392 |             void (*)(struct callback_head *) func
       408 |         struct completion * unregistering
       412 |         const struct ctl_table * ctl_table_arg
       416 |         struct ctl_table_root * root
       420 |         struct ctl_table_set * set
       424 |         struct ctl_dir * parent
       428 |         struct ctl_node * node
       432 |         struct hlist_head inodes
       432 |           struct hlist_node * first
       436 |         enum (unnamed enum at ../include/linux/sysctl.h:187:2) type
       440 |       struct rb_root root
       440 |         struct rb_node * rb_node
       444 |   struct ctl_table_header * sysctls
       448 |   struct ucounts * ucounts
       452 |   long[10] ucount_max
       492 |   long[4] rlimit_max
           | [sizeof=508, align=4]

*** Dumping AST Record Layout
         0 | struct timezone
         0 |   int tz_minuteswest
         4 |   int tz_dsttime
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct compat_msghdr
         0 |   compat_uptr_t msg_name
         4 |   compat_int_t msg_namelen
         8 |   compat_uptr_t msg_iov
        12 |   compat_size_t msg_iovlen
        16 |   compat_uptr_t msg_control
        20 |   compat_size_t msg_controllen
        24 |   compat_uint_t msg_flags
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct __kernel_sockaddr_storage
         0 |   union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         0 |     struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         0 |       __kernel_sa_family_t ss_family
         2 |       char[126] __data
         0 |     void * __align
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct compat_group_filter::(anonymous at ../include/net/compat.h:74:3)
         0 |   __u32 gf_interface_aux
         4 |   struct __kernel_sockaddr_storage gf_group_aux
         4 |     union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |       struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |         __kernel_sa_family_t ss_family
         6 |         char[126] __data
         4 |       void * __align
       132 |   __u32 gf_fmode_aux
       136 |   __u32 gf_numsrc_aux
       140 |   struct __kernel_sockaddr_storage[1] gf_slist
           | [sizeof=268, align=4]

*** Dumping AST Record Layout
         0 | struct compat_group_filter::(anonymous at ../include/net/compat.h:83:3)
         0 |   __u32 gf_interface
         4 |   struct __kernel_sockaddr_storage gf_group
         4 |     union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |       struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |         __kernel_sa_family_t ss_family
         6 |         char[126] __data
         4 |       void * __align
       132 |   __u32 gf_fmode
       136 |   __u32 gf_numsrc
       140 |   struct __kernel_sockaddr_storage[] gf_slist_flex
           | [sizeof=140, align=4]

*** Dumping AST Record Layout
         0 | union compat_group_filter::(anonymous at ../include/net/compat.h:73:2)
         0 |   struct compat_group_filter::(anonymous at ../include/net/compat.h:74:3) 
         0 |     __u32 gf_interface_aux
         4 |     struct __kernel_sockaddr_storage gf_group_aux
         4 |       union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |         struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |           __kernel_sa_family_t ss_family
         6 |           char[126] __data
         4 |         void * __align
       132 |     __u32 gf_fmode_aux
       136 |     __u32 gf_numsrc_aux
       140 |     struct __kernel_sockaddr_storage[1] gf_slist
         0 |   struct compat_group_filter::(anonymous at ../include/net/compat.h:83:3) 
         0 |     __u32 gf_interface
         4 |     struct __kernel_sockaddr_storage gf_group
         4 |       union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |         struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |           __kernel_sa_family_t ss_family
         6 |           char[126] __data
         4 |         void * __align
       132 |     __u32 gf_fmode
       136 |     __u32 gf_numsrc
       140 |     struct __kernel_sockaddr_storage[] gf_slist_flex
           | [sizeof=268, align=4]

*** Dumping AST Record Layout
         0 | struct scm_creds
         0 |   u32 pid
         4 |   kuid_t uid
         4 |     uid_t val
         8 |   kgid_t gid
         8 |     gid_t val
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct scm_cookie
         0 |   struct pid * pid
         4 |   struct scm_fp_list * fp
         8 |   struct scm_creds creds
         8 |     u32 pid
        12 |     kuid_t uid
        12 |       uid_t val
        16 |     kgid_t gid
        16 |       gid_t val
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct socket_wq
         0 |   struct wait_queue_head wait
         0 |     struct spinlock lock
         0 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |         struct raw_spinlock rlock
         0 |           arch_spinlock_t raw_lock
         0 |             volatile unsigned int slock
         4 |           unsigned int magic
         8 |           unsigned int owner_cpu
        12 |           void * owner
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
         0 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |           u8[16] __padding
        16 |           struct lockdep_map dep_map
        16 |             struct lock_class_key * key
        20 |             struct lock_class *[2] class_cache
        28 |             const char * name
        32 |             u8 wait_type_outer
        33 |             u8 wait_type_inner
        34 |             u8 lock_type
        36 |             int cpu
        40 |             unsigned long ip
        44 |     struct list_head head
        44 |       struct list_head * next
        48 |       struct list_head * prev
        52 |   struct fasync_struct * fasync_list
        56 |   unsigned long flags
        60 |   struct callback_head rcu
        60 |     struct callback_head * next
        64 |     void (*)(struct callback_head *) func
           | [sizeof=68, align=4]

*** Dumping AST Record Layout
         0 | struct socket
         0 |   socket_state state
         4 |   short type
         8 |   unsigned long flags
        12 |   struct file * file
        16 |   struct sock * sk
        20 |   const struct proto_ops * ops
        24 |   struct socket_wq wq
        24 |     struct wait_queue_head wait
        24 |       struct spinlock lock
        24 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        24 |           struct raw_spinlock rlock
        24 |             arch_spinlock_t raw_lock
        24 |               volatile unsigned int slock
        28 |             unsigned int magic
        32 |             unsigned int owner_cpu
        36 |             void * owner
        40 |             struct lockdep_map dep_map
        40 |               struct lock_class_key * key
        44 |               struct lock_class *[2] class_cache
        52 |               const char * name
        56 |               u8 wait_type_outer
        57 |               u8 wait_type_inner
        58 |               u8 lock_type
        60 |               int cpu
        64 |               unsigned long ip
        24 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        24 |             u8[16] __padding
        40 |             struct lockdep_map dep_map
        40 |               struct lock_class_key * key
        44 |               struct lock_class *[2] class_cache
        52 |               const char * name
        56 |               u8 wait_type_outer
        57 |               u8 wait_type_inner
        58 |               u8 lock_type
        60 |               int cpu
        64 |               unsigned long ip
        68 |       struct list_head head
        68 |         struct list_head * next
        72 |         struct list_head * prev
        76 |     struct fasync_struct * fasync_list
        80 |     unsigned long flags
        84 |     struct callback_head rcu
        84 |       struct callback_head * next
        88 |       void (*)(struct callback_head *) func
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct compat_cmsghdr
         0 |   compat_size_t cmsg_len
         4 |   compat_int_t cmsg_level
         8 |   compat_int_t cmsg_type
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ucred
         0 |   __u32 pid
         4 |   __u32 uid
         8 |   __u32 gid
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct netlink_kernel_cfg
         0 |   unsigned int groups
         4 |   unsigned int flags
         8 |   void (*)(struct sk_buff *) input
        12 |   int (*)(struct net *, int) bind
        16 |   void (*)(struct net *, int) unbind
        20 |   void (*)(struct sock *, unsigned long *) release
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct nlmsghdr
         0 |   __u32 nlmsg_len
         4 |   __u16 nlmsg_type
         6 |   __u16 nlmsg_flags
         8 |   __u32 nlmsg_seq
        12 |   __u32 nlmsg_pid
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct netlink_dump_control
         0 |   int (*)(struct netlink_callback *) start
         4 |   int (*)(struct sk_buff *, struct netlink_callback *) dump
         8 |   int (*)(struct netlink_callback *) done
        12 |   struct netlink_ext_ack * extack
        16 |   void * data
        20 |   struct module * module
        24 |   u32 min_dump_alloc
        28 |   int flags
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ethtool_tcpip4_spec
         0 |   __be32 ip4src
         4 |   __be32 ip4dst
         8 |   __be16 psrc
        10 |   __be16 pdst
        12 |   __u8 tos
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ethtool_get_features_block
         0 |   __u32 available
         4 |   __u32 requested
         8 |   __u32 active
        12 |   __u32 never_changed
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ethtool_set_features_block
         0 |   __u32 valid
         4 |   __u32 requested
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ethtool_link_settings
         0 |   __u32 cmd
         4 |   __u32 speed
         8 |   __u8 duplex
         9 |   __u8 port
        10 |   __u8 phy_address
        11 |   __u8 autoneg
        12 |   __u8 mdio_support
        13 |   __u8 eth_tp_mdix
        14 |   __u8 eth_tp_mdix_ctrl
        15 |   __s8 link_mode_masks_nwords
        16 |   __u8 transceiver
        17 |   __u8 master_slave_cfg
        18 |   __u8 master_slave_state
        19 |   __u8 rate_matching
        20 |   __u32[7] reserved
        48 |   __u32[] link_mode_masks
           | [sizeof=48, align=4]

*** Dumping AST Record Layout
         0 | struct ethtool_eth_mac_stats::(anonymous at ../include/linux/ethtool.h:379:2)
         0 |   u64 FramesTransmittedOK
         8 |   u64 SingleCollisionFrames
        16 |   u64 MultipleCollisionFrames
        24 |   u64 FramesReceivedOK
        32 |   u64 FrameCheckSequenceErrors
        40 |   u64 AlignmentErrors
        48 |   u64 OctetsTransmittedOK
        56 |   u64 FramesWithDeferredXmissions
        64 |   u64 LateCollisions
        72 |   u64 FramesAbortedDueToXSColls
        80 |   u64 FramesLostDueToIntMACXmitError
        88 |   u64 CarrierSenseErrors
        96 |   u64 OctetsReceivedOK
       104 |   u64 FramesLostDueToIntMACRcvError
       112 |   u64 MulticastFramesXmittedOK
       120 |   u64 BroadcastFramesXmittedOK
       128 |   u64 FramesWithExcessiveDeferral
       136 |   u64 MulticastFramesReceivedOK
       144 |   u64 BroadcastFramesReceivedOK
       152 |   u64 InRangeLengthErrors
       160 |   u64 OutOfRangeLengthField
       168 |   u64 FrameTooLongErrors
           | [sizeof=176, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_eth_phy_stats::(anonymous at ../include/linux/ethtool.h:410:2)
         0 |   u64 SymbolErrorDuringCarrier
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_eth_ctrl_stats::(anonymous at ../include/linux/ethtool.h:420:2)
         0 |   u64 MACControlFramesTransmitted
         8 |   u64 MACControlFramesReceived
        16 |   u64 UnsupportedOpcodesReceived
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_pause_stats::(anonymous at ../include/linux/ethtool.h:445:2)
         0 |   u64 tx_pause_frames
         8 |   u64 rx_pause_frames
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_fec_stat
         0 |   u64 total
         8 |   u64[8] lanes
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_rmon_stats::(anonymous at ../include/linux/ethtool.h:518:2)
         0 |   u64 undersize_pkts
         8 |   u64 oversize_pkts
        16 |   u64 fragments
        24 |   u64 jabbers
        32 |   u64[10] hist
       112 |   u64[10] hist_tx
           | [sizeof=192, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_ts_stats::(anonymous at ../include/linux/ethtool.h:542:2)
         0 |   u64 pkts
         8 |   u64 lost
        16 |   u64 err
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ethtool_ts_stats::(unnamed at ../include/linux/ethtool.h:542:2)
         0 |   u64 pkts
         8 |   u64 lost
        16 |   u64 err
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | union ethtool_ts_stats::(anonymous at ../include/linux/ethtool.h:542:2)
         0 |   struct ethtool_ts_stats::(anonymous at ../include/linux/ethtool.h:542:2) 
         0 |     u64 pkts
         8 |     u64 lost
        16 |     u64 err
         0 |   struct ethtool_ts_stats::(unnamed at ../include/linux/ethtool.h:542:2) tx_stats
         0 |     u64 pkts
         8 |     u64 lost
        16 |     u64 err
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct sockaddr_in6
         0 |   unsigned short sin6_family
         2 |   __be16 sin6_port
         4 |   __be32 sin6_flowinfo
         8 |   struct in6_addr sin6_addr
         8 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         8 |       __u8[16] u6_addr8
         8 |       __be16[8] u6_addr16
         8 |       __be32[4] u6_addr32
        24 |   __u32 sin6_scope_id
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct ipv6_rt_hdr
         0 |   __u8 nexthdr
         1 |   __u8 hdrlen
         2 |   __u8 type
         3 |   __u8 segments_left
           | [sizeof=4, align=1]

*** Dumping AST Record Layout
         0 | struct ipv6hdr::(anonymous at ../include/uapi/linux/ipv6.h:134:2)
         0 |   struct in6_addr saddr
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
        16 |   struct in6_addr daddr
        16 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |       __u8[16] u6_addr8
        16 |       __be16[8] u6_addr16
        16 |       __be32[4] u6_addr32
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct minmax_sample
         0 |   u32 t
         4 |   u32 v
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct dql
         0 |   unsigned int num_queued
         4 |   unsigned int adj_limit
         8 |   unsigned int last_obj_cnt
        12 |   unsigned short stall_thrs
        16 |   unsigned long history_head
        20 |   unsigned long[4] history
        36 |   unsigned int limit
        40 |   unsigned int num_completed
        44 |   unsigned int prev_ovlimit
        48 |   unsigned int prev_num_queued
        52 |   unsigned int prev_last_obj_cnt
        56 |   unsigned int lowest_slack
        60 |   unsigned long slack_start_time
        64 |   unsigned int max_limit
        68 |   unsigned int min_limit
        72 |   unsigned int slack_hold_time
        76 |   unsigned short stall_max
        80 |   unsigned long last_reap
        84 |   unsigned long stall_cnt
           | [sizeof=88, align=4]

*** Dumping AST Record Layout
         0 | struct unix_table
         0 |   spinlock_t * locks
         4 |   struct hlist_head * buckets
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct frag_v4_compare_key
         0 |   __be32 saddr
         4 |   __be32 daddr
         8 |   u32 user
        12 |   u32 vif
        16 |   __be16 id
        18 |   u16 protocol
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct frag_v6_compare_key
         0 |   struct in6_addr saddr
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
        16 |   struct in6_addr daddr
        16 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |       __u8[16] u6_addr8
        16 |       __be16[8] u6_addr16
        16 |       __be32[4] u6_addr32
        32 |   u32 user
        36 |   __be32 id
        40 |   u32 iif
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | union inet_frag_queue::(unnamed at ../include/net/inet_frag.h:87:2)
         0 |   struct frag_v4_compare_key v4
         0 |     __be32 saddr
         4 |     __be32 daddr
         8 |     u32 user
        12 |     u32 vif
        16 |     __be16 id
        18 |     u16 protocol
         0 |   struct frag_v6_compare_key v6
         0 |     struct in6_addr saddr
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |     struct in6_addr daddr
        16 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |         __u8[16] u6_addr8
        16 |         __be16[8] u6_addr16
        16 |         __be32[4] u6_addr32
        32 |     u32 user
        36 |     __be32 id
        40 |     u32 iif
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct inet_frag_queue
         0 |   struct rhash_head node
         0 |     struct rhash_head * next
         4 |   union inet_frag_queue::(unnamed at ../include/net/inet_frag.h:87:2) key
         4 |     struct frag_v4_compare_key v4
         4 |       __be32 saddr
         8 |       __be32 daddr
        12 |       u32 user
        16 |       u32 vif
        20 |       __be16 id
        22 |       u16 protocol
         4 |     struct frag_v6_compare_key v6
         4 |       struct in6_addr saddr
         4 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         4 |           __u8[16] u6_addr8
         4 |           __be16[8] u6_addr16
         4 |           __be32[4] u6_addr32
        20 |       struct in6_addr daddr
        20 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        20 |           __u8[16] u6_addr8
        20 |           __be16[8] u6_addr16
        20 |           __be32[4] u6_addr32
        36 |       u32 user
        40 |       __be32 id
        44 |       u32 iif
        48 |   struct timer_list timer
        48 |     struct hlist_node entry
        48 |       struct hlist_node * next
        52 |       struct hlist_node ** pprev
        56 |     unsigned long expires
        60 |     void (*)(struct timer_list *) function
        64 |     u32 flags
        68 |     struct lockdep_map lockdep_map
        68 |       struct lock_class_key * key
        72 |       struct lock_class *[2] class_cache
        80 |       const char * name
        84 |       u8 wait_type_outer
        85 |       u8 wait_type_inner
        86 |       u8 lock_type
        88 |       int cpu
        92 |       unsigned long ip
        96 |   struct spinlock lock
        96 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        96 |       struct raw_spinlock rlock
        96 |         arch_spinlock_t raw_lock
        96 |           volatile unsigned int slock
       100 |         unsigned int magic
       104 |         unsigned int owner_cpu
       108 |         void * owner
       112 |         struct lockdep_map dep_map
       112 |           struct lock_class_key * key
       116 |           struct lock_class *[2] class_cache
       124 |           const char * name
       128 |           u8 wait_type_outer
       129 |           u8 wait_type_inner
       130 |           u8 lock_type
       132 |           int cpu
       136 |           unsigned long ip
        96 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        96 |         u8[16] __padding
       112 |         struct lockdep_map dep_map
       112 |           struct lock_class_key * key
       116 |           struct lock_class *[2] class_cache
       124 |           const char * name
       128 |           u8 wait_type_outer
       129 |           u8 wait_type_inner
       130 |           u8 lock_type
       132 |           int cpu
       136 |           unsigned long ip
       140 |   struct refcount_struct refcnt
       140 |     atomic_t refs
       140 |       int counter
       144 |   struct rb_root rb_fragments
       144 |     struct rb_node * rb_node
       148 |   struct sk_buff * fragments_tail
       152 |   struct sk_buff * last_run_head
       160 |   ktime_t stamp
       168 |   int len
       172 |   int meat
       176 |   u8 tstamp_type
       177 |   __u8 flags
       178 |   u16 max_size
       180 |   struct fqdir * fqdir
       184 |   struct callback_head rcu
       184 |     struct callback_head * next
       188 |     void (*)(struct callback_head *) func
           | [sizeof=192, align=8]

*** Dumping AST Record Layout
         0 | struct dst_ops
         0 |   unsigned short family
         4 |   unsigned int gc_thresh
         8 |   void (*)(struct dst_ops *) gc
        12 |   struct dst_entry *(*)(struct dst_entry *, __u32) check
        16 |   unsigned int (*)(const struct dst_entry *) default_advmss
        20 |   unsigned int (*)(const struct dst_entry *) mtu
        24 |   u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
        28 |   void (*)(struct dst_entry *) destroy
        32 |   void (*)(struct dst_entry *, struct net_device *) ifdown
        36 |   void (*)(struct sock *, struct dst_entry *) negative_advice
        40 |   void (*)(struct sk_buff *) link_failure
        44 |   void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
        48 |   void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
        52 |   int (*)(struct net *, struct sock *, struct sk_buff *) local_out
        56 |   struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
        60 |   void (*)(const struct dst_entry *, const void *) confirm_neigh
        64 |   struct kmem_cache * kmem_cachep
        72 |   struct percpu_counter pcpuc_entries
        72 |     s64 count
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct netns_sysctl_lowpan
         0 |   struct ctl_table_header * frags_hdr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct in_addr
         0 |   __be32 s_addr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ip_msfilter::(unnamed at ../include/uapi/linux/in.h:204:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct group_filter::(anonymous at ../include/uapi/linux/in.h:225:3)
         0 |   __u32 gf_interface_aux
         4 |   struct __kernel_sockaddr_storage gf_group_aux
         4 |     union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |       struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |         __kernel_sa_family_t ss_family
         6 |         char[126] __data
         4 |       void * __align
       132 |   __u32 gf_fmode_aux
       136 |   __u32 gf_numsrc_aux
       140 |   struct __kernel_sockaddr_storage[1] gf_slist
           | [sizeof=268, align=4]

*** Dumping AST Record Layout
         0 | struct group_filter::(anonymous at ../include/uapi/linux/in.h:232:3)
         0 |   __u32 gf_interface
         4 |   struct __kernel_sockaddr_storage gf_group
         4 |     union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |       struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |         __kernel_sa_family_t ss_family
         6 |         char[126] __data
         4 |       void * __align
       132 |   __u32 gf_fmode
       136 |   __u32 gf_numsrc
       140 |   struct __kernel_sockaddr_storage[] gf_slist_flex
           | [sizeof=140, align=4]

*** Dumping AST Record Layout
         0 | union group_filter::(anonymous at ../include/uapi/linux/in.h:224:2)
         0 |   struct group_filter::(anonymous at ../include/uapi/linux/in.h:225:3) 
         0 |     __u32 gf_interface_aux
         4 |     struct __kernel_sockaddr_storage gf_group_aux
         4 |       union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |         struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |           __kernel_sa_family_t ss_family
         6 |           char[126] __data
         4 |         void * __align
       132 |     __u32 gf_fmode_aux
       136 |     __u32 gf_numsrc_aux
       140 |     struct __kernel_sockaddr_storage[1] gf_slist
         0 |   struct group_filter::(anonymous at ../include/uapi/linux/in.h:232:3) 
         0 |     __u32 gf_interface
         4 |     struct __kernel_sockaddr_storage gf_group
         4 |       union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         4 |         struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         4 |           __kernel_sa_family_t ss_family
         6 |           char[126] __data
         4 |         void * __align
       132 |     __u32 gf_fmode
       136 |     __u32 gf_numsrc
       140 |     struct __kernel_sockaddr_storage[] gf_slist_flex
           | [sizeof=268, align=4]

*** Dumping AST Record Layout
         0 | xfrm_address_t
         0 |   __be32 a4
         0 |   __be32[4] a6
         0 |   struct in6_addr in6
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_id
         0 |   xfrm_address_t daddr
         0 |     __be32 a4
         0 |     __be32[4] a6
         0 |     struct in6_addr in6
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |   __be32 spi
        20 |   __u8 proto
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_selector
         0 |   xfrm_address_t daddr
         0 |     __be32 a4
         0 |     __be32[4] a6
         0 |     struct in6_addr in6
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |   xfrm_address_t saddr
        16 |     __be32 a4
        16 |     __be32[4] a6
        16 |     struct in6_addr in6
        16 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |         __u8[16] u6_addr8
        16 |         __be16[8] u6_addr16
        16 |         __be32[4] u6_addr32
        32 |   __be16 dport
        34 |   __be16 dport_mask
        36 |   __be16 sport
        38 |   __be16 sport_mask
        40 |   __u16 family
        42 |   __u8 prefixlen_d
        43 |   __u8 prefixlen_s
        44 |   __u8 proto
        48 |   int ifindex
        52 |   __kernel_uid32_t user
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_usersa_id
         0 |   xfrm_address_t daddr
         0 |     __be32 a4
         0 |     __be32[4] a6
         0 |     struct in6_addr in6
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |   __be32 spi
        20 |   __u16 family
        22 |   __u8 proto
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_lifetime_cfg
         0 |   __u64 soft_byte_limit
         8 |   __u64 hard_byte_limit
        16 |   __u64 soft_packet_limit
        24 |   __u64 hard_packet_limit
        32 |   __u64 soft_add_expires_seconds
        40 |   __u64 hard_add_expires_seconds
        48 |   __u64 soft_use_expires_seconds
        56 |   __u64 hard_use_expires_seconds
           | [sizeof=64, align=8]

*** Dumping AST Record Layout
         0 | struct xfrm_lifetime_cur
         0 |   __u64 bytes
         8 |   __u64 packets
        16 |   __u64 add_time
        24 |   __u64 use_time
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct xfrm_stats
         0 |   __u32 replay_window
         4 |   __u32 replay
         8 |   __u32 integrity_failed
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_usersa_info
         0 |   struct xfrm_selector sel
         0 |     xfrm_address_t daddr
         0 |       __be32 a4
         0 |       __be32[4] a6
         0 |       struct in6_addr in6
         0 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |           __u8[16] u6_addr8
         0 |           __be16[8] u6_addr16
         0 |           __be32[4] u6_addr32
        16 |     xfrm_address_t saddr
        16 |       __be32 a4
        16 |       __be32[4] a6
        16 |       struct in6_addr in6
        16 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |           __u8[16] u6_addr8
        16 |           __be16[8] u6_addr16
        16 |           __be32[4] u6_addr32
        32 |     __be16 dport
        34 |     __be16 dport_mask
        36 |     __be16 sport
        38 |     __be16 sport_mask
        40 |     __u16 family
        42 |     __u8 prefixlen_d
        43 |     __u8 prefixlen_s
        44 |     __u8 proto
        48 |     int ifindex
        52 |     __kernel_uid32_t user
        56 |   struct xfrm_id id
        56 |     xfrm_address_t daddr
        56 |       __be32 a4
        56 |       __be32[4] a6
        56 |       struct in6_addr in6
        56 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |           __u8[16] u6_addr8
        56 |           __be16[8] u6_addr16
        56 |           __be32[4] u6_addr32
        72 |     __be32 spi
        76 |     __u8 proto
        80 |   xfrm_address_t saddr
        80 |     __be32 a4
        80 |     __be32[4] a6
        80 |     struct in6_addr in6
        80 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        80 |         __u8[16] u6_addr8
        80 |         __be16[8] u6_addr16
        80 |         __be32[4] u6_addr32
        96 |   struct xfrm_lifetime_cfg lft
        96 |     __u64 soft_byte_limit
       104 |     __u64 hard_byte_limit
       112 |     __u64 soft_packet_limit
       120 |     __u64 hard_packet_limit
       128 |     __u64 soft_add_expires_seconds
       136 |     __u64 hard_add_expires_seconds
       144 |     __u64 soft_use_expires_seconds
       152 |     __u64 hard_use_expires_seconds
       160 |   struct xfrm_lifetime_cur curlft
       160 |     __u64 bytes
       168 |     __u64 packets
       176 |     __u64 add_time
       184 |     __u64 use_time
       192 |   struct xfrm_stats stats
       192 |     __u32 replay_window
       196 |     __u32 replay
       200 |     __u32 integrity_failed
       204 |   __u32 seq
       208 |   __u32 reqid
       212 |   __u16 family
       214 |   __u8 mode
       215 |   __u8 replay_window
       216 |   __u8 flags
           | [sizeof=224, align=8]

*** Dumping AST Record Layout
         0 | struct xfrm_userpolicy_info
         0 |   struct xfrm_selector sel
         0 |     xfrm_address_t daddr
         0 |       __be32 a4
         0 |       __be32[4] a6
         0 |       struct in6_addr in6
         0 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |           __u8[16] u6_addr8
         0 |           __be16[8] u6_addr16
         0 |           __be32[4] u6_addr32
        16 |     xfrm_address_t saddr
        16 |       __be32 a4
        16 |       __be32[4] a6
        16 |       struct in6_addr in6
        16 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |           __u8[16] u6_addr8
        16 |           __be16[8] u6_addr16
        16 |           __be32[4] u6_addr32
        32 |     __be16 dport
        34 |     __be16 dport_mask
        36 |     __be16 sport
        38 |     __be16 sport_mask
        40 |     __u16 family
        42 |     __u8 prefixlen_d
        43 |     __u8 prefixlen_s
        44 |     __u8 proto
        48 |     int ifindex
        52 |     __kernel_uid32_t user
        56 |   struct xfrm_lifetime_cfg lft
        56 |     __u64 soft_byte_limit
        64 |     __u64 hard_byte_limit
        72 |     __u64 soft_packet_limit
        80 |     __u64 hard_packet_limit
        88 |     __u64 soft_add_expires_seconds
        96 |     __u64 hard_add_expires_seconds
       104 |     __u64 soft_use_expires_seconds
       112 |     __u64 hard_use_expires_seconds
       120 |   struct xfrm_lifetime_cur curlft
       120 |     __u64 bytes
       128 |     __u64 packets
       136 |     __u64 add_time
       144 |     __u64 use_time
       152 |   __u32 priority
       156 |   __u32 index
       160 |   __u8 dir
       161 |   __u8 action
       162 |   __u8 flags
       163 |   __u8 share
           | [sizeof=168, align=8]

*** Dumping AST Record Layout
         0 | struct xfrm_policy_hash
         0 |   struct hlist_head * table
         4 |   unsigned int hmask
         8 |   u8 dbits4
         9 |   u8 sbits4
        10 |   u8 dbits6
        11 |   u8 sbits6
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ref_tracker_dir
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   unsigned int quarantine_avail
        48 |   struct refcount_struct untracked
        48 |     atomic_t refs
        48 |       int counter
        52 |   struct refcount_struct no_tracker
        52 |     atomic_t refs
        52 |       int counter
        56 |   bool dead
        60 |   struct list_head list
        60 |     struct list_head * next
        64 |     struct list_head * prev
        68 |   struct list_head quarantine
        68 |     struct list_head * next
        72 |     struct list_head * prev
        76 |   char[32] name
           | [sizeof=108, align=4]

*** Dumping AST Record Layout
         0 | struct raw_notifier_head
         0 |   struct notifier_block * head
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct netns_core
         0 |   struct ctl_table_header * sysctl_hdr
         4 |   int sysctl_somaxconn
         8 |   int sysctl_optmem_max
        12 |   u8 sysctl_txrehash
        16 |   struct prot_inuse * prot_inuse
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct netns_mib
         0 |   typeof(struct ipstats_mib) * ip_statistics
         4 |   typeof(struct ipstats_mib) * ipv6_statistics
         8 |   typeof(struct tcp_mib) * tcp_statistics
        12 |   typeof(struct linux_mib) * net_statistics
        16 |   typeof(struct udp_mib) * udp_statistics
        20 |   typeof(struct udp_mib) * udp_stats_in6
        24 |   typeof(struct linux_tls_mib) * tls_statistics
        28 |   typeof(struct udp_mib) * udplite_statistics
        32 |   typeof(struct udp_mib) * udplite_stats_in6
        36 |   typeof(struct icmp_mib) * icmp_statistics
        40 |   typeof(struct icmpmsg_mib) * icmpmsg_statistics
        44 |   typeof(struct icmpv6_mib) * icmpv6_statistics
        48 |   typeof(struct icmpv6msg_mib) * icmpv6msg_statistics
        52 |   struct proc_dir_entry * proc_net_devsnmp6
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct netns_packet
         0 |   struct mutex sklist_lock
         0 |     atomic_t owner
         0 |       int counter
         4 |     struct raw_spinlock wait_lock
         4 |       arch_spinlock_t raw_lock
         4 |         volatile unsigned int slock
         8 |       unsigned int magic
        12 |       unsigned int owner_cpu
        16 |       void * owner
        20 |       struct lockdep_map dep_map
        20 |         struct lock_class_key * key
        24 |         struct lock_class *[2] class_cache
        32 |         const char * name
        36 |         u8 wait_type_outer
        37 |         u8 wait_type_inner
        38 |         u8 lock_type
        40 |         int cpu
        44 |         unsigned long ip
        48 |     struct list_head wait_list
        48 |       struct list_head * next
        52 |       struct list_head * prev
        56 |     void * magic
        60 |     struct lockdep_map dep_map
        60 |       struct lock_class_key * key
        64 |       struct lock_class *[2] class_cache
        72 |       const char * name
        76 |       u8 wait_type_outer
        77 |       u8 wait_type_inner
        78 |       u8 lock_type
        80 |       int cpu
        84 |       unsigned long ip
        88 |   struct hlist_head sklist
        88 |     struct hlist_node * first
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct netns_unix
         0 |   struct unix_table table
         0 |     spinlock_t * locks
         4 |     struct hlist_head * buckets
         8 |   int sysctl_max_dgram_qlen
        12 |   struct ctl_table_header * ctl
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct blocking_notifier_head
         0 |   struct rw_semaphore rwsem
         0 |     atomic_t count
         0 |       int counter
         4 |     atomic_t owner
         4 |       int counter
         8 |     struct raw_spinlock wait_lock
         8 |       arch_spinlock_t raw_lock
         8 |         volatile unsigned int slock
        12 |       unsigned int magic
        16 |       unsigned int owner_cpu
        20 |       void * owner
        24 |       struct lockdep_map dep_map
        24 |         struct lock_class_key * key
        28 |         struct lock_class *[2] class_cache
        36 |         const char * name
        40 |         u8 wait_type_outer
        41 |         u8 wait_type_inner
        42 |         u8 lock_type
        44 |         int cpu
        48 |         unsigned long ip
        52 |     struct list_head wait_list
        52 |       struct list_head * next
        56 |       struct list_head * prev
        60 |     void * magic
        64 |     struct lockdep_map dep_map
        64 |       struct lock_class_key * key
        68 |       struct lock_class *[2] class_cache
        76 |       const char * name
        80 |       u8 wait_type_outer
        81 |       u8 wait_type_inner
        82 |       u8 lock_type
        84 |       int cpu
        88 |       unsigned long ip
        92 |   struct notifier_block * head
           | [sizeof=96, align=4]

*** Dumping AST Record Layout
         0 | struct netns_nexthop
         0 |   struct rb_root rb_root
         0 |     struct rb_node * rb_node
         4 |   struct hlist_head * devhash
         8 |   unsigned int seq
        12 |   u32 last_id_allocated
        16 |   struct blocking_notifier_head notifier_chain
        16 |     struct rw_semaphore rwsem
        16 |       atomic_t count
        16 |         int counter
        20 |       atomic_t owner
        20 |         int counter
        24 |       struct raw_spinlock wait_lock
        24 |         arch_spinlock_t raw_lock
        24 |           volatile unsigned int slock
        28 |         unsigned int magic
        32 |         unsigned int owner_cpu
        36 |         void * owner
        40 |         struct lockdep_map dep_map
        40 |           struct lock_class_key * key
        44 |           struct lock_class *[2] class_cache
        52 |           const char * name
        56 |           u8 wait_type_outer
        57 |           u8 wait_type_inner
        58 |           u8 lock_type
        60 |           int cpu
        64 |           unsigned long ip
        68 |       struct list_head wait_list
        68 |         struct list_head * next
        72 |         struct list_head * prev
        76 |       void * magic
        80 |       struct lockdep_map dep_map
        80 |         struct lock_class_key * key
        84 |         struct lock_class *[2] class_cache
        92 |         const char * name
        96 |         u8 wait_type_outer
        97 |         u8 wait_type_inner
        98 |         u8 lock_type
       100 |         int cpu
       104 |         unsigned long ip
       108 |     struct notifier_block * head
           | [sizeof=112, align=4]

*** Dumping AST Record Layout
         0 | struct inet_timewait_death_row
         0 |   struct refcount_struct tw_refcount
         0 |     atomic_t refs
         0 |       int counter
         4 |   struct inet_hashinfo * hashinfo
         8 |   int sysctl_max_tw_buckets
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct local_ports
         0 |   u32 range
         4 |   bool warned
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ping_group_range
         0 |   seqlock_t lock
         0 |     struct seqcount_spinlock seqcount
         0 |       struct seqcount seqcount
         0 |         unsigned int sequence
         4 |         struct lockdep_map dep_map
         4 |           struct lock_class_key * key
         8 |           struct lock_class *[2] class_cache
        16 |           const char * name
        20 |           u8 wait_type_outer
        21 |           u8 wait_type_inner
        22 |           u8 lock_type
        24 |           int cpu
        28 |           unsigned long ip
        32 |       spinlock_t * lock
        36 |     struct spinlock lock
        36 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        36 |         struct raw_spinlock rlock
        36 |           arch_spinlock_t raw_lock
        36 |             volatile unsigned int slock
        40 |           unsigned int magic
        44 |           unsigned int owner_cpu
        48 |           void * owner
        52 |           struct lockdep_map dep_map
        52 |             struct lock_class_key * key
        56 |             struct lock_class *[2] class_cache
        64 |             const char * name
        68 |             u8 wait_type_outer
        69 |             u8 wait_type_inner
        70 |             u8 lock_type
        72 |             int cpu
        76 |             unsigned long ip
        36 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        36 |           u8[16] __padding
        52 |           struct lockdep_map dep_map
        52 |             struct lock_class_key * key
        56 |             struct lock_class *[2] class_cache
        64 |             const char * name
        68 |             u8 wait_type_outer
        69 |             u8 wait_type_inner
        70 |             u8 lock_type
        72 |             int cpu
        76 |             unsigned long ip
        80 |   kgid_t[2] range
           | [sizeof=88, align=4]

*** Dumping AST Record Layout
         0 | struct netns_ipv4
         0 |   __u8[0] __cacheline_group_begin__netns_ipv4_read_tx
         0 |   u8 sysctl_tcp_early_retrans
         1 |   u8 sysctl_tcp_tso_win_divisor
         2 |   u8 sysctl_tcp_tso_rtt_log
         3 |   u8 sysctl_tcp_autocorking
         4 |   int sysctl_tcp_min_snd_mss
         8 |   unsigned int sysctl_tcp_notsent_lowat
        12 |   int sysctl_tcp_limit_output_bytes
        16 |   int sysctl_tcp_min_rtt_wlen
        20 |   int[3] sysctl_tcp_wmem
        32 |   u8 sysctl_ip_fwd_use_pmtu
        33 |   __u8[0] __cacheline_group_end__netns_ipv4_read_tx
        33 |   __u8[0] __cacheline_group_begin__netns_ipv4_read_txrx
        33 |   u8 sysctl_tcp_moderate_rcvbuf
        34 |   __u8[0] __cacheline_group_end__netns_ipv4_read_txrx
        34 |   __u8[0] __cacheline_group_begin__netns_ipv4_read_rx
        34 |   u8 sysctl_ip_early_demux
        35 |   u8 sysctl_tcp_early_demux
        36 |   int sysctl_tcp_reordering
        40 |   int[3] sysctl_tcp_rmem
        52 |   __u8[0] __cacheline_group_end__netns_ipv4_read_rx
        52 |   struct inet_timewait_death_row tcp_death_row
        52 |     struct refcount_struct tw_refcount
        52 |       atomic_t refs
        52 |         int counter
        56 |     struct inet_hashinfo * hashinfo
        60 |     int sysctl_max_tw_buckets
        64 |   struct udp_table * udp_table
        68 |   struct ctl_table_header * forw_hdr
        72 |   struct ctl_table_header * frags_hdr
        76 |   struct ctl_table_header * ipv4_hdr
        80 |   struct ctl_table_header * route_hdr
        84 |   struct ctl_table_header * xfrm4_hdr
        88 |   struct ipv4_devconf * devconf_all
        92 |   struct ipv4_devconf * devconf_dflt
        96 |   struct ip_ra_chain * ra_chain
       100 |   struct mutex ra_mutex
       100 |     atomic_t owner
       100 |       int counter
       104 |     struct raw_spinlock wait_lock
       104 |       arch_spinlock_t raw_lock
       104 |         volatile unsigned int slock
       108 |       unsigned int magic
       112 |       unsigned int owner_cpu
       116 |       void * owner
       120 |       struct lockdep_map dep_map
       120 |         struct lock_class_key * key
       124 |         struct lock_class *[2] class_cache
       132 |         const char * name
       136 |         u8 wait_type_outer
       137 |         u8 wait_type_inner
       138 |         u8 lock_type
       140 |         int cpu
       144 |         unsigned long ip
       148 |     struct list_head wait_list
       148 |       struct list_head * next
       152 |       struct list_head * prev
       156 |     void * magic
       160 |     struct lockdep_map dep_map
       160 |       struct lock_class_key * key
       164 |       struct lock_class *[2] class_cache
       172 |       const char * name
       176 |       u8 wait_type_outer
       177 |       u8 wait_type_inner
       178 |       u8 lock_type
       180 |       int cpu
       184 |       unsigned long ip
       188 |   bool fib_has_custom_local_routes
       189 |   bool fib_offload_disabled
       190 |   u8 sysctl_tcp_shrink_window
       192 |   atomic_t fib_num_tclassid_users
       192 |     int counter
       196 |   struct hlist_head * fib_table_hash
       200 |   struct sock * fibnl
       204 |   struct sock * mc_autojoin_sk
       208 |   struct inet_peer_base * peers
       212 |   struct fqdir * fqdir
       216 |   u8 sysctl_icmp_echo_ignore_all
       217 |   u8 sysctl_icmp_echo_enable_probe
       218 |   u8 sysctl_icmp_echo_ignore_broadcasts
       219 |   u8 sysctl_icmp_ignore_bogus_error_responses
       220 |   u8 sysctl_icmp_errors_use_inbound_ifaddr
       224 |   int sysctl_icmp_ratelimit
       228 |   int sysctl_icmp_ratemask
       232 |   u32 ip_rt_min_pmtu
       236 |   int ip_rt_mtu_expires
       240 |   int ip_rt_min_advmss
       244 |   struct local_ports ip_local_ports
       244 |     u32 range
       248 |     bool warned
       252 |   u8 sysctl_tcp_ecn
       253 |   u8 sysctl_tcp_ecn_fallback
       254 |   u8 sysctl_ip_default_ttl
       255 |   u8 sysctl_ip_no_pmtu_disc
       256 |   u8 sysctl_ip_fwd_update_priority
       257 |   u8 sysctl_ip_nonlocal_bind
       258 |   u8 sysctl_ip_autobind_reuse
       259 |   u8 sysctl_ip_dynaddr
       260 |   u8 sysctl_udp_early_demux
       261 |   u8 sysctl_nexthop_compat_mode
       262 |   u8 sysctl_fwmark_reflect
       263 |   u8 sysctl_tcp_fwmark_accept
       264 |   u8 sysctl_tcp_mtu_probing
       268 |   int sysctl_tcp_mtu_probe_floor
       272 |   int sysctl_tcp_base_mss
       276 |   int sysctl_tcp_probe_threshold
       280 |   u32 sysctl_tcp_probe_interval
       284 |   int sysctl_tcp_keepalive_time
       288 |   int sysctl_tcp_keepalive_intvl
       292 |   u8 sysctl_tcp_keepalive_probes
       293 |   u8 sysctl_tcp_syn_retries
       294 |   u8 sysctl_tcp_synack_retries
       295 |   u8 sysctl_tcp_syncookies
       296 |   u8 sysctl_tcp_migrate_req
       297 |   u8 sysctl_tcp_comp_sack_nr
       298 |   u8 sysctl_tcp_backlog_ack_defer
       299 |   u8 sysctl_tcp_pingpong_thresh
       300 |   u8 sysctl_tcp_retries1
       301 |   u8 sysctl_tcp_retries2
       302 |   u8 sysctl_tcp_orphan_retries
       303 |   u8 sysctl_tcp_tw_reuse
       304 |   int sysctl_tcp_fin_timeout
       308 |   u8 sysctl_tcp_sack
       309 |   u8 sysctl_tcp_window_scaling
       310 |   u8 sysctl_tcp_timestamps
       312 |   int sysctl_tcp_rto_min_us
       316 |   u8 sysctl_tcp_recovery
       317 |   u8 sysctl_tcp_thin_linear_timeouts
       318 |   u8 sysctl_tcp_slow_start_after_idle
       319 |   u8 sysctl_tcp_retrans_collapse
       320 |   u8 sysctl_tcp_stdurg
       321 |   u8 sysctl_tcp_rfc1337
       322 |   u8 sysctl_tcp_abort_on_overflow
       323 |   u8 sysctl_tcp_fack
       324 |   int sysctl_tcp_max_reordering
       328 |   int sysctl_tcp_adv_win_scale
       332 |   u8 sysctl_tcp_dsack
       333 |   u8 sysctl_tcp_app_win
       334 |   u8 sysctl_tcp_frto
       335 |   u8 sysctl_tcp_nometrics_save
       336 |   u8 sysctl_tcp_no_ssthresh_metrics_save
       337 |   u8 sysctl_tcp_workaround_signed_windows
       340 |   int sysctl_tcp_challenge_ack_limit
       344 |   u8 sysctl_tcp_min_tso_segs
       345 |   u8 sysctl_tcp_reflect_tos
       348 |   int sysctl_tcp_invalid_ratelimit
       352 |   int sysctl_tcp_pacing_ss_ratio
       356 |   int sysctl_tcp_pacing_ca_ratio
       360 |   unsigned int sysctl_tcp_child_ehash_entries
       364 |   unsigned long sysctl_tcp_comp_sack_delay_ns
       368 |   unsigned long sysctl_tcp_comp_sack_slack_ns
       372 |   int sysctl_max_syn_backlog
       376 |   int sysctl_tcp_fastopen
       380 |   const struct tcp_congestion_ops * tcp_congestion_control
       384 |   struct tcp_fastopen_context * tcp_fastopen_ctx
       388 |   unsigned int sysctl_tcp_fastopen_blackhole_timeout
       392 |   atomic_t tfo_active_disable_times
       392 |     int counter
       396 |   unsigned long tfo_active_disable_stamp
       400 |   u32 tcp_challenge_timestamp
       404 |   u32 tcp_challenge_count
       408 |   u8 sysctl_tcp_plb_enabled
       409 |   u8 sysctl_tcp_plb_idle_rehash_rounds
       410 |   u8 sysctl_tcp_plb_rehash_rounds
       411 |   u8 sysctl_tcp_plb_suspend_rto_sec
       412 |   int sysctl_tcp_plb_cong_thresh
       416 |   int sysctl_udp_wmem_min
       420 |   int sysctl_udp_rmem_min
       424 |   u8 sysctl_fib_notify_on_flag_change
       425 |   u8 sysctl_tcp_syn_linear_timeouts
       426 |   u8 sysctl_igmp_llm_reports
       428 |   int sysctl_igmp_max_memberships
       432 |   int sysctl_igmp_max_msf
       436 |   int sysctl_igmp_qrv
       440 |   struct ping_group_range ping_group_range
       440 |     seqlock_t lock
       440 |       struct seqcount_spinlock seqcount
       440 |         struct seqcount seqcount
       440 |           unsigned int sequence
       444 |           struct lockdep_map dep_map
       444 |             struct lock_class_key * key
       448 |             struct lock_class *[2] class_cache
       456 |             const char * name
       460 |             u8 wait_type_outer
       461 |             u8 wait_type_inner
       462 |             u8 lock_type
       464 |             int cpu
       468 |             unsigned long ip
       472 |         spinlock_t * lock
       476 |       struct spinlock lock
       476 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       476 |           struct raw_spinlock rlock
       476 |             arch_spinlock_t raw_lock
       476 |               volatile unsigned int slock
       480 |             unsigned int magic
       484 |             unsigned int owner_cpu
       488 |             void * owner
       492 |             struct lockdep_map dep_map
       492 |               struct lock_class_key * key
       496 |               struct lock_class *[2] class_cache
       504 |               const char * name
       508 |               u8 wait_type_outer
       509 |               u8 wait_type_inner
       510 |               u8 lock_type
       512 |               int cpu
       516 |               unsigned long ip
       476 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       476 |             u8[16] __padding
       492 |             struct lockdep_map dep_map
       492 |               struct lock_class_key * key
       496 |               struct lock_class *[2] class_cache
       504 |               const char * name
       508 |               u8 wait_type_outer
       509 |               u8 wait_type_inner
       510 |               u8 lock_type
       512 |               int cpu
       516 |               unsigned long ip
       520 |     kgid_t[2] range
       528 |   atomic_t dev_addr_genid
       528 |     int counter
       532 |   unsigned int sysctl_udp_child_hash_entries
       536 |   unsigned long * sysctl_local_reserved_ports
       540 |   int sysctl_ip_prot_sock
       544 |   struct fib_notifier_ops * notifier_ops
       548 |   unsigned int fib_seq
       552 |   struct fib_notifier_ops * ipmr_notifier_ops
       556 |   unsigned int ipmr_seq
       560 |   atomic_t rt_genid
       560 |     int counter
       568 |   siphash_key_t ip_id_key
       568 |     u64[2] key
           | [sizeof=584, align=8]

*** Dumping AST Record Layout
         0 | struct netns_sysctl_ipv6
         0 |   struct ctl_table_header * hdr
         4 |   struct ctl_table_header * route_hdr
         8 |   struct ctl_table_header * icmp_hdr
        12 |   struct ctl_table_header * frags_hdr
        16 |   struct ctl_table_header * xfrm6_hdr
        20 |   int flush_delay
        24 |   int ip6_rt_max_size
        28 |   int ip6_rt_gc_min_interval
        32 |   int ip6_rt_gc_timeout
        36 |   int ip6_rt_gc_interval
        40 |   int ip6_rt_gc_elasticity
        44 |   int ip6_rt_mtu_expires
        48 |   int ip6_rt_min_advmss
        52 |   u32 multipath_hash_fields
        56 |   u8 multipath_hash_policy
        57 |   u8 bindv6only
        58 |   u8 flowlabel_consistency
        59 |   u8 auto_flowlabels
        60 |   int icmpv6_time
        64 |   u8 icmpv6_echo_ignore_all
        65 |   u8 icmpv6_echo_ignore_multicast
        66 |   u8 icmpv6_echo_ignore_anycast
        68 |   unsigned long[8] icmpv6_ratemask
       100 |   unsigned long * icmpv6_ratemask_ptr
       104 |   u8 anycast_src_echo_reply
       105 |   u8 ip_nonlocal_bind
       106 |   u8 fwmark_reflect
       107 |   u8 flowlabel_state_ranges
       108 |   int idgen_retries
       112 |   int idgen_delay
       116 |   int flowlabel_reflect
       120 |   int max_dst_opts_cnt
       124 |   int max_hbh_opts_cnt
       128 |   int max_dst_opts_len
       132 |   int max_hbh_opts_len
       136 |   int seg6_flowlabel
       140 |   u32 ioam6_id
       144 |   u64 ioam6_id_wide
       152 |   u8 skip_notify_on_dev_down
       153 |   u8 fib_notify_on_flag_change
       154 |   u8 icmpv6_error_anycast_as_unicast
           | [sizeof=160, align=8]

*** Dumping AST Record Layout
         0 | struct netns_ipv6::(unnamed at ../include/net/netns/ipv6.h:116:2)
         0 |   struct hlist_head head
         0 |     struct hlist_node * first
         4 |   struct spinlock lock
         4 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         4 |       struct raw_spinlock rlock
         4 |         arch_spinlock_t raw_lock
         4 |           volatile unsigned int slock
         8 |         unsigned int magic
        12 |         unsigned int owner_cpu
        16 |         void * owner
        20 |         struct lockdep_map dep_map
        20 |           struct lock_class_key * key
        24 |           struct lock_class *[2] class_cache
        32 |           const char * name
        36 |           u8 wait_type_outer
        37 |           u8 wait_type_inner
        38 |           u8 lock_type
        40 |           int cpu
        44 |           unsigned long ip
         4 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         4 |         u8[16] __padding
        20 |         struct lockdep_map dep_map
        20 |           struct lock_class_key * key
        24 |           struct lock_class *[2] class_cache
        32 |           const char * name
        36 |           u8 wait_type_outer
        37 |           u8 wait_type_inner
        38 |           u8 lock_type
        40 |           int cpu
        44 |           unsigned long ip
        48 |   u32 seq
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct netns_ipv6
         0 |   struct dst_ops ip6_dst_ops
         0 |     unsigned short family
         4 |     unsigned int gc_thresh
         8 |     void (*)(struct dst_ops *) gc
        12 |     struct dst_entry *(*)(struct dst_entry *, __u32) check
        16 |     unsigned int (*)(const struct dst_entry *) default_advmss
        20 |     unsigned int (*)(const struct dst_entry *) mtu
        24 |     u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
        28 |     void (*)(struct dst_entry *) destroy
        32 |     void (*)(struct dst_entry *, struct net_device *) ifdown
        36 |     void (*)(struct sock *, struct dst_entry *) negative_advice
        40 |     void (*)(struct sk_buff *) link_failure
        44 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
        48 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
        52 |     int (*)(struct net *, struct sock *, struct sk_buff *) local_out
        56 |     struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
        60 |     void (*)(const struct dst_entry *, const void *) confirm_neigh
        64 |     struct kmem_cache * kmem_cachep
        72 |     struct percpu_counter pcpuc_entries
        72 |       s64 count
        80 |   struct netns_sysctl_ipv6 sysctl
        80 |     struct ctl_table_header * hdr
        84 |     struct ctl_table_header * route_hdr
        88 |     struct ctl_table_header * icmp_hdr
        92 |     struct ctl_table_header * frags_hdr
        96 |     struct ctl_table_header * xfrm6_hdr
       100 |     int flush_delay
       104 |     int ip6_rt_max_size
       108 |     int ip6_rt_gc_min_interval
       112 |     int ip6_rt_gc_timeout
       116 |     int ip6_rt_gc_interval
       120 |     int ip6_rt_gc_elasticity
       124 |     int ip6_rt_mtu_expires
       128 |     int ip6_rt_min_advmss
       132 |     u32 multipath_hash_fields
       136 |     u8 multipath_hash_policy
       137 |     u8 bindv6only
       138 |     u8 flowlabel_consistency
       139 |     u8 auto_flowlabels
       140 |     int icmpv6_time
       144 |     u8 icmpv6_echo_ignore_all
       145 |     u8 icmpv6_echo_ignore_multicast
       146 |     u8 icmpv6_echo_ignore_anycast
       148 |     unsigned long[8] icmpv6_ratemask
       180 |     unsigned long * icmpv6_ratemask_ptr
       184 |     u8 anycast_src_echo_reply
       185 |     u8 ip_nonlocal_bind
       186 |     u8 fwmark_reflect
       187 |     u8 flowlabel_state_ranges
       188 |     int idgen_retries
       192 |     int idgen_delay
       196 |     int flowlabel_reflect
       200 |     int max_dst_opts_cnt
       204 |     int max_hbh_opts_cnt
       208 |     int max_dst_opts_len
       212 |     int max_hbh_opts_len
       216 |     int seg6_flowlabel
       220 |     u32 ioam6_id
       224 |     u64 ioam6_id_wide
       232 |     u8 skip_notify_on_dev_down
       233 |     u8 fib_notify_on_flag_change
       234 |     u8 icmpv6_error_anycast_as_unicast
       240 |   struct ipv6_devconf * devconf_all
       244 |   struct ipv6_devconf * devconf_dflt
       248 |   struct inet_peer_base * peers
       252 |   struct fqdir * fqdir
       256 |   struct fib6_info * fib6_null_entry
       260 |   struct rt6_info * ip6_null_entry
       264 |   struct rt6_statistics * rt6_stats
       268 |   struct timer_list ip6_fib_timer
       268 |     struct hlist_node entry
       268 |       struct hlist_node * next
       272 |       struct hlist_node ** pprev
       276 |     unsigned long expires
       280 |     void (*)(struct timer_list *) function
       284 |     u32 flags
       288 |     struct lockdep_map lockdep_map
       288 |       struct lock_class_key * key
       292 |       struct lock_class *[2] class_cache
       300 |       const char * name
       304 |       u8 wait_type_outer
       305 |       u8 wait_type_inner
       306 |       u8 lock_type
       308 |       int cpu
       312 |       unsigned long ip
       316 |   struct hlist_head * fib_table_hash
       320 |   struct fib6_table * fib6_main_tbl
       324 |   struct list_head fib6_walkers
       324 |     struct list_head * next
       328 |     struct list_head * prev
       332 |   rwlock_t fib6_walker_lock
       332 |     arch_rwlock_t raw_lock
       332 |     unsigned int magic
       336 |     unsigned int owner_cpu
       340 |     void * owner
       344 |     struct lockdep_map dep_map
       344 |       struct lock_class_key * key
       348 |       struct lock_class *[2] class_cache
       356 |       const char * name
       360 |       u8 wait_type_outer
       361 |       u8 wait_type_inner
       362 |       u8 lock_type
       364 |       int cpu
       368 |       unsigned long ip
       372 |   struct spinlock fib6_gc_lock
       372 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       372 |       struct raw_spinlock rlock
       372 |         arch_spinlock_t raw_lock
       372 |           volatile unsigned int slock
       376 |         unsigned int magic
       380 |         unsigned int owner_cpu
       384 |         void * owner
       388 |         struct lockdep_map dep_map
       388 |           struct lock_class_key * key
       392 |           struct lock_class *[2] class_cache
       400 |           const char * name
       404 |           u8 wait_type_outer
       405 |           u8 wait_type_inner
       406 |           u8 lock_type
       408 |           int cpu
       412 |           unsigned long ip
       372 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       372 |         u8[16] __padding
       388 |         struct lockdep_map dep_map
       388 |           struct lock_class_key * key
       392 |           struct lock_class *[2] class_cache
       400 |           const char * name
       404 |           u8 wait_type_outer
       405 |           u8 wait_type_inner
       406 |           u8 lock_type
       408 |           int cpu
       412 |           unsigned long ip
       416 |   atomic_t ip6_rt_gc_expire
       416 |     int counter
       420 |   unsigned long ip6_rt_last_gc
       424 |   unsigned char flowlabel_has_excl
       428 |   struct sock * ndisc_sk
       432 |   struct sock * tcp_sk
       436 |   struct sock * igmp_sk
       440 |   struct sock * mc_autojoin_sk
       444 |   struct hlist_head * inet6_addr_lst
       448 |   struct spinlock addrconf_hash_lock
       448 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       448 |       struct raw_spinlock rlock
       448 |         arch_spinlock_t raw_lock
       448 |           volatile unsigned int slock
       452 |         unsigned int magic
       456 |         unsigned int owner_cpu
       460 |         void * owner
       464 |         struct lockdep_map dep_map
       464 |           struct lock_class_key * key
       468 |           struct lock_class *[2] class_cache
       476 |           const char * name
       480 |           u8 wait_type_outer
       481 |           u8 wait_type_inner
       482 |           u8 lock_type
       484 |           int cpu
       488 |           unsigned long ip
       448 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       448 |         u8[16] __padding
       464 |         struct lockdep_map dep_map
       464 |           struct lock_class_key * key
       468 |           struct lock_class *[2] class_cache
       476 |           const char * name
       480 |           u8 wait_type_outer
       481 |           u8 wait_type_inner
       482 |           u8 lock_type
       484 |           int cpu
       488 |           unsigned long ip
       492 |   struct delayed_work addr_chk_work
       492 |     struct work_struct work
       492 |       atomic_t data
       492 |         int counter
       496 |       struct list_head entry
       496 |         struct list_head * next
       500 |         struct list_head * prev
       504 |       work_func_t func
       508 |       struct lockdep_map lockdep_map
       508 |         struct lock_class_key * key
       512 |         struct lock_class *[2] class_cache
       520 |         const char * name
       524 |         u8 wait_type_outer
       525 |         u8 wait_type_inner
       526 |         u8 lock_type
       528 |         int cpu
       532 |         unsigned long ip
       536 |     struct timer_list timer
       536 |       struct hlist_node entry
       536 |         struct hlist_node * next
       540 |         struct hlist_node ** pprev
       544 |       unsigned long expires
       548 |       void (*)(struct timer_list *) function
       552 |       u32 flags
       556 |       struct lockdep_map lockdep_map
       556 |         struct lock_class_key * key
       560 |         struct lock_class *[2] class_cache
       568 |         const char * name
       572 |         u8 wait_type_outer
       573 |         u8 wait_type_inner
       574 |         u8 lock_type
       576 |         int cpu
       580 |         unsigned long ip
       584 |     struct workqueue_struct * wq
       588 |     int cpu
       592 |   atomic_t dev_addr_genid
       592 |     int counter
       596 |   atomic_t fib6_sernum
       596 |     int counter
       600 |   struct seg6_pernet_data * seg6_data
       604 |   struct fib_notifier_ops * notifier_ops
       608 |   struct fib_notifier_ops * ip6mr_notifier_ops
       612 |   unsigned int ipmr_seq
       616 |   struct netns_ipv6::(unnamed at ../include/net/netns/ipv6.h:116:2) ip6addrlbl_table
       616 |     struct hlist_head head
       616 |       struct hlist_node * first
       620 |     struct spinlock lock
       620 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       620 |         struct raw_spinlock rlock
       620 |           arch_spinlock_t raw_lock
       620 |             volatile unsigned int slock
       624 |           unsigned int magic
       628 |           unsigned int owner_cpu
       632 |           void * owner
       636 |           struct lockdep_map dep_map
       636 |             struct lock_class_key * key
       640 |             struct lock_class *[2] class_cache
       648 |             const char * name
       652 |             u8 wait_type_outer
       653 |             u8 wait_type_inner
       654 |             u8 lock_type
       656 |             int cpu
       660 |             unsigned long ip
       620 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       620 |           u8[16] __padding
       636 |           struct lockdep_map dep_map
       636 |             struct lock_class_key * key
       640 |             struct lock_class *[2] class_cache
       648 |             const char * name
       652 |             u8 wait_type_outer
       653 |             u8 wait_type_inner
       654 |             u8 lock_type
       656 |             int cpu
       660 |             unsigned long ip
       664 |     u32 seq
       668 |   struct ioam6_pernet_data * ioam6_data
           | [sizeof=672, align=8]

*** Dumping AST Record Layout
         0 | struct netns_sctp
         0 |   typeof(struct sctp_mib) * sctp_statistics
         4 |   struct proc_dir_entry * proc_net_sctp
         8 |   struct ctl_table_header * sysctl_header
        12 |   struct sock * ctl_sock
        16 |   struct sock * udp4_sock
        20 |   struct sock * udp6_sock
        24 |   int udp_port
        28 |   int encap_port
        32 |   struct list_head local_addr_list
        32 |     struct list_head * next
        36 |     struct list_head * prev
        40 |   struct list_head addr_waitq
        40 |     struct list_head * next
        44 |     struct list_head * prev
        48 |   struct timer_list addr_wq_timer
        48 |     struct hlist_node entry
        48 |       struct hlist_node * next
        52 |       struct hlist_node ** pprev
        56 |     unsigned long expires
        60 |     void (*)(struct timer_list *) function
        64 |     u32 flags
        68 |     struct lockdep_map lockdep_map
        68 |       struct lock_class_key * key
        72 |       struct lock_class *[2] class_cache
        80 |       const char * name
        84 |       u8 wait_type_outer
        85 |       u8 wait_type_inner
        86 |       u8 lock_type
        88 |       int cpu
        92 |       unsigned long ip
        96 |   struct list_head auto_asconf_splist
        96 |     struct list_head * next
       100 |     struct list_head * prev
       104 |   struct spinlock addr_wq_lock
       104 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       104 |       struct raw_spinlock rlock
       104 |         arch_spinlock_t raw_lock
       104 |           volatile unsigned int slock
       108 |         unsigned int magic
       112 |         unsigned int owner_cpu
       116 |         void * owner
       120 |         struct lockdep_map dep_map
       120 |           struct lock_class_key * key
       124 |           struct lock_class *[2] class_cache
       132 |           const char * name
       136 |           u8 wait_type_outer
       137 |           u8 wait_type_inner
       138 |           u8 lock_type
       140 |           int cpu
       144 |           unsigned long ip
       104 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       104 |         u8[16] __padding
       120 |         struct lockdep_map dep_map
       120 |           struct lock_class_key * key
       124 |           struct lock_class *[2] class_cache
       132 |           const char * name
       136 |           u8 wait_type_outer
       137 |           u8 wait_type_inner
       138 |           u8 lock_type
       140 |           int cpu
       144 |           unsigned long ip
       148 |   struct spinlock local_addr_lock
       148 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       148 |       struct raw_spinlock rlock
       148 |         arch_spinlock_t raw_lock
       148 |           volatile unsigned int slock
       152 |         unsigned int magic
       156 |         unsigned int owner_cpu
       160 |         void * owner
       164 |         struct lockdep_map dep_map
       164 |           struct lock_class_key * key
       168 |           struct lock_class *[2] class_cache
       176 |           const char * name
       180 |           u8 wait_type_outer
       181 |           u8 wait_type_inner
       182 |           u8 lock_type
       184 |           int cpu
       188 |           unsigned long ip
       148 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       148 |         u8[16] __padding
       164 |         struct lockdep_map dep_map
       164 |           struct lock_class_key * key
       168 |           struct lock_class *[2] class_cache
       176 |           const char * name
       180 |           u8 wait_type_outer
       181 |           u8 wait_type_inner
       182 |           u8 lock_type
       184 |           int cpu
       188 |           unsigned long ip
       192 |   unsigned int rto_initial
       196 |   unsigned int rto_min
       200 |   unsigned int rto_max
       204 |   int rto_alpha
       208 |   int rto_beta
       212 |   int max_burst
       216 |   int cookie_preserve_enable
       220 |   char * sctp_hmac_alg
       224 |   unsigned int valid_cookie_life
       228 |   unsigned int sack_timeout
       232 |   unsigned int hb_interval
       236 |   unsigned int probe_interval
       240 |   int max_retrans_association
       244 |   int max_retrans_path
       248 |   int max_retrans_init
       252 |   int pf_retrans
       256 |   int ps_retrans
       260 |   int pf_enable
       264 |   int pf_expose
       268 |   int sndbuf_policy
       272 |   int rcvbuf_policy
       276 |   int default_auto_asconf
       280 |   int addip_enable
       284 |   int addip_noauth
       288 |   int prsctp_enable
       292 |   int reconf_enable
       296 |   int auth_enable
       300 |   int intl_enable
       304 |   int ecn_enable
       308 |   int scope_policy
       312 |   int rwnd_upd_shift
       316 |   unsigned long max_autoclose
           | [sizeof=320, align=4]

*** Dumping AST Record Layout
         0 | struct netns_bpf
         0 |   struct bpf_prog_array *[2] run_array
         8 |   struct bpf_prog *[2] progs
        16 |   struct list_head[2] links
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct xfrm_policy_hthresh
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
        44 |   seqlock_t lock
        44 |     struct seqcount_spinlock seqcount
        44 |       struct seqcount seqcount
        44 |         unsigned int sequence
        48 |         struct lockdep_map dep_map
        48 |           struct lock_class_key * key
        52 |           struct lock_class *[2] class_cache
        60 |           const char * name
        64 |           u8 wait_type_outer
        65 |           u8 wait_type_inner
        66 |           u8 lock_type
        68 |           int cpu
        72 |           unsigned long ip
        76 |       spinlock_t * lock
        80 |     struct spinlock lock
        80 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        80 |         struct raw_spinlock rlock
        80 |           arch_spinlock_t raw_lock
        80 |             volatile unsigned int slock
        84 |           unsigned int magic
        88 |           unsigned int owner_cpu
        92 |           void * owner
        96 |           struct lockdep_map dep_map
        96 |             struct lock_class_key * key
       100 |             struct lock_class *[2] class_cache
       108 |             const char * name
       112 |             u8 wait_type_outer
       113 |             u8 wait_type_inner
       114 |             u8 lock_type
       116 |             int cpu
       120 |             unsigned long ip
        80 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        80 |           u8[16] __padding
        96 |           struct lockdep_map dep_map
        96 |             struct lock_class_key * key
       100 |             struct lock_class *[2] class_cache
       108 |             const char * name
       112 |             u8 wait_type_outer
       113 |             u8 wait_type_inner
       114 |             u8 lock_type
       116 |             int cpu
       120 |             unsigned long ip
       124 |   u8 lbits4
       125 |   u8 rbits4
       126 |   u8 lbits6
       127 |   u8 rbits6
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct netns_xfrm
         0 |   struct list_head state_all
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct hlist_head * state_bydst
        12 |   struct hlist_head * state_bysrc
        16 |   struct hlist_head * state_byspi
        20 |   struct hlist_head * state_byseq
        24 |   unsigned int state_hmask
        28 |   unsigned int state_num
        32 |   struct work_struct state_hash_work
        32 |     atomic_t data
        32 |       int counter
        36 |     struct list_head entry
        36 |       struct list_head * next
        40 |       struct list_head * prev
        44 |     work_func_t func
        48 |     struct lockdep_map lockdep_map
        48 |       struct lock_class_key * key
        52 |       struct lock_class *[2] class_cache
        60 |       const char * name
        64 |       u8 wait_type_outer
        65 |       u8 wait_type_inner
        66 |       u8 lock_type
        68 |       int cpu
        72 |       unsigned long ip
        76 |   struct list_head policy_all
        76 |     struct list_head * next
        80 |     struct list_head * prev
        84 |   struct hlist_head * policy_byidx
        88 |   unsigned int policy_idx_hmask
        92 |   unsigned int idx_generator
        96 |   struct hlist_head[3] policy_inexact
       108 |   struct xfrm_policy_hash[3] policy_bydst
       144 |   unsigned int[6] policy_count
       168 |   struct work_struct policy_hash_work
       168 |     atomic_t data
       168 |       int counter
       172 |     struct list_head entry
       172 |       struct list_head * next
       176 |       struct list_head * prev
       180 |     work_func_t func
       184 |     struct lockdep_map lockdep_map
       184 |       struct lock_class_key * key
       188 |       struct lock_class *[2] class_cache
       196 |       const char * name
       200 |       u8 wait_type_outer
       201 |       u8 wait_type_inner
       202 |       u8 lock_type
       204 |       int cpu
       208 |       unsigned long ip
       212 |   struct xfrm_policy_hthresh policy_hthresh
       212 |     struct work_struct work
       212 |       atomic_t data
       212 |         int counter
       216 |       struct list_head entry
       216 |         struct list_head * next
       220 |         struct list_head * prev
       224 |       work_func_t func
       228 |       struct lockdep_map lockdep_map
       228 |         struct lock_class_key * key
       232 |         struct lock_class *[2] class_cache
       240 |         const char * name
       244 |         u8 wait_type_outer
       245 |         u8 wait_type_inner
       246 |         u8 lock_type
       248 |         int cpu
       252 |         unsigned long ip
       256 |     seqlock_t lock
       256 |       struct seqcount_spinlock seqcount
       256 |         struct seqcount seqcount
       256 |           unsigned int sequence
       260 |           struct lockdep_map dep_map
       260 |             struct lock_class_key * key
       264 |             struct lock_class *[2] class_cache
       272 |             const char * name
       276 |             u8 wait_type_outer
       277 |             u8 wait_type_inner
       278 |             u8 lock_type
       280 |             int cpu
       284 |             unsigned long ip
       288 |         spinlock_t * lock
       292 |       struct spinlock lock
       292 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       292 |           struct raw_spinlock rlock
       292 |             arch_spinlock_t raw_lock
       292 |               volatile unsigned int slock
       296 |             unsigned int magic
       300 |             unsigned int owner_cpu
       304 |             void * owner
       308 |             struct lockdep_map dep_map
       308 |               struct lock_class_key * key
       312 |               struct lock_class *[2] class_cache
       320 |               const char * name
       324 |               u8 wait_type_outer
       325 |               u8 wait_type_inner
       326 |               u8 lock_type
       328 |               int cpu
       332 |               unsigned long ip
       292 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       292 |             u8[16] __padding
       308 |             struct lockdep_map dep_map
       308 |               struct lock_class_key * key
       312 |               struct lock_class *[2] class_cache
       320 |               const char * name
       324 |               u8 wait_type_outer
       325 |               u8 wait_type_inner
       326 |               u8 lock_type
       328 |               int cpu
       332 |               unsigned long ip
       336 |     u8 lbits4
       337 |     u8 rbits4
       338 |     u8 lbits6
       339 |     u8 rbits6
       340 |   struct list_head inexact_bins
       340 |     struct list_head * next
       344 |     struct list_head * prev
       348 |   struct sock * nlsk
       352 |   struct sock * nlsk_stash
       356 |   u32 sysctl_aevent_etime
       360 |   u32 sysctl_aevent_rseqth
       364 |   int sysctl_larval_drop
       368 |   u32 sysctl_acq_expires
       372 |   u8[3] policy_default
       376 |   struct ctl_table_header * sysctl_hdr
       384 |   struct dst_ops xfrm4_dst_ops
       384 |     unsigned short family
       388 |     unsigned int gc_thresh
       392 |     void (*)(struct dst_ops *) gc
       396 |     struct dst_entry *(*)(struct dst_entry *, __u32) check
       400 |     unsigned int (*)(const struct dst_entry *) default_advmss
       404 |     unsigned int (*)(const struct dst_entry *) mtu
       408 |     u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
       412 |     void (*)(struct dst_entry *) destroy
       416 |     void (*)(struct dst_entry *, struct net_device *) ifdown
       420 |     void (*)(struct sock *, struct dst_entry *) negative_advice
       424 |     void (*)(struct sk_buff *) link_failure
       428 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
       432 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
       436 |     int (*)(struct net *, struct sock *, struct sk_buff *) local_out
       440 |     struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
       444 |     void (*)(const struct dst_entry *, const void *) confirm_neigh
       448 |     struct kmem_cache * kmem_cachep
       456 |     struct percpu_counter pcpuc_entries
       456 |       s64 count
       464 |   struct dst_ops xfrm6_dst_ops
       464 |     unsigned short family
       468 |     unsigned int gc_thresh
       472 |     void (*)(struct dst_ops *) gc
       476 |     struct dst_entry *(*)(struct dst_entry *, __u32) check
       480 |     unsigned int (*)(const struct dst_entry *) default_advmss
       484 |     unsigned int (*)(const struct dst_entry *) mtu
       488 |     u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
       492 |     void (*)(struct dst_entry *) destroy
       496 |     void (*)(struct dst_entry *, struct net_device *) ifdown
       500 |     void (*)(struct sock *, struct dst_entry *) negative_advice
       504 |     void (*)(struct sk_buff *) link_failure
       508 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
       512 |     void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
       516 |     int (*)(struct net *, struct sock *, struct sk_buff *) local_out
       520 |     struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
       524 |     void (*)(const struct dst_entry *, const void *) confirm_neigh
       528 |     struct kmem_cache * kmem_cachep
       536 |     struct percpu_counter pcpuc_entries
       536 |       s64 count
       544 |   struct spinlock xfrm_state_lock
       544 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       544 |       struct raw_spinlock rlock
       544 |         arch_spinlock_t raw_lock
       544 |           volatile unsigned int slock
       548 |         unsigned int magic
       552 |         unsigned int owner_cpu
       556 |         void * owner
       560 |         struct lockdep_map dep_map
       560 |           struct lock_class_key * key
       564 |           struct lock_class *[2] class_cache
       572 |           const char * name
       576 |           u8 wait_type_outer
       577 |           u8 wait_type_inner
       578 |           u8 lock_type
       580 |           int cpu
       584 |           unsigned long ip
       544 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       544 |         u8[16] __padding
       560 |         struct lockdep_map dep_map
       560 |           struct lock_class_key * key
       564 |           struct lock_class *[2] class_cache
       572 |           const char * name
       576 |           u8 wait_type_outer
       577 |           u8 wait_type_inner
       578 |           u8 lock_type
       580 |           int cpu
       584 |           unsigned long ip
       588 |   struct seqcount_spinlock xfrm_state_hash_generation
       588 |     struct seqcount seqcount
       588 |       unsigned int sequence
       592 |       struct lockdep_map dep_map
       592 |         struct lock_class_key * key
       596 |         struct lock_class *[2] class_cache
       604 |         const char * name
       608 |         u8 wait_type_outer
       609 |         u8 wait_type_inner
       610 |         u8 lock_type
       612 |         int cpu
       616 |         unsigned long ip
       620 |     spinlock_t * lock
       624 |   struct seqcount_spinlock xfrm_policy_hash_generation
       624 |     struct seqcount seqcount
       624 |       unsigned int sequence
       628 |       struct lockdep_map dep_map
       628 |         struct lock_class_key * key
       632 |         struct lock_class *[2] class_cache
       640 |         const char * name
       644 |         u8 wait_type_outer
       645 |         u8 wait_type_inner
       646 |         u8 lock_type
       648 |         int cpu
       652 |         unsigned long ip
       656 |     spinlock_t * lock
       660 |   struct spinlock xfrm_policy_lock
       660 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       660 |       struct raw_spinlock rlock
       660 |         arch_spinlock_t raw_lock
       660 |           volatile unsigned int slock
       664 |         unsigned int magic
       668 |         unsigned int owner_cpu
       672 |         void * owner
       676 |         struct lockdep_map dep_map
       676 |           struct lock_class_key * key
       680 |           struct lock_class *[2] class_cache
       688 |           const char * name
       692 |           u8 wait_type_outer
       693 |           u8 wait_type_inner
       694 |           u8 lock_type
       696 |           int cpu
       700 |           unsigned long ip
       660 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       660 |         u8[16] __padding
       676 |         struct lockdep_map dep_map
       676 |           struct lock_class_key * key
       680 |           struct lock_class *[2] class_cache
       688 |           const char * name
       692 |           u8 wait_type_outer
       693 |           u8 wait_type_inner
       694 |           u8 lock_type
       696 |           int cpu
       700 |           unsigned long ip
       704 |   struct mutex xfrm_cfg_mutex
       704 |     atomic_t owner
       704 |       int counter
       708 |     struct raw_spinlock wait_lock
       708 |       arch_spinlock_t raw_lock
       708 |         volatile unsigned int slock
       712 |       unsigned int magic
       716 |       unsigned int owner_cpu
       720 |       void * owner
       724 |       struct lockdep_map dep_map
       724 |         struct lock_class_key * key
       728 |         struct lock_class *[2] class_cache
       736 |         const char * name
       740 |         u8 wait_type_outer
       741 |         u8 wait_type_inner
       742 |         u8 lock_type
       744 |         int cpu
       748 |         unsigned long ip
       752 |     struct list_head wait_list
       752 |       struct list_head * next
       756 |       struct list_head * prev
       760 |     void * magic
       764 |     struct lockdep_map dep_map
       764 |       struct lock_class_key * key
       768 |       struct lock_class *[2] class_cache
       776 |       const char * name
       780 |       u8 wait_type_outer
       781 |       u8 wait_type_inner
       782 |       u8 lock_type
       784 |       int cpu
       788 |       unsigned long ip
       792 |   struct delayed_work nat_keepalive_work
       792 |     struct work_struct work
       792 |       atomic_t data
       792 |         int counter
       796 |       struct list_head entry
       796 |         struct list_head * next
       800 |         struct list_head * prev
       804 |       work_func_t func
       808 |       struct lockdep_map lockdep_map
       808 |         struct lock_class_key * key
       812 |         struct lock_class *[2] class_cache
       820 |         const char * name
       824 |         u8 wait_type_outer
       825 |         u8 wait_type_inner
       826 |         u8 lock_type
       828 |         int cpu
       832 |         unsigned long ip
       836 |     struct timer_list timer
       836 |       struct hlist_node entry
       836 |         struct hlist_node * next
       840 |         struct hlist_node ** pprev
       844 |       unsigned long expires
       848 |       void (*)(struct timer_list *) function
       852 |       u32 flags
       856 |       struct lockdep_map lockdep_map
       856 |         struct lock_class_key * key
       860 |         struct lock_class *[2] class_cache
       868 |         const char * name
       872 |         u8 wait_type_outer
       873 |         u8 wait_type_inner
       874 |         u8 lock_type
       876 |         int cpu
       880 |         unsigned long ip
       884 |     struct workqueue_struct * wq
       888 |     int cpu
           | [sizeof=896, align=8]

*** Dumping AST Record Layout
         0 | struct netns_mpls
         0 |   int ip_ttl_propagate
         4 |   int default_ttl
         8 |   size_t platform_labels
        12 |   struct mpls_route ** platform_label
        16 |   struct ctl_table_header * ctl
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct netns_can
         0 |   struct proc_dir_entry * proc_dir
         4 |   struct proc_dir_entry * pde_stats
         8 |   struct proc_dir_entry * pde_reset_stats
        12 |   struct proc_dir_entry * pde_rcvlist_all
        16 |   struct proc_dir_entry * pde_rcvlist_fil
        20 |   struct proc_dir_entry * pde_rcvlist_inv
        24 |   struct proc_dir_entry * pde_rcvlist_sff
        28 |   struct proc_dir_entry * pde_rcvlist_eff
        32 |   struct proc_dir_entry * pde_rcvlist_err
        36 |   struct proc_dir_entry * bcmproc_dir
        40 |   struct can_dev_rcv_lists * rx_alldev_list
        44 |   struct spinlock rcvlists_lock
        44 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        44 |       struct raw_spinlock rlock
        44 |         arch_spinlock_t raw_lock
        44 |           volatile unsigned int slock
        48 |         unsigned int magic
        52 |         unsigned int owner_cpu
        56 |         void * owner
        60 |         struct lockdep_map dep_map
        60 |           struct lock_class_key * key
        64 |           struct lock_class *[2] class_cache
        72 |           const char * name
        76 |           u8 wait_type_outer
        77 |           u8 wait_type_inner
        78 |           u8 lock_type
        80 |           int cpu
        84 |           unsigned long ip
        44 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        44 |         u8[16] __padding
        60 |         struct lockdep_map dep_map
        60 |           struct lock_class_key * key
        64 |           struct lock_class *[2] class_cache
        72 |           const char * name
        76 |           u8 wait_type_outer
        77 |           u8 wait_type_inner
        78 |           u8 lock_type
        80 |           int cpu
        84 |           unsigned long ip
        88 |   struct timer_list stattimer
        88 |     struct hlist_node entry
        88 |       struct hlist_node * next
        92 |       struct hlist_node ** pprev
        96 |     unsigned long expires
       100 |     void (*)(struct timer_list *) function
       104 |     u32 flags
       108 |     struct lockdep_map lockdep_map
       108 |       struct lock_class_key * key
       112 |       struct lock_class *[2] class_cache
       120 |       const char * name
       124 |       u8 wait_type_outer
       125 |       u8 wait_type_inner
       126 |       u8 lock_type
       128 |       int cpu
       132 |       unsigned long ip
       136 |   struct can_pkg_stats * pkg_stats
       140 |   struct can_rcv_lists_stats * rcv_lists_stats
       144 |   struct hlist_head cgw_list
       144 |     struct hlist_node * first
           | [sizeof=148, align=4]

*** Dumping AST Record Layout
         0 | struct netns_xdp
         0 |   struct mutex lock
         0 |     atomic_t owner
         0 |       int counter
         4 |     struct raw_spinlock wait_lock
         4 |       arch_spinlock_t raw_lock
         4 |         volatile unsigned int slock
         8 |       unsigned int magic
        12 |       unsigned int owner_cpu
        16 |       void * owner
        20 |       struct lockdep_map dep_map
        20 |         struct lock_class_key * key
        24 |         struct lock_class *[2] class_cache
        32 |         const char * name
        36 |         u8 wait_type_outer
        37 |         u8 wait_type_inner
        38 |         u8 lock_type
        40 |         int cpu
        44 |         unsigned long ip
        48 |     struct list_head wait_list
        48 |       struct list_head * next
        52 |       struct list_head * prev
        56 |     void * magic
        60 |     struct lockdep_map dep_map
        60 |       struct lock_class_key * key
        64 |       struct lock_class *[2] class_cache
        72 |       const char * name
        76 |       u8 wait_type_outer
        77 |       u8 wait_type_inner
        78 |       u8 lock_type
        80 |       int cpu
        84 |       unsigned long ip
        88 |   struct hlist_head list
        88 |     struct hlist_node * first
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct netns_mctp
         0 |   struct list_head routes
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct mutex bind_lock
         8 |     atomic_t owner
         8 |       int counter
        12 |     struct raw_spinlock wait_lock
        12 |       arch_spinlock_t raw_lock
        12 |         volatile unsigned int slock
        16 |       unsigned int magic
        20 |       unsigned int owner_cpu
        24 |       void * owner
        28 |       struct lockdep_map dep_map
        28 |         struct lock_class_key * key
        32 |         struct lock_class *[2] class_cache
        40 |         const char * name
        44 |         u8 wait_type_outer
        45 |         u8 wait_type_inner
        46 |         u8 lock_type
        48 |         int cpu
        52 |         unsigned long ip
        56 |     struct list_head wait_list
        56 |       struct list_head * next
        60 |       struct list_head * prev
        64 |     void * magic
        68 |     struct lockdep_map dep_map
        68 |       struct lock_class_key * key
        72 |       struct lock_class *[2] class_cache
        80 |       const char * name
        84 |       u8 wait_type_outer
        85 |       u8 wait_type_inner
        86 |       u8 lock_type
        88 |       int cpu
        92 |       unsigned long ip
        96 |   struct hlist_head binds
        96 |     struct hlist_node * first
       100 |   struct spinlock keys_lock
       100 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       100 |       struct raw_spinlock rlock
       100 |         arch_spinlock_t raw_lock
       100 |           volatile unsigned int slock
       104 |         unsigned int magic
       108 |         unsigned int owner_cpu
       112 |         void * owner
       116 |         struct lockdep_map dep_map
       116 |           struct lock_class_key * key
       120 |           struct lock_class *[2] class_cache
       128 |           const char * name
       132 |           u8 wait_type_outer
       133 |           u8 wait_type_inner
       134 |           u8 lock_type
       136 |           int cpu
       140 |           unsigned long ip
       100 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       100 |         u8[16] __padding
       116 |         struct lockdep_map dep_map
       116 |           struct lock_class_key * key
       120 |           struct lock_class *[2] class_cache
       128 |           const char * name
       132 |           u8 wait_type_outer
       133 |           u8 wait_type_inner
       134 |           u8 lock_type
       136 |           int cpu
       140 |           unsigned long ip
       144 |   struct hlist_head keys
       144 |     struct hlist_node * first
       148 |   unsigned int default_net
       152 |   struct mutex neigh_lock
       152 |     atomic_t owner
       152 |       int counter
       156 |     struct raw_spinlock wait_lock
       156 |       arch_spinlock_t raw_lock
       156 |         volatile unsigned int slock
       160 |       unsigned int magic
       164 |       unsigned int owner_cpu
       168 |       void * owner
       172 |       struct lockdep_map dep_map
       172 |         struct lock_class_key * key
       176 |         struct lock_class *[2] class_cache
       184 |         const char * name
       188 |         u8 wait_type_outer
       189 |         u8 wait_type_inner
       190 |         u8 lock_type
       192 |         int cpu
       196 |         unsigned long ip
       200 |     struct list_head wait_list
       200 |       struct list_head * next
       204 |       struct list_head * prev
       208 |     void * magic
       212 |     struct lockdep_map dep_map
       212 |       struct lock_class_key * key
       216 |       struct lock_class *[2] class_cache
       224 |       const char * name
       228 |       u8 wait_type_outer
       229 |       u8 wait_type_inner
       230 |       u8 lock_type
       232 |       int cpu
       236 |       unsigned long ip
       240 |   struct list_head neighbours
       240 |     struct list_head * next
       244 |     struct list_head * prev
           | [sizeof=248, align=4]

*** Dumping AST Record Layout
         0 | struct netns_smc
         0 |   struct smc_stats * smc_stats
         4 |   struct mutex mutex_fback_rsn
         4 |     atomic_t owner
         4 |       int counter
         8 |     struct raw_spinlock wait_lock
         8 |       arch_spinlock_t raw_lock
         8 |         volatile unsigned int slock
        12 |       unsigned int magic
        16 |       unsigned int owner_cpu
        20 |       void * owner
        24 |       struct lockdep_map dep_map
        24 |         struct lock_class_key * key
        28 |         struct lock_class *[2] class_cache
        36 |         const char * name
        40 |         u8 wait_type_outer
        41 |         u8 wait_type_inner
        42 |         u8 lock_type
        44 |         int cpu
        48 |         unsigned long ip
        52 |     struct list_head wait_list
        52 |       struct list_head * next
        56 |       struct list_head * prev
        60 |     void * magic
        64 |     struct lockdep_map dep_map
        64 |       struct lock_class_key * key
        68 |       struct lock_class *[2] class_cache
        76 |       const char * name
        80 |       u8 wait_type_outer
        81 |       u8 wait_type_inner
        82 |       u8 lock_type
        84 |       int cpu
        88 |       unsigned long ip
        92 |   struct smc_stats_rsn * fback_rsn
        96 |   bool limit_smc_hs
       100 |   struct ctl_table_header * smc_hdr
       104 |   unsigned int sysctl_autocorking_size
       108 |   unsigned int sysctl_smcr_buf_type
       112 |   int sysctl_smcr_testlink_time
       116 |   int sysctl_wmem
       120 |   int sysctl_rmem
       124 |   int sysctl_max_links_per_lgr
       128 |   int sysctl_max_conns_per_lgr
           | [sizeof=132, align=4]

*** Dumping AST Record Layout
         0 | struct net
         0 |   struct refcount_struct passive
         0 |     atomic_t refs
         0 |       int counter
         4 |   struct spinlock rules_mod_lock
         4 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         4 |       struct raw_spinlock rlock
         4 |         arch_spinlock_t raw_lock
         4 |           volatile unsigned int slock
         8 |         unsigned int magic
        12 |         unsigned int owner_cpu
        16 |         void * owner
        20 |         struct lockdep_map dep_map
        20 |           struct lock_class_key * key
        24 |           struct lock_class *[2] class_cache
        32 |           const char * name
        36 |           u8 wait_type_outer
        37 |           u8 wait_type_inner
        38 |           u8 lock_type
        40 |           int cpu
        44 |           unsigned long ip
         4 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         4 |         u8[16] __padding
        20 |         struct lockdep_map dep_map
        20 |           struct lock_class_key * key
        24 |           struct lock_class *[2] class_cache
        32 |           const char * name
        36 |           u8 wait_type_outer
        37 |           u8 wait_type_inner
        38 |           u8 lock_type
        40 |           int cpu
        44 |           unsigned long ip
        48 |   unsigned int dev_base_seq
        52 |   u32 ifindex
        56 |   struct spinlock nsid_lock
        56 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        56 |       struct raw_spinlock rlock
        56 |         arch_spinlock_t raw_lock
        56 |           volatile unsigned int slock
        60 |         unsigned int magic
        64 |         unsigned int owner_cpu
        68 |         void * owner
        72 |         struct lockdep_map dep_map
        72 |           struct lock_class_key * key
        76 |           struct lock_class *[2] class_cache
        84 |           const char * name
        88 |           u8 wait_type_outer
        89 |           u8 wait_type_inner
        90 |           u8 lock_type
        92 |           int cpu
        96 |           unsigned long ip
        56 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        56 |         u8[16] __padding
        72 |         struct lockdep_map dep_map
        72 |           struct lock_class_key * key
        76 |           struct lock_class *[2] class_cache
        84 |           const char * name
        88 |           u8 wait_type_outer
        89 |           u8 wait_type_inner
        90 |           u8 lock_type
        92 |           int cpu
        96 |           unsigned long ip
       100 |   atomic_t fnhe_genid
       100 |     int counter
       104 |   struct list_head list
       104 |     struct list_head * next
       108 |     struct list_head * prev
       112 |   struct list_head exit_list
       112 |     struct list_head * next
       116 |     struct list_head * prev
       120 |   struct llist_node cleanup_list
       120 |     struct llist_node * next
       124 |   struct key_tag * key_domain
       128 |   struct user_namespace * user_ns
       132 |   struct ucounts * ucounts
       136 |   struct idr netns_ids
       136 |     struct xarray idr_rt
       136 |       struct spinlock xa_lock
       136 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       136 |           struct raw_spinlock rlock
       136 |             arch_spinlock_t raw_lock
       136 |               volatile unsigned int slock
       140 |             unsigned int magic
       144 |             unsigned int owner_cpu
       148 |             void * owner
       152 |             struct lockdep_map dep_map
       152 |               struct lock_class_key * key
       156 |               struct lock_class *[2] class_cache
       164 |               const char * name
       168 |               u8 wait_type_outer
       169 |               u8 wait_type_inner
       170 |               u8 lock_type
       172 |               int cpu
       176 |               unsigned long ip
       136 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       136 |             u8[16] __padding
       152 |             struct lockdep_map dep_map
       152 |               struct lock_class_key * key
       156 |               struct lock_class *[2] class_cache
       164 |               const char * name
       168 |               u8 wait_type_outer
       169 |               u8 wait_type_inner
       170 |               u8 lock_type
       172 |               int cpu
       176 |               unsigned long ip
       180 |       gfp_t xa_flags
       184 |       void * xa_head
       188 |     unsigned int idr_base
       192 |     unsigned int idr_next
       196 |   struct ns_common ns
       196 |     struct dentry * stashed
       200 |     const struct proc_ns_operations * ops
       204 |     unsigned int inum
       208 |     struct refcount_struct count
       208 |       atomic_t refs
       208 |         int counter
       212 |   struct ref_tracker_dir refcnt_tracker
       212 |     struct spinlock lock
       212 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       212 |         struct raw_spinlock rlock
       212 |           arch_spinlock_t raw_lock
       212 |             volatile unsigned int slock
       216 |           unsigned int magic
       220 |           unsigned int owner_cpu
       224 |           void * owner
       228 |           struct lockdep_map dep_map
       228 |             struct lock_class_key * key
       232 |             struct lock_class *[2] class_cache
       240 |             const char * name
       244 |             u8 wait_type_outer
       245 |             u8 wait_type_inner
       246 |             u8 lock_type
       248 |             int cpu
       252 |             unsigned long ip
       212 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       212 |           u8[16] __padding
       228 |           struct lockdep_map dep_map
       228 |             struct lock_class_key * key
       232 |             struct lock_class *[2] class_cache
       240 |             const char * name
       244 |             u8 wait_type_outer
       245 |             u8 wait_type_inner
       246 |             u8 lock_type
       248 |             int cpu
       252 |             unsigned long ip
       256 |     unsigned int quarantine_avail
       260 |     struct refcount_struct untracked
       260 |       atomic_t refs
       260 |         int counter
       264 |     struct refcount_struct no_tracker
       264 |       atomic_t refs
       264 |         int counter
       268 |     bool dead
       272 |     struct list_head list
       272 |       struct list_head * next
       276 |       struct list_head * prev
       280 |     struct list_head quarantine
       280 |       struct list_head * next
       284 |       struct list_head * prev
       288 |     char[32] name
       320 |   struct ref_tracker_dir notrefcnt_tracker
       320 |     struct spinlock lock
       320 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       320 |         struct raw_spinlock rlock
       320 |           arch_spinlock_t raw_lock
       320 |             volatile unsigned int slock
       324 |           unsigned int magic
       328 |           unsigned int owner_cpu
       332 |           void * owner
       336 |           struct lockdep_map dep_map
       336 |             struct lock_class_key * key
       340 |             struct lock_class *[2] class_cache
       348 |             const char * name
       352 |             u8 wait_type_outer
       353 |             u8 wait_type_inner
       354 |             u8 lock_type
       356 |             int cpu
       360 |             unsigned long ip
       320 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       320 |           u8[16] __padding
       336 |           struct lockdep_map dep_map
       336 |             struct lock_class_key * key
       340 |             struct lock_class *[2] class_cache
       348 |             const char * name
       352 |             u8 wait_type_outer
       353 |             u8 wait_type_inner
       354 |             u8 lock_type
       356 |             int cpu
       360 |             unsigned long ip
       364 |     unsigned int quarantine_avail
       368 |     struct refcount_struct untracked
       368 |       atomic_t refs
       368 |         int counter
       372 |     struct refcount_struct no_tracker
       372 |       atomic_t refs
       372 |         int counter
       376 |     bool dead
       380 |     struct list_head list
       380 |       struct list_head * next
       384 |       struct list_head * prev
       388 |     struct list_head quarantine
       388 |       struct list_head * next
       392 |       struct list_head * prev
       396 |     char[32] name
       428 |   struct list_head dev_base_head
       428 |     struct list_head * next
       432 |     struct list_head * prev
       436 |   struct proc_dir_entry * proc_net
       440 |   struct proc_dir_entry * proc_net_stat
       444 |   struct ctl_table_set sysctls
       444 |     int (*)(struct ctl_table_set *) is_seen
       448 |     struct ctl_dir dir
       448 |       struct ctl_table_header header
       448 |         union ctl_table_header::(anonymous at ../include/linux/sysctl.h:163:2) 
       448 |           struct ctl_table_header::(anonymous at ../include/linux/sysctl.h:164:3) 
       448 |             struct ctl_table * ctl_table
       452 |             int ctl_table_size
       456 |             int used
       460 |             int count
       464 |             int nreg
       448 |           struct callback_head rcu
       448 |             struct callback_head * next
       452 |             void (*)(struct callback_head *) func
       468 |         struct completion * unregistering
       472 |         const struct ctl_table * ctl_table_arg
       476 |         struct ctl_table_root * root
       480 |         struct ctl_table_set * set
       484 |         struct ctl_dir * parent
       488 |         struct ctl_node * node
       492 |         struct hlist_head inodes
       492 |           struct hlist_node * first
       496 |         enum (unnamed enum at ../include/linux/sysctl.h:187:2) type
       500 |       struct rb_root root
       500 |         struct rb_node * rb_node
       504 |   struct sock * rtnl
       508 |   struct sock * genl_sock
       512 |   struct uevent_sock * uevent_sock
       516 |   struct hlist_head * dev_name_head
       520 |   struct hlist_head * dev_index_head
       524 |   struct xarray dev_by_index
       524 |     struct spinlock xa_lock
       524 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       524 |         struct raw_spinlock rlock
       524 |           arch_spinlock_t raw_lock
       524 |             volatile unsigned int slock
       528 |           unsigned int magic
       532 |           unsigned int owner_cpu
       536 |           void * owner
       540 |           struct lockdep_map dep_map
       540 |             struct lock_class_key * key
       544 |             struct lock_class *[2] class_cache
       552 |             const char * name
       556 |             u8 wait_type_outer
       557 |             u8 wait_type_inner
       558 |             u8 lock_type
       560 |             int cpu
       564 |             unsigned long ip
       524 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       524 |           u8[16] __padding
       540 |           struct lockdep_map dep_map
       540 |             struct lock_class_key * key
       544 |             struct lock_class *[2] class_cache
       552 |             const char * name
       556 |             u8 wait_type_outer
       557 |             u8 wait_type_inner
       558 |             u8 lock_type
       560 |             int cpu
       564 |             unsigned long ip
       568 |     gfp_t xa_flags
       572 |     void * xa_head
       576 |   struct raw_notifier_head netdev_chain
       576 |     struct notifier_block * head
       580 |   u32 hash_mix
       584 |   struct net_device * loopback_dev
       588 |   struct list_head rules_ops
       588 |     struct list_head * next
       592 |     struct list_head * prev
       596 |   struct netns_core core
       596 |     struct ctl_table_header * sysctl_hdr
       600 |     int sysctl_somaxconn
       604 |     int sysctl_optmem_max
       608 |     u8 sysctl_txrehash
       612 |     struct prot_inuse * prot_inuse
       616 |   struct netns_mib mib
       616 |     typeof(struct ipstats_mib) * ip_statistics
       620 |     typeof(struct ipstats_mib) * ipv6_statistics
       624 |     typeof(struct tcp_mib) * tcp_statistics
       628 |     typeof(struct linux_mib) * net_statistics
       632 |     typeof(struct udp_mib) * udp_statistics
       636 |     typeof(struct udp_mib) * udp_stats_in6
       640 |     typeof(struct linux_tls_mib) * tls_statistics
       644 |     typeof(struct udp_mib) * udplite_statistics
       648 |     typeof(struct udp_mib) * udplite_stats_in6
       652 |     typeof(struct icmp_mib) * icmp_statistics
       656 |     typeof(struct icmpmsg_mib) * icmpmsg_statistics
       660 |     typeof(struct icmpv6_mib) * icmpv6_statistics
       664 |     typeof(struct icmpv6msg_mib) * icmpv6msg_statistics
       668 |     struct proc_dir_entry * proc_net_devsnmp6
       672 |   struct netns_packet packet
       672 |     struct mutex sklist_lock
       672 |       atomic_t owner
       672 |         int counter
       676 |       struct raw_spinlock wait_lock
       676 |         arch_spinlock_t raw_lock
       676 |           volatile unsigned int slock
       680 |         unsigned int magic
       684 |         unsigned int owner_cpu
       688 |         void * owner
       692 |         struct lockdep_map dep_map
       692 |           struct lock_class_key * key
       696 |           struct lock_class *[2] class_cache
       704 |           const char * name
       708 |           u8 wait_type_outer
       709 |           u8 wait_type_inner
       710 |           u8 lock_type
       712 |           int cpu
       716 |           unsigned long ip
       720 |       struct list_head wait_list
       720 |         struct list_head * next
       724 |         struct list_head * prev
       728 |       void * magic
       732 |       struct lockdep_map dep_map
       732 |         struct lock_class_key * key
       736 |         struct lock_class *[2] class_cache
       744 |         const char * name
       748 |         u8 wait_type_outer
       749 |         u8 wait_type_inner
       750 |         u8 lock_type
       752 |         int cpu
       756 |         unsigned long ip
       760 |     struct hlist_head sklist
       760 |       struct hlist_node * first
       764 |   struct netns_unix unx
       764 |     struct unix_table table
       764 |       spinlock_t * locks
       768 |       struct hlist_head * buckets
       772 |     int sysctl_max_dgram_qlen
       776 |     struct ctl_table_header * ctl
       780 |   struct netns_nexthop nexthop
       780 |     struct rb_root rb_root
       780 |       struct rb_node * rb_node
       784 |     struct hlist_head * devhash
       788 |     unsigned int seq
       792 |     u32 last_id_allocated
       796 |     struct blocking_notifier_head notifier_chain
       796 |       struct rw_semaphore rwsem
       796 |         atomic_t count
       796 |           int counter
       800 |         atomic_t owner
       800 |           int counter
       804 |         struct raw_spinlock wait_lock
       804 |           arch_spinlock_t raw_lock
       804 |             volatile unsigned int slock
       808 |           unsigned int magic
       812 |           unsigned int owner_cpu
       816 |           void * owner
       820 |           struct lockdep_map dep_map
       820 |             struct lock_class_key * key
       824 |             struct lock_class *[2] class_cache
       832 |             const char * name
       836 |             u8 wait_type_outer
       837 |             u8 wait_type_inner
       838 |             u8 lock_type
       840 |             int cpu
       844 |             unsigned long ip
       848 |         struct list_head wait_list
       848 |           struct list_head * next
       852 |           struct list_head * prev
       856 |         void * magic
       860 |         struct lockdep_map dep_map
       860 |           struct lock_class_key * key
       864 |           struct lock_class *[2] class_cache
       872 |           const char * name
       876 |           u8 wait_type_outer
       877 |           u8 wait_type_inner
       878 |           u8 lock_type
       880 |           int cpu
       884 |           unsigned long ip
       888 |       struct notifier_block * head
       896 |   struct netns_ipv4 ipv4
       896 |     __u8[0] __cacheline_group_begin__netns_ipv4_read_tx
       896 |     u8 sysctl_tcp_early_retrans
       897 |     u8 sysctl_tcp_tso_win_divisor
       898 |     u8 sysctl_tcp_tso_rtt_log
       899 |     u8 sysctl_tcp_autocorking
       900 |     int sysctl_tcp_min_snd_mss
       904 |     unsigned int sysctl_tcp_notsent_lowat
       908 |     int sysctl_tcp_limit_output_bytes
       912 |     int sysctl_tcp_min_rtt_wlen
       916 |     int[3] sysctl_tcp_wmem
       928 |     u8 sysctl_ip_fwd_use_pmtu
       929 |     __u8[0] __cacheline_group_end__netns_ipv4_read_tx
       929 |     __u8[0] __cacheline_group_begin__netns_ipv4_read_txrx
       929 |     u8 sysctl_tcp_moderate_rcvbuf
       930 |     __u8[0] __cacheline_group_end__netns_ipv4_read_txrx
       930 |     __u8[0] __cacheline_group_begin__netns_ipv4_read_rx
       930 |     u8 sysctl_ip_early_demux
       931 |     u8 sysctl_tcp_early_demux
       932 |     int sysctl_tcp_reordering
       936 |     int[3] sysctl_tcp_rmem
       948 |     __u8[0] __cacheline_group_end__netns_ipv4_read_rx
       948 |     struct inet_timewait_death_row tcp_death_row
       948 |       struct refcount_struct tw_refcount
       948 |         atomic_t refs
       948 |           int counter
       952 |       struct inet_hashinfo * hashinfo
       956 |       int sysctl_max_tw_buckets
       960 |     struct udp_table * udp_table
       964 |     struct ctl_table_header * forw_hdr
       968 |     struct ctl_table_header * frags_hdr
       972 |     struct ctl_table_header * ipv4_hdr
       976 |     struct ctl_table_header * route_hdr
       980 |     struct ctl_table_header * xfrm4_hdr
       984 |     struct ipv4_devconf * devconf_all
       988 |     struct ipv4_devconf * devconf_dflt
       992 |     struct ip_ra_chain * ra_chain
       996 |     struct mutex ra_mutex
       996 |       atomic_t owner
       996 |         int counter
      1000 |       struct raw_spinlock wait_lock
      1000 |         arch_spinlock_t raw_lock
      1000 |           volatile unsigned int slock
      1004 |         unsigned int magic
      1008 |         unsigned int owner_cpu
      1012 |         void * owner
      1016 |         struct lockdep_map dep_map
      1016 |           struct lock_class_key * key
      1020 |           struct lock_class *[2] class_cache
      1028 |           const char * name
      1032 |           u8 wait_type_outer
      1033 |           u8 wait_type_inner
      1034 |           u8 lock_type
      1036 |           int cpu
      1040 |           unsigned long ip
      1044 |       struct list_head wait_list
      1044 |         struct list_head * next
      1048 |         struct list_head * prev
      1052 |       void * magic
      1056 |       struct lockdep_map dep_map
      1056 |         struct lock_class_key * key
      1060 |         struct lock_class *[2] class_cache
      1068 |         const char * name
      1072 |         u8 wait_type_outer
      1073 |         u8 wait_type_inner
      1074 |         u8 lock_type
      1076 |         int cpu
      1080 |         unsigned long ip
      1084 |     bool fib_has_custom_local_routes
      1085 |     bool fib_offload_disabled
      1086 |     u8 sysctl_tcp_shrink_window
      1088 |     atomic_t fib_num_tclassid_users
      1088 |       int counter
      1092 |     struct hlist_head * fib_table_hash
      1096 |     struct sock * fibnl
      1100 |     struct sock * mc_autojoin_sk
      1104 |     struct inet_peer_base * peers
      1108 |     struct fqdir * fqdir
      1112 |     u8 sysctl_icmp_echo_ignore_all
      1113 |     u8 sysctl_icmp_echo_enable_probe
      1114 |     u8 sysctl_icmp_echo_ignore_broadcasts
      1115 |     u8 sysctl_icmp_ignore_bogus_error_responses
      1116 |     u8 sysctl_icmp_errors_use_inbound_ifaddr
      1120 |     int sysctl_icmp_ratelimit
      1124 |     int sysctl_icmp_ratemask
      1128 |     u32 ip_rt_min_pmtu
      1132 |     int ip_rt_mtu_expires
      1136 |     int ip_rt_min_advmss
      1140 |     struct local_ports ip_local_ports
      1140 |       u32 range
      1144 |       bool warned
      1148 |     u8 sysctl_tcp_ecn
      1149 |     u8 sysctl_tcp_ecn_fallback
      1150 |     u8 sysctl_ip_default_ttl
      1151 |     u8 sysctl_ip_no_pmtu_disc
      1152 |     u8 sysctl_ip_fwd_update_priority
      1153 |     u8 sysctl_ip_nonlocal_bind
      1154 |     u8 sysctl_ip_autobind_reuse
      1155 |     u8 sysctl_ip_dynaddr
      1156 |     u8 sysctl_udp_early_demux
      1157 |     u8 sysctl_nexthop_compat_mode
      1158 |     u8 sysctl_fwmark_reflect
      1159 |     u8 sysctl_tcp_fwmark_accept
      1160 |     u8 sysctl_tcp_mtu_probing
      1164 |     int sysctl_tcp_mtu_probe_floor
      1168 |     int sysctl_tcp_base_mss
      1172 |     int sysctl_tcp_probe_threshold
      1176 |     u32 sysctl_tcp_probe_interval
      1180 |     int sysctl_tcp_keepalive_time
      1184 |     int sysctl_tcp_keepalive_intvl
      1188 |     u8 sysctl_tcp_keepalive_probes
      1189 |     u8 sysctl_tcp_syn_retries
      1190 |     u8 sysctl_tcp_synack_retries
      1191 |     u8 sysctl_tcp_syncookies
      1192 |     u8 sysctl_tcp_migrate_req
      1193 |     u8 sysctl_tcp_comp_sack_nr
      1194 |     u8 sysctl_tcp_backlog_ack_defer
      1195 |     u8 sysctl_tcp_pingpong_thresh
      1196 |     u8 sysctl_tcp_retries1
      1197 |     u8 sysctl_tcp_retries2
      1198 |     u8 sysctl_tcp_orphan_retries
      1199 |     u8 sysctl_tcp_tw_reuse
      1200 |     int sysctl_tcp_fin_timeout
      1204 |     u8 sysctl_tcp_sack
      1205 |     u8 sysctl_tcp_window_scaling
      1206 |     u8 sysctl_tcp_timestamps
      1208 |     int sysctl_tcp_rto_min_us
      1212 |     u8 sysctl_tcp_recovery
      1213 |     u8 sysctl_tcp_thin_linear_timeouts
      1214 |     u8 sysctl_tcp_slow_start_after_idle
      1215 |     u8 sysctl_tcp_retrans_collapse
      1216 |     u8 sysctl_tcp_stdurg
      1217 |     u8 sysctl_tcp_rfc1337
      1218 |     u8 sysctl_tcp_abort_on_overflow
      1219 |     u8 sysctl_tcp_fack
      1220 |     int sysctl_tcp_max_reordering
      1224 |     int sysctl_tcp_adv_win_scale
      1228 |     u8 sysctl_tcp_dsack
      1229 |     u8 sysctl_tcp_app_win
      1230 |     u8 sysctl_tcp_frto
      1231 |     u8 sysctl_tcp_nometrics_save
      1232 |     u8 sysctl_tcp_no_ssthresh_metrics_save
      1233 |     u8 sysctl_tcp_workaround_signed_windows
      1236 |     int sysctl_tcp_challenge_ack_limit
      1240 |     u8 sysctl_tcp_min_tso_segs
      1241 |     u8 sysctl_tcp_reflect_tos
      1244 |     int sysctl_tcp_invalid_ratelimit
      1248 |     int sysctl_tcp_pacing_ss_ratio
      1252 |     int sysctl_tcp_pacing_ca_ratio
      1256 |     unsigned int sysctl_tcp_child_ehash_entries
      1260 |     unsigned long sysctl_tcp_comp_sack_delay_ns
      1264 |     unsigned long sysctl_tcp_comp_sack_slack_ns
      1268 |     int sysctl_max_syn_backlog
      1272 |     int sysctl_tcp_fastopen
      1276 |     const struct tcp_congestion_ops * tcp_congestion_control
      1280 |     struct tcp_fastopen_context * tcp_fastopen_ctx
      1284 |     unsigned int sysctl_tcp_fastopen_blackhole_timeout
      1288 |     atomic_t tfo_active_disable_times
      1288 |       int counter
      1292 |     unsigned long tfo_active_disable_stamp
      1296 |     u32 tcp_challenge_timestamp
      1300 |     u32 tcp_challenge_count
      1304 |     u8 sysctl_tcp_plb_enabled
      1305 |     u8 sysctl_tcp_plb_idle_rehash_rounds
      1306 |     u8 sysctl_tcp_plb_rehash_rounds
      1307 |     u8 sysctl_tcp_plb_suspend_rto_sec
      1308 |     int sysctl_tcp_plb_cong_thresh
      1312 |     int sysctl_udp_wmem_min
      1316 |     int sysctl_udp_rmem_min
      1320 |     u8 sysctl_fib_notify_on_flag_change
      1321 |     u8 sysctl_tcp_syn_linear_timeouts
      1322 |     u8 sysctl_igmp_llm_reports
      1324 |     int sysctl_igmp_max_memberships
      1328 |     int sysctl_igmp_max_msf
      1332 |     int sysctl_igmp_qrv
      1336 |     struct ping_group_range ping_group_range
      1336 |       seqlock_t lock
      1336 |         struct seqcount_spinlock seqcount
      1336 |           struct seqcount seqcount
      1336 |             unsigned int sequence
      1340 |             struct lockdep_map dep_map
      1340 |               struct lock_class_key * key
      1344 |               struct lock_class *[2] class_cache
      1352 |               const char * name
      1356 |               u8 wait_type_outer
      1357 |               u8 wait_type_inner
      1358 |               u8 lock_type
      1360 |               int cpu
      1364 |               unsigned long ip
      1368 |           spinlock_t * lock
      1372 |         struct spinlock lock
      1372 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1372 |             struct raw_spinlock rlock
      1372 |               arch_spinlock_t raw_lock
      1372 |                 volatile unsigned int slock
      1376 |               unsigned int magic
      1380 |               unsigned int owner_cpu
      1384 |               void * owner
      1388 |               struct lockdep_map dep_map
      1388 |                 struct lock_class_key * key
      1392 |                 struct lock_class *[2] class_cache
      1400 |                 const char * name
      1404 |                 u8 wait_type_outer
      1405 |                 u8 wait_type_inner
      1406 |                 u8 lock_type
      1408 |                 int cpu
      1412 |                 unsigned long ip
      1372 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1372 |               u8[16] __padding
      1388 |               struct lockdep_map dep_map
      1388 |                 struct lock_class_key * key
      1392 |                 struct lock_class *[2] class_cache
      1400 |                 const char * name
      1404 |                 u8 wait_type_outer
      1405 |                 u8 wait_type_inner
      1406 |                 u8 lock_type
      1408 |                 int cpu
      1412 |                 unsigned long ip
      1416 |       kgid_t[2] range
      1424 |     atomic_t dev_addr_genid
      1424 |       int counter
      1428 |     unsigned int sysctl_udp_child_hash_entries
      1432 |     unsigned long * sysctl_local_reserved_ports
      1436 |     int sysctl_ip_prot_sock
      1440 |     struct fib_notifier_ops * notifier_ops
      1444 |     unsigned int fib_seq
      1448 |     struct fib_notifier_ops * ipmr_notifier_ops
      1452 |     unsigned int ipmr_seq
      1456 |     atomic_t rt_genid
      1456 |       int counter
      1464 |     siphash_key_t ip_id_key
      1464 |       u64[2] key
      1480 |   struct netns_ipv6 ipv6
      1480 |     struct dst_ops ip6_dst_ops
      1480 |       unsigned short family
      1484 |       unsigned int gc_thresh
      1488 |       void (*)(struct dst_ops *) gc
      1492 |       struct dst_entry *(*)(struct dst_entry *, __u32) check
      1496 |       unsigned int (*)(const struct dst_entry *) default_advmss
      1500 |       unsigned int (*)(const struct dst_entry *) mtu
      1504 |       u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
      1508 |       void (*)(struct dst_entry *) destroy
      1512 |       void (*)(struct dst_entry *, struct net_device *) ifdown
      1516 |       void (*)(struct sock *, struct dst_entry *) negative_advice
      1520 |       void (*)(struct sk_buff *) link_failure
      1524 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
      1528 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
      1532 |       int (*)(struct net *, struct sock *, struct sk_buff *) local_out
      1536 |       struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
      1540 |       void (*)(const struct dst_entry *, const void *) confirm_neigh
      1544 |       struct kmem_cache * kmem_cachep
      1552 |       struct percpu_counter pcpuc_entries
      1552 |         s64 count
      1560 |     struct netns_sysctl_ipv6 sysctl
      1560 |       struct ctl_table_header * hdr
      1564 |       struct ctl_table_header * route_hdr
      1568 |       struct ctl_table_header * icmp_hdr
      1572 |       struct ctl_table_header * frags_hdr
      1576 |       struct ctl_table_header * xfrm6_hdr
      1580 |       int flush_delay
      1584 |       int ip6_rt_max_size
      1588 |       int ip6_rt_gc_min_interval
      1592 |       int ip6_rt_gc_timeout
      1596 |       int ip6_rt_gc_interval
      1600 |       int ip6_rt_gc_elasticity
      1604 |       int ip6_rt_mtu_expires
      1608 |       int ip6_rt_min_advmss
      1612 |       u32 multipath_hash_fields
      1616 |       u8 multipath_hash_policy
      1617 |       u8 bindv6only
      1618 |       u8 flowlabel_consistency
      1619 |       u8 auto_flowlabels
      1620 |       int icmpv6_time
      1624 |       u8 icmpv6_echo_ignore_all
      1625 |       u8 icmpv6_echo_ignore_multicast
      1626 |       u8 icmpv6_echo_ignore_anycast
      1628 |       unsigned long[8] icmpv6_ratemask
      1660 |       unsigned long * icmpv6_ratemask_ptr
      1664 |       u8 anycast_src_echo_reply
      1665 |       u8 ip_nonlocal_bind
      1666 |       u8 fwmark_reflect
      1667 |       u8 flowlabel_state_ranges
      1668 |       int idgen_retries
      1672 |       int idgen_delay
      1676 |       int flowlabel_reflect
      1680 |       int max_dst_opts_cnt
      1684 |       int max_hbh_opts_cnt
      1688 |       int max_dst_opts_len
      1692 |       int max_hbh_opts_len
      1696 |       int seg6_flowlabel
      1700 |       u32 ioam6_id
      1704 |       u64 ioam6_id_wide
      1712 |       u8 skip_notify_on_dev_down
      1713 |       u8 fib_notify_on_flag_change
      1714 |       u8 icmpv6_error_anycast_as_unicast
      1720 |     struct ipv6_devconf * devconf_all
      1724 |     struct ipv6_devconf * devconf_dflt
      1728 |     struct inet_peer_base * peers
      1732 |     struct fqdir * fqdir
      1736 |     struct fib6_info * fib6_null_entry
      1740 |     struct rt6_info * ip6_null_entry
      1744 |     struct rt6_statistics * rt6_stats
      1748 |     struct timer_list ip6_fib_timer
      1748 |       struct hlist_node entry
      1748 |         struct hlist_node * next
      1752 |         struct hlist_node ** pprev
      1756 |       unsigned long expires
      1760 |       void (*)(struct timer_list *) function
      1764 |       u32 flags
      1768 |       struct lockdep_map lockdep_map
      1768 |         struct lock_class_key * key
      1772 |         struct lock_class *[2] class_cache
      1780 |         const char * name
      1784 |         u8 wait_type_outer
      1785 |         u8 wait_type_inner
      1786 |         u8 lock_type
      1788 |         int cpu
      1792 |         unsigned long ip
      1796 |     struct hlist_head * fib_table_hash
      1800 |     struct fib6_table * fib6_main_tbl
      1804 |     struct list_head fib6_walkers
      1804 |       struct list_head * next
      1808 |       struct list_head * prev
      1812 |     rwlock_t fib6_walker_lock
      1812 |       arch_rwlock_t raw_lock
      1812 |       unsigned int magic
      1816 |       unsigned int owner_cpu
      1820 |       void * owner
      1824 |       struct lockdep_map dep_map
      1824 |         struct lock_class_key * key
      1828 |         struct lock_class *[2] class_cache
      1836 |         const char * name
      1840 |         u8 wait_type_outer
      1841 |         u8 wait_type_inner
      1842 |         u8 lock_type
      1844 |         int cpu
      1848 |         unsigned long ip
      1852 |     struct spinlock fib6_gc_lock
      1852 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1852 |         struct raw_spinlock rlock
      1852 |           arch_spinlock_t raw_lock
      1852 |             volatile unsigned int slock
      1856 |           unsigned int magic
      1860 |           unsigned int owner_cpu
      1864 |           void * owner
      1868 |           struct lockdep_map dep_map
      1868 |             struct lock_class_key * key
      1872 |             struct lock_class *[2] class_cache
      1880 |             const char * name
      1884 |             u8 wait_type_outer
      1885 |             u8 wait_type_inner
      1886 |             u8 lock_type
      1888 |             int cpu
      1892 |             unsigned long ip
      1852 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1852 |           u8[16] __padding
      1868 |           struct lockdep_map dep_map
      1868 |             struct lock_class_key * key
      1872 |             struct lock_class *[2] class_cache
      1880 |             const char * name
      1884 |             u8 wait_type_outer
      1885 |             u8 wait_type_inner
      1886 |             u8 lock_type
      1888 |             int cpu
      1892 |             unsigned long ip
      1896 |     atomic_t ip6_rt_gc_expire
      1896 |       int counter
      1900 |     unsigned long ip6_rt_last_gc
      1904 |     unsigned char flowlabel_has_excl
      1908 |     struct sock * ndisc_sk
      1912 |     struct sock * tcp_sk
      1916 |     struct sock * igmp_sk
      1920 |     struct sock * mc_autojoin_sk
      1924 |     struct hlist_head * inet6_addr_lst
      1928 |     struct spinlock addrconf_hash_lock
      1928 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1928 |         struct raw_spinlock rlock
      1928 |           arch_spinlock_t raw_lock
      1928 |             volatile unsigned int slock
      1932 |           unsigned int magic
      1936 |           unsigned int owner_cpu
      1940 |           void * owner
      1944 |           struct lockdep_map dep_map
      1944 |             struct lock_class_key * key
      1948 |             struct lock_class *[2] class_cache
      1956 |             const char * name
      1960 |             u8 wait_type_outer
      1961 |             u8 wait_type_inner
      1962 |             u8 lock_type
      1964 |             int cpu
      1968 |             unsigned long ip
      1928 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1928 |           u8[16] __padding
      1944 |           struct lockdep_map dep_map
      1944 |             struct lock_class_key * key
      1948 |             struct lock_class *[2] class_cache
      1956 |             const char * name
      1960 |             u8 wait_type_outer
      1961 |             u8 wait_type_inner
      1962 |             u8 lock_type
      1964 |             int cpu
      1968 |             unsigned long ip
      1972 |     struct delayed_work addr_chk_work
      1972 |       struct work_struct work
      1972 |         atomic_t data
      1972 |           int counter
      1976 |         struct list_head entry
      1976 |           struct list_head * next
      1980 |           struct list_head * prev
      1984 |         work_func_t func
      1988 |         struct lockdep_map lockdep_map
      1988 |           struct lock_class_key * key
      1992 |           struct lock_class *[2] class_cache
      2000 |           const char * name
      2004 |           u8 wait_type_outer
      2005 |           u8 wait_type_inner
      2006 |           u8 lock_type
      2008 |           int cpu
      2012 |           unsigned long ip
      2016 |       struct timer_list timer
      2016 |         struct hlist_node entry
      2016 |           struct hlist_node * next
      2020 |           struct hlist_node ** pprev
      2024 |         unsigned long expires
      2028 |         void (*)(struct timer_list *) function
      2032 |         u32 flags
      2036 |         struct lockdep_map lockdep_map
      2036 |           struct lock_class_key * key
      2040 |           struct lock_class *[2] class_cache
      2048 |           const char * name
      2052 |           u8 wait_type_outer
      2053 |           u8 wait_type_inner
      2054 |           u8 lock_type
      2056 |           int cpu
      2060 |           unsigned long ip
      2064 |       struct workqueue_struct * wq
      2068 |       int cpu
      2072 |     atomic_t dev_addr_genid
      2072 |       int counter
      2076 |     atomic_t fib6_sernum
      2076 |       int counter
      2080 |     struct seg6_pernet_data * seg6_data
      2084 |     struct fib_notifier_ops * notifier_ops
      2088 |     struct fib_notifier_ops * ip6mr_notifier_ops
      2092 |     unsigned int ipmr_seq
      2096 |     struct netns_ipv6::(unnamed at ../include/net/netns/ipv6.h:116:2) ip6addrlbl_table
      2096 |       struct hlist_head head
      2096 |         struct hlist_node * first
      2100 |       struct spinlock lock
      2100 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2100 |           struct raw_spinlock rlock
      2100 |             arch_spinlock_t raw_lock
      2100 |               volatile unsigned int slock
      2104 |             unsigned int magic
      2108 |             unsigned int owner_cpu
      2112 |             void * owner
      2116 |             struct lockdep_map dep_map
      2116 |               struct lock_class_key * key
      2120 |               struct lock_class *[2] class_cache
      2128 |               const char * name
      2132 |               u8 wait_type_outer
      2133 |               u8 wait_type_inner
      2134 |               u8 lock_type
      2136 |               int cpu
      2140 |               unsigned long ip
      2100 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2100 |             u8[16] __padding
      2116 |             struct lockdep_map dep_map
      2116 |               struct lock_class_key * key
      2120 |               struct lock_class *[2] class_cache
      2128 |               const char * name
      2132 |               u8 wait_type_outer
      2133 |               u8 wait_type_inner
      2134 |               u8 lock_type
      2136 |               int cpu
      2140 |               unsigned long ip
      2144 |       u32 seq
      2148 |     struct ioam6_pernet_data * ioam6_data
      2152 |   struct netns_sctp sctp
      2152 |     typeof(struct sctp_mib) * sctp_statistics
      2156 |     struct proc_dir_entry * proc_net_sctp
      2160 |     struct ctl_table_header * sysctl_header
      2164 |     struct sock * ctl_sock
      2168 |     struct sock * udp4_sock
      2172 |     struct sock * udp6_sock
      2176 |     int udp_port
      2180 |     int encap_port
      2184 |     struct list_head local_addr_list
      2184 |       struct list_head * next
      2188 |       struct list_head * prev
      2192 |     struct list_head addr_waitq
      2192 |       struct list_head * next
      2196 |       struct list_head * prev
      2200 |     struct timer_list addr_wq_timer
      2200 |       struct hlist_node entry
      2200 |         struct hlist_node * next
      2204 |         struct hlist_node ** pprev
      2208 |       unsigned long expires
      2212 |       void (*)(struct timer_list *) function
      2216 |       u32 flags
      2220 |       struct lockdep_map lockdep_map
      2220 |         struct lock_class_key * key
      2224 |         struct lock_class *[2] class_cache
      2232 |         const char * name
      2236 |         u8 wait_type_outer
      2237 |         u8 wait_type_inner
      2238 |         u8 lock_type
      2240 |         int cpu
      2244 |         unsigned long ip
      2248 |     struct list_head auto_asconf_splist
      2248 |       struct list_head * next
      2252 |       struct list_head * prev
      2256 |     struct spinlock addr_wq_lock
      2256 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2256 |         struct raw_spinlock rlock
      2256 |           arch_spinlock_t raw_lock
      2256 |             volatile unsigned int slock
      2260 |           unsigned int magic
      2264 |           unsigned int owner_cpu
      2268 |           void * owner
      2272 |           struct lockdep_map dep_map
      2272 |             struct lock_class_key * key
      2276 |             struct lock_class *[2] class_cache
      2284 |             const char * name
      2288 |             u8 wait_type_outer
      2289 |             u8 wait_type_inner
      2290 |             u8 lock_type
      2292 |             int cpu
      2296 |             unsigned long ip
      2256 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2256 |           u8[16] __padding
      2272 |           struct lockdep_map dep_map
      2272 |             struct lock_class_key * key
      2276 |             struct lock_class *[2] class_cache
      2284 |             const char * name
      2288 |             u8 wait_type_outer
      2289 |             u8 wait_type_inner
      2290 |             u8 lock_type
      2292 |             int cpu
      2296 |             unsigned long ip
      2300 |     struct spinlock local_addr_lock
      2300 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2300 |         struct raw_spinlock rlock
      2300 |           arch_spinlock_t raw_lock
      2300 |             volatile unsigned int slock
      2304 |           unsigned int magic
      2308 |           unsigned int owner_cpu
      2312 |           void * owner
      2316 |           struct lockdep_map dep_map
      2316 |             struct lock_class_key * key
      2320 |             struct lock_class *[2] class_cache
      2328 |             const char * name
      2332 |             u8 wait_type_outer
      2333 |             u8 wait_type_inner
      2334 |             u8 lock_type
      2336 |             int cpu
      2340 |             unsigned long ip
      2300 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2300 |           u8[16] __padding
      2316 |           struct lockdep_map dep_map
      2316 |             struct lock_class_key * key
      2320 |             struct lock_class *[2] class_cache
      2328 |             const char * name
      2332 |             u8 wait_type_outer
      2333 |             u8 wait_type_inner
      2334 |             u8 lock_type
      2336 |             int cpu
      2340 |             unsigned long ip
      2344 |     unsigned int rto_initial
      2348 |     unsigned int rto_min
      2352 |     unsigned int rto_max
      2356 |     int rto_alpha
      2360 |     int rto_beta
      2364 |     int max_burst
      2368 |     int cookie_preserve_enable
      2372 |     char * sctp_hmac_alg
      2376 |     unsigned int valid_cookie_life
      2380 |     unsigned int sack_timeout
      2384 |     unsigned int hb_interval
      2388 |     unsigned int probe_interval
      2392 |     int max_retrans_association
      2396 |     int max_retrans_path
      2400 |     int max_retrans_init
      2404 |     int pf_retrans
      2408 |     int ps_retrans
      2412 |     int pf_enable
      2416 |     int pf_expose
      2420 |     int sndbuf_policy
      2424 |     int rcvbuf_policy
      2428 |     int default_auto_asconf
      2432 |     int addip_enable
      2436 |     int addip_noauth
      2440 |     int prsctp_enable
      2444 |     int reconf_enable
      2448 |     int auth_enable
      2452 |     int intl_enable
      2456 |     int ecn_enable
      2460 |     int scope_policy
      2464 |     int rwnd_upd_shift
      2468 |     unsigned long max_autoclose
      2472 |   struct sk_buff_head wext_nlevents
      2472 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
      2472 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
      2472 |         struct sk_buff * next
      2476 |         struct sk_buff * prev
      2472 |       struct sk_buff_list list
      2472 |         struct sk_buff * next
      2476 |         struct sk_buff * prev
      2480 |     __u32 qlen
      2484 |     struct spinlock lock
      2484 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2484 |         struct raw_spinlock rlock
      2484 |           arch_spinlock_t raw_lock
      2484 |             volatile unsigned int slock
      2488 |           unsigned int magic
      2492 |           unsigned int owner_cpu
      2496 |           void * owner
      2500 |           struct lockdep_map dep_map
      2500 |             struct lock_class_key * key
      2504 |             struct lock_class *[2] class_cache
      2512 |             const char * name
      2516 |             u8 wait_type_outer
      2517 |             u8 wait_type_inner
      2518 |             u8 lock_type
      2520 |             int cpu
      2524 |             unsigned long ip
      2484 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2484 |           u8[16] __padding
      2500 |           struct lockdep_map dep_map
      2500 |             struct lock_class_key * key
      2504 |             struct lock_class *[2] class_cache
      2512 |             const char * name
      2516 |             u8 wait_type_outer
      2517 |             u8 wait_type_inner
      2518 |             u8 lock_type
      2520 |             int cpu
      2524 |             unsigned long ip
      2528 |   struct net_generic * gen
      2532 |   struct netns_bpf bpf
      2532 |     struct bpf_prog_array *[2] run_array
      2540 |     struct bpf_prog *[2] progs
      2548 |     struct list_head[2] links
      2568 |   struct netns_xfrm xfrm
      2568 |     struct list_head state_all
      2568 |       struct list_head * next
      2572 |       struct list_head * prev
      2576 |     struct hlist_head * state_bydst
      2580 |     struct hlist_head * state_bysrc
      2584 |     struct hlist_head * state_byspi
      2588 |     struct hlist_head * state_byseq
      2592 |     unsigned int state_hmask
      2596 |     unsigned int state_num
      2600 |     struct work_struct state_hash_work
      2600 |       atomic_t data
      2600 |         int counter
      2604 |       struct list_head entry
      2604 |         struct list_head * next
      2608 |         struct list_head * prev
      2612 |       work_func_t func
      2616 |       struct lockdep_map lockdep_map
      2616 |         struct lock_class_key * key
      2620 |         struct lock_class *[2] class_cache
      2628 |         const char * name
      2632 |         u8 wait_type_outer
      2633 |         u8 wait_type_inner
      2634 |         u8 lock_type
      2636 |         int cpu
      2640 |         unsigned long ip
      2644 |     struct list_head policy_all
      2644 |       struct list_head * next
      2648 |       struct list_head * prev
      2652 |     struct hlist_head * policy_byidx
      2656 |     unsigned int policy_idx_hmask
      2660 |     unsigned int idx_generator
      2664 |     struct hlist_head[3] policy_inexact
      2676 |     struct xfrm_policy_hash[3] policy_bydst
      2712 |     unsigned int[6] policy_count
      2736 |     struct work_struct policy_hash_work
      2736 |       atomic_t data
      2736 |         int counter
      2740 |       struct list_head entry
      2740 |         struct list_head * next
      2744 |         struct list_head * prev
      2748 |       work_func_t func
      2752 |       struct lockdep_map lockdep_map
      2752 |         struct lock_class_key * key
      2756 |         struct lock_class *[2] class_cache
      2764 |         const char * name
      2768 |         u8 wait_type_outer
      2769 |         u8 wait_type_inner
      2770 |         u8 lock_type
      2772 |         int cpu
      2776 |         unsigned long ip
      2780 |     struct xfrm_policy_hthresh policy_hthresh
      2780 |       struct work_struct work
      2780 |         atomic_t data
      2780 |           int counter
      2784 |         struct list_head entry
      2784 |           struct list_head * next
      2788 |           struct list_head * prev
      2792 |         work_func_t func
      2796 |         struct lockdep_map lockdep_map
      2796 |           struct lock_class_key * key
      2800 |           struct lock_class *[2] class_cache
      2808 |           const char * name
      2812 |           u8 wait_type_outer
      2813 |           u8 wait_type_inner
      2814 |           u8 lock_type
      2816 |           int cpu
      2820 |           unsigned long ip
      2824 |       seqlock_t lock
      2824 |         struct seqcount_spinlock seqcount
      2824 |           struct seqcount seqcount
      2824 |             unsigned int sequence
      2828 |             struct lockdep_map dep_map
      2828 |               struct lock_class_key * key
      2832 |               struct lock_class *[2] class_cache
      2840 |               const char * name
      2844 |               u8 wait_type_outer
      2845 |               u8 wait_type_inner
      2846 |               u8 lock_type
      2848 |               int cpu
      2852 |               unsigned long ip
      2856 |           spinlock_t * lock
      2860 |         struct spinlock lock
      2860 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2860 |             struct raw_spinlock rlock
      2860 |               arch_spinlock_t raw_lock
      2860 |                 volatile unsigned int slock
      2864 |               unsigned int magic
      2868 |               unsigned int owner_cpu
      2872 |               void * owner
      2876 |               struct lockdep_map dep_map
      2876 |                 struct lock_class_key * key
      2880 |                 struct lock_class *[2] class_cache
      2888 |                 const char * name
      2892 |                 u8 wait_type_outer
      2893 |                 u8 wait_type_inner
      2894 |                 u8 lock_type
      2896 |                 int cpu
      2900 |                 unsigned long ip
      2860 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2860 |               u8[16] __padding
      2876 |               struct lockdep_map dep_map
      2876 |                 struct lock_class_key * key
      2880 |                 struct lock_class *[2] class_cache
      2888 |                 const char * name
      2892 |                 u8 wait_type_outer
      2893 |                 u8 wait_type_inner
      2894 |                 u8 lock_type
      2896 |                 int cpu
      2900 |                 unsigned long ip
      2904 |       u8 lbits4
      2905 |       u8 rbits4
      2906 |       u8 lbits6
      2907 |       u8 rbits6
      2908 |     struct list_head inexact_bins
      2908 |       struct list_head * next
      2912 |       struct list_head * prev
      2916 |     struct sock * nlsk
      2920 |     struct sock * nlsk_stash
      2924 |     u32 sysctl_aevent_etime
      2928 |     u32 sysctl_aevent_rseqth
      2932 |     int sysctl_larval_drop
      2936 |     u32 sysctl_acq_expires
      2940 |     u8[3] policy_default
      2944 |     struct ctl_table_header * sysctl_hdr
      2952 |     struct dst_ops xfrm4_dst_ops
      2952 |       unsigned short family
      2956 |       unsigned int gc_thresh
      2960 |       void (*)(struct dst_ops *) gc
      2964 |       struct dst_entry *(*)(struct dst_entry *, __u32) check
      2968 |       unsigned int (*)(const struct dst_entry *) default_advmss
      2972 |       unsigned int (*)(const struct dst_entry *) mtu
      2976 |       u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
      2980 |       void (*)(struct dst_entry *) destroy
      2984 |       void (*)(struct dst_entry *, struct net_device *) ifdown
      2988 |       void (*)(struct sock *, struct dst_entry *) negative_advice
      2992 |       void (*)(struct sk_buff *) link_failure
      2996 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
      3000 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
      3004 |       int (*)(struct net *, struct sock *, struct sk_buff *) local_out
      3008 |       struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
      3012 |       void (*)(const struct dst_entry *, const void *) confirm_neigh
      3016 |       struct kmem_cache * kmem_cachep
      3024 |       struct percpu_counter pcpuc_entries
      3024 |         s64 count
      3032 |     struct dst_ops xfrm6_dst_ops
      3032 |       unsigned short family
      3036 |       unsigned int gc_thresh
      3040 |       void (*)(struct dst_ops *) gc
      3044 |       struct dst_entry *(*)(struct dst_entry *, __u32) check
      3048 |       unsigned int (*)(const struct dst_entry *) default_advmss
      3052 |       unsigned int (*)(const struct dst_entry *) mtu
      3056 |       u32 *(*)(struct dst_entry *, unsigned long) cow_metrics
      3060 |       void (*)(struct dst_entry *) destroy
      3064 |       void (*)(struct dst_entry *, struct net_device *) ifdown
      3068 |       void (*)(struct sock *, struct dst_entry *) negative_advice
      3072 |       void (*)(struct sk_buff *) link_failure
      3076 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *, u32, bool) update_pmtu
      3080 |       void (*)(struct dst_entry *, struct sock *, struct sk_buff *) redirect
      3084 |       int (*)(struct net *, struct sock *, struct sk_buff *) local_out
      3088 |       struct neighbour *(*)(const struct dst_entry *, struct sk_buff *, const void *) neigh_lookup
      3092 |       void (*)(const struct dst_entry *, const void *) confirm_neigh
      3096 |       struct kmem_cache * kmem_cachep
      3104 |       struct percpu_counter pcpuc_entries
      3104 |         s64 count
      3112 |     struct spinlock xfrm_state_lock
      3112 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      3112 |         struct raw_spinlock rlock
      3112 |           arch_spinlock_t raw_lock
      3112 |             volatile unsigned int slock
      3116 |           unsigned int magic
      3120 |           unsigned int owner_cpu
      3124 |           void * owner
      3128 |           struct lockdep_map dep_map
      3128 |             struct lock_class_key * key
      3132 |             struct lock_class *[2] class_cache
      3140 |             const char * name
      3144 |             u8 wait_type_outer
      3145 |             u8 wait_type_inner
      3146 |             u8 lock_type
      3148 |             int cpu
      3152 |             unsigned long ip
      3112 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      3112 |           u8[16] __padding
      3128 |           struct lockdep_map dep_map
      3128 |             struct lock_class_key * key
      3132 |             struct lock_class *[2] class_cache
      3140 |             const char * name
      3144 |             u8 wait_type_outer
      3145 |             u8 wait_type_inner
      3146 |             u8 lock_type
      3148 |             int cpu
      3152 |             unsigned long ip
      3156 |     struct seqcount_spinlock xfrm_state_hash_generation
      3156 |       struct seqcount seqcount
      3156 |         unsigned int sequence
      3160 |         struct lockdep_map dep_map
      3160 |           struct lock_class_key * key
      3164 |           struct lock_class *[2] class_cache
      3172 |           const char * name
      3176 |           u8 wait_type_outer
      3177 |           u8 wait_type_inner
      3178 |           u8 lock_type
      3180 |           int cpu
      3184 |           unsigned long ip
      3188 |       spinlock_t * lock
      3192 |     struct seqcount_spinlock xfrm_policy_hash_generation
      3192 |       struct seqcount seqcount
      3192 |         unsigned int sequence
      3196 |         struct lockdep_map dep_map
      3196 |           struct lock_class_key * key
      3200 |           struct lock_class *[2] class_cache
      3208 |           const char * name
      3212 |           u8 wait_type_outer
      3213 |           u8 wait_type_inner
      3214 |           u8 lock_type
      3216 |           int cpu
      3220 |           unsigned long ip
      3224 |       spinlock_t * lock
      3228 |     struct spinlock xfrm_policy_lock
      3228 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      3228 |         struct raw_spinlock rlock
      3228 |           arch_spinlock_t raw_lock
      3228 |             volatile unsigned int slock
      3232 |           unsigned int magic
      3236 |           unsigned int owner_cpu
      3240 |           void * owner
      3244 |           struct lockdep_map dep_map
      3244 |             struct lock_class_key * key
      3248 |             struct lock_class *[2] class_cache
      3256 |             const char * name
      3260 |             u8 wait_type_outer
      3261 |             u8 wait_type_inner
      3262 |             u8 lock_type
      3264 |             int cpu
      3268 |             unsigned long ip
      3228 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      3228 |           u8[16] __padding
      3244 |           struct lockdep_map dep_map
      3244 |             struct lock_class_key * key
      3248 |             struct lock_class *[2] class_cache
      3256 |             const char * name
      3260 |             u8 wait_type_outer
      3261 |             u8 wait_type_inner
      3262 |             u8 lock_type
      3264 |             int cpu
      3268 |             unsigned long ip
      3272 |     struct mutex xfrm_cfg_mutex
      3272 |       atomic_t owner
      3272 |         int counter
      3276 |       struct raw_spinlock wait_lock
      3276 |         arch_spinlock_t raw_lock
      3276 |           volatile unsigned int slock
      3280 |         unsigned int magic
      3284 |         unsigned int owner_cpu
      3288 |         void * owner
      3292 |         struct lockdep_map dep_map
      3292 |           struct lock_class_key * key
      3296 |           struct lock_class *[2] class_cache
      3304 |           const char * name
      3308 |           u8 wait_type_outer
      3309 |           u8 wait_type_inner
      3310 |           u8 lock_type
      3312 |           int cpu
      3316 |           unsigned long ip
      3320 |       struct list_head wait_list
      3320 |         struct list_head * next
      3324 |         struct list_head * prev
      3328 |       void * magic
      3332 |       struct lockdep_map dep_map
      3332 |         struct lock_class_key * key
      3336 |         struct lock_class *[2] class_cache
      3344 |         const char * name
      3348 |         u8 wait_type_outer
      3349 |         u8 wait_type_inner
      3350 |         u8 lock_type
      3352 |         int cpu
      3356 |         unsigned long ip
      3360 |     struct delayed_work nat_keepalive_work
      3360 |       struct work_struct work
      3360 |         atomic_t data
      3360 |           int counter
      3364 |         struct list_head entry
      3364 |           struct list_head * next
      3368 |           struct list_head * prev
      3372 |         work_func_t func
      3376 |         struct lockdep_map lockdep_map
      3376 |           struct lock_class_key * key
      3380 |           struct lock_class *[2] class_cache
      3388 |           const char * name
      3392 |           u8 wait_type_outer
      3393 |           u8 wait_type_inner
      3394 |           u8 lock_type
      3396 |           int cpu
      3400 |           unsigned long ip
      3404 |       struct timer_list timer
      3404 |         struct hlist_node entry
      3404 |           struct hlist_node * next
      3408 |           struct hlist_node ** pprev
      3412 |         unsigned long expires
      3416 |         void (*)(struct timer_list *) function
      3420 |         u32 flags
      3424 |         struct lockdep_map lockdep_map
      3424 |           struct lock_class_key * key
      3428 |           struct lock_class *[2] class_cache
      3436 |           const char * name
      3440 |           u8 wait_type_outer
      3441 |           u8 wait_type_inner
      3442 |           u8 lock_type
      3444 |           int cpu
      3448 |           unsigned long ip
      3452 |       struct workqueue_struct * wq
      3456 |       int cpu
      3464 |   u64 net_cookie
      3472 |   struct netns_mpls mpls
      3472 |     int ip_ttl_propagate
      3476 |     int default_ttl
      3480 |     size_t platform_labels
      3484 |     struct mpls_route ** platform_label
      3488 |     struct ctl_table_header * ctl
      3492 |   struct netns_can can
      3492 |     struct proc_dir_entry * proc_dir
      3496 |     struct proc_dir_entry * pde_stats
      3500 |     struct proc_dir_entry * pde_reset_stats
      3504 |     struct proc_dir_entry * pde_rcvlist_all
      3508 |     struct proc_dir_entry * pde_rcvlist_fil
      3512 |     struct proc_dir_entry * pde_rcvlist_inv
      3516 |     struct proc_dir_entry * pde_rcvlist_sff
      3520 |     struct proc_dir_entry * pde_rcvlist_eff
      3524 |     struct proc_dir_entry * pde_rcvlist_err
      3528 |     struct proc_dir_entry * bcmproc_dir
      3532 |     struct can_dev_rcv_lists * rx_alldev_list
      3536 |     struct spinlock rcvlists_lock
      3536 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      3536 |         struct raw_spinlock rlock
      3536 |           arch_spinlock_t raw_lock
      3536 |             volatile unsigned int slock
      3540 |           unsigned int magic
      3544 |           unsigned int owner_cpu
      3548 |           void * owner
      3552 |           struct lockdep_map dep_map
      3552 |             struct lock_class_key * key
      3556 |             struct lock_class *[2] class_cache
      3564 |             const char * name
      3568 |             u8 wait_type_outer
      3569 |             u8 wait_type_inner
      3570 |             u8 lock_type
      3572 |             int cpu
      3576 |             unsigned long ip
      3536 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      3536 |           u8[16] __padding
      3552 |           struct lockdep_map dep_map
      3552 |             struct lock_class_key * key
      3556 |             struct lock_class *[2] class_cache
      3564 |             const char * name
      3568 |             u8 wait_type_outer
      3569 |             u8 wait_type_inner
      3570 |             u8 lock_type
      3572 |             int cpu
      3576 |             unsigned long ip
      3580 |     struct timer_list stattimer
      3580 |       struct hlist_node entry
      3580 |         struct hlist_node * next
      3584 |         struct hlist_node ** pprev
      3588 |       unsigned long expires
      3592 |       void (*)(struct timer_list *) function
      3596 |       u32 flags
      3600 |       struct lockdep_map lockdep_map
      3600 |         struct lock_class_key * key
      3604 |         struct lock_class *[2] class_cache
      3612 |         const char * name
      3616 |         u8 wait_type_outer
      3617 |         u8 wait_type_inner
      3618 |         u8 lock_type
      3620 |         int cpu
      3624 |         unsigned long ip
      3628 |     struct can_pkg_stats * pkg_stats
      3632 |     struct can_rcv_lists_stats * rcv_lists_stats
      3636 |     struct hlist_head cgw_list
      3636 |       struct hlist_node * first
      3640 |   struct netns_xdp xdp
      3640 |     struct mutex lock
      3640 |       atomic_t owner
      3640 |         int counter
      3644 |       struct raw_spinlock wait_lock
      3644 |         arch_spinlock_t raw_lock
      3644 |           volatile unsigned int slock
      3648 |         unsigned int magic
      3652 |         unsigned int owner_cpu
      3656 |         void * owner
      3660 |         struct lockdep_map dep_map
      3660 |           struct lock_class_key * key
      3664 |           struct lock_class *[2] class_cache
      3672 |           const char * name
      3676 |           u8 wait_type_outer
      3677 |           u8 wait_type_inner
      3678 |           u8 lock_type
      3680 |           int cpu
      3684 |           unsigned long ip
      3688 |       struct list_head wait_list
      3688 |         struct list_head * next
      3692 |         struct list_head * prev
      3696 |       void * magic
      3700 |       struct lockdep_map dep_map
      3700 |         struct lock_class_key * key
      3704 |         struct lock_class *[2] class_cache
      3712 |         const char * name
      3716 |         u8 wait_type_outer
      3717 |         u8 wait_type_inner
      3718 |         u8 lock_type
      3720 |         int cpu
      3724 |         unsigned long ip
      3728 |     struct hlist_head list
      3728 |       struct hlist_node * first
      3732 |   struct netns_mctp mctp
      3732 |     struct list_head routes
      3732 |       struct list_head * next
      3736 |       struct list_head * prev
      3740 |     struct mutex bind_lock
      3740 |       atomic_t owner
      3740 |         int counter
      3744 |       struct raw_spinlock wait_lock
      3744 |         arch_spinlock_t raw_lock
      3744 |           volatile unsigned int slock
      3748 |         unsigned int magic
      3752 |         unsigned int owner_cpu
      3756 |         void * owner
      3760 |         struct lockdep_map dep_map
      3760 |           struct lock_class_key * key
      3764 |           struct lock_class *[2] class_cache
      3772 |           const char * name
      3776 |           u8 wait_type_outer
      3777 |           u8 wait_type_inner
      3778 |           u8 lock_type
      3780 |           int cpu
      3784 |           unsigned long ip
      3788 |       struct list_head wait_list
      3788 |         struct list_head * next
      3792 |         struct list_head * prev
      3796 |       void * magic
      3800 |       struct lockdep_map dep_map
      3800 |         struct lock_class_key * key
      3804 |         struct lock_class *[2] class_cache
      3812 |         const char * name
      3816 |         u8 wait_type_outer
      3817 |         u8 wait_type_inner
      3818 |         u8 lock_type
      3820 |         int cpu
      3824 |         unsigned long ip
      3828 |     struct hlist_head binds
      3828 |       struct hlist_node * first
      3832 |     struct spinlock keys_lock
      3832 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      3832 |         struct raw_spinlock rlock
      3832 |           arch_spinlock_t raw_lock
      3832 |             volatile unsigned int slock
      3836 |           unsigned int magic
      3840 |           unsigned int owner_cpu
      3844 |           void * owner
      3848 |           struct lockdep_map dep_map
      3848 |             struct lock_class_key * key
      3852 |             struct lock_class *[2] class_cache
      3860 |             const char * name
      3864 |             u8 wait_type_outer
      3865 |             u8 wait_type_inner
      3866 |             u8 lock_type
      3868 |             int cpu
      3872 |             unsigned long ip
      3832 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      3832 |           u8[16] __padding
      3848 |           struct lockdep_map dep_map
      3848 |             struct lock_class_key * key
      3852 |             struct lock_class *[2] class_cache
      3860 |             const char * name
      3864 |             u8 wait_type_outer
      3865 |             u8 wait_type_inner
      3866 |             u8 lock_type
      3868 |             int cpu
      3872 |             unsigned long ip
      3876 |     struct hlist_head keys
      3876 |       struct hlist_node * first
      3880 |     unsigned int default_net
      3884 |     struct mutex neigh_lock
      3884 |       atomic_t owner
      3884 |         int counter
      3888 |       struct raw_spinlock wait_lock
      3888 |         arch_spinlock_t raw_lock
      3888 |           volatile unsigned int slock
      3892 |         unsigned int magic
      3896 |         unsigned int owner_cpu
      3900 |         void * owner
      3904 |         struct lockdep_map dep_map
      3904 |           struct lock_class_key * key
      3908 |           struct lock_class *[2] class_cache
      3916 |           const char * name
      3920 |           u8 wait_type_outer
      3921 |           u8 wait_type_inner
      3922 |           u8 lock_type
      3924 |           int cpu
      3928 |           unsigned long ip
      3932 |       struct list_head wait_list
      3932 |         struct list_head * next
      3936 |         struct list_head * prev
      3940 |       void * magic
      3944 |       struct lockdep_map dep_map
      3944 |         struct lock_class_key * key
      3948 |         struct lock_class *[2] class_cache
      3956 |         const char * name
      3960 |         u8 wait_type_outer
      3961 |         u8 wait_type_inner
      3962 |         u8 lock_type
      3964 |         int cpu
      3968 |         unsigned long ip
      3972 |     struct list_head neighbours
      3972 |       struct list_head * next
      3976 |       struct list_head * prev
      3980 |   struct sock * crypto_nlsk
      3984 |   struct sock * diag_nlsk
      3988 |   struct netns_smc smc
      3988 |     struct smc_stats * smc_stats
      3992 |     struct mutex mutex_fback_rsn
      3992 |       atomic_t owner
      3992 |         int counter
      3996 |       struct raw_spinlock wait_lock
      3996 |         arch_spinlock_t raw_lock
      3996 |           volatile unsigned int slock
      4000 |         unsigned int magic
      4004 |         unsigned int owner_cpu
      4008 |         void * owner
      4012 |         struct lockdep_map dep_map
      4012 |           struct lock_class_key * key
      4016 |           struct lock_class *[2] class_cache
      4024 |           const char * name
      4028 |           u8 wait_type_outer
      4029 |           u8 wait_type_inner
      4030 |           u8 lock_type
      4032 |           int cpu
      4036 |           unsigned long ip
      4040 |       struct list_head wait_list
      4040 |         struct list_head * next
      4044 |         struct list_head * prev
      4048 |       void * magic
      4052 |       struct lockdep_map dep_map
      4052 |         struct lock_class_key * key
      4056 |         struct lock_class *[2] class_cache
      4064 |         const char * name
      4068 |         u8 wait_type_outer
      4069 |         u8 wait_type_inner
      4070 |         u8 lock_type
      4072 |         int cpu
      4076 |         unsigned long ip
      4080 |     struct smc_stats_rsn * fback_rsn
      4084 |     bool limit_smc_hs
      4088 |     struct ctl_table_header * smc_hdr
      4092 |     unsigned int sysctl_autocorking_size
      4096 |     unsigned int sysctl_smcr_buf_type
      4100 |     int sysctl_smcr_testlink_time
      4104 |     int sysctl_wmem
      4108 |     int sysctl_rmem
      4112 |     int sysctl_max_links_per_lgr
      4116 |     int sysctl_max_conns_per_lgr
           | [sizeof=4120, align=8]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:191:2)
         0 |   unsigned long rx_packets
         0 |   atomic_t __rx_packets
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct gro_list
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   int count
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct napi_struct
         0 |   struct list_head poll_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   unsigned long state
        12 |   int weight
        16 |   int defer_hard_irqs_count
        20 |   unsigned long gro_bitmask
        24 |   int (*)(struct napi_struct *, int) poll
        28 |   int list_owner
        32 |   struct net_device * dev
        36 |   struct gro_list[8] gro_hash
       132 |   struct sk_buff * skb
       136 |   struct list_head rx_list
       136 |     struct list_head * next
       140 |     struct list_head * prev
       144 |   int rx_count
       148 |   unsigned int napi_id
       152 |   struct hrtimer timer
       152 |     struct timerqueue_node node
       152 |       struct rb_node node
       152 |         unsigned long __rb_parent_color
       156 |         struct rb_node * rb_right
       160 |         struct rb_node * rb_left
       168 |       ktime_t expires
       176 |     ktime_t _softexpires
       184 |     enum hrtimer_restart (*)(struct hrtimer *) function
       188 |     struct hrtimer_clock_base * base
       192 |     u8 state
       193 |     u8 is_rel
       194 |     u8 is_soft
       195 |     u8 is_hard
       200 |   struct task_struct * thread
       204 |   struct list_head dev_list
       204 |     struct list_head * next
       208 |     struct list_head * prev
       212 |   struct hlist_node napi_hash_node
       212 |     struct hlist_node * next
       216 |     struct hlist_node ** pprev
       220 |   int irq
           | [sizeof=224, align=8]

*** Dumping AST Record Layout
         0 | struct net_device_path::(unnamed at ../include/linux/netdevice.h:841:3)
         0 |   u16 id
         2 |   __be16 proto
         4 |   u8[6] h_dest
           | [sizeof=10, align=2]

*** Dumping AST Record Layout
         0 | struct net_device_path::(unnamed at ../include/linux/netdevice.h:846:3)
         0 |   enum (unnamed enum at ../include/linux/netdevice.h:847:4) vlan_mode
         4 |   u16 vlan_id
         6 |   __be16 vlan_proto
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_path::(unnamed at ../include/linux/netdevice.h:856:3)
         0 |   int port
         4 |   u16 proto
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_path::(unnamed at ../include/linux/netdevice.h:860:3)
         0 |   u8 wdma_idx
         1 |   u8 queue
         2 |   u16 wcid
         4 |   u8 bss
         5 |   u8 amsdu
           | [sizeof=6, align=2]

*** Dumping AST Record Layout
         0 | union net_device_path::(anonymous at ../include/linux/netdevice.h:840:2)
         0 |   struct net_device_path::(unnamed at ../include/linux/netdevice.h:841:3) encap
         0 |     u16 id
         2 |     __be16 proto
         4 |     u8[6] h_dest
         0 |   struct net_device_path::(unnamed at ../include/linux/netdevice.h:846:3) bridge
         0 |     enum (unnamed enum at ../include/linux/netdevice.h:847:4) vlan_mode
         4 |     u16 vlan_id
         6 |     __be16 vlan_proto
         0 |   struct net_device_path::(unnamed at ../include/linux/netdevice.h:856:3) dsa
         0 |     int port
         4 |     u16 proto
         0 |   struct net_device_path::(unnamed at ../include/linux/netdevice.h:860:3) mtk_wdma
         0 |     u8 wdma_idx
         1 |     u8 queue
         2 |     u16 wcid
         4 |     u8 bss
         5 |     u8 amsdu
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_path
         0 |   enum net_device_path_type type
         4 |   const struct net_device * dev
         8 |   union net_device_path::(anonymous at ../include/linux/netdevice.h:840:2) 
         8 |     struct net_device_path::(unnamed at ../include/linux/netdevice.h:841:3) encap
         8 |       u16 id
        10 |       __be16 proto
        12 |       u8[6] h_dest
         8 |     struct net_device_path::(unnamed at ../include/linux/netdevice.h:846:3) bridge
         8 |       enum (unnamed enum at ../include/linux/netdevice.h:847:4) vlan_mode
        12 |       u16 vlan_id
        14 |       __be16 vlan_proto
         8 |     struct net_device_path::(unnamed at ../include/linux/netdevice.h:856:3) dsa
         8 |       int port
        12 |       u16 proto
         8 |     struct net_device_path::(unnamed at ../include/linux/netdevice.h:860:3) mtk_wdma
         8 |       u8 wdma_idx
         9 |       u8 queue
        10 |       u16 wcid
        12 |       u8 bss
        13 |       u8 amsdu
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_path_ctx::(unnamed at ../include/linux/netdevice.h:883:2)
         0 |   u16 id
         2 |   __be16 proto
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct netdev_bpf::(anonymous at ../include/linux/netdevice.h:954:3)
         0 |   u32 flags
         4 |   struct bpf_prog * prog
         8 |   struct netlink_ext_ack * extack
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct netdev_tc_txq
         0 |   u16 count
         2 |   u16 offset
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct bpf_xdp_entity
         0 |   struct bpf_prog * prog
         4 |   struct bpf_xdp_link * link
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union net_device::(anonymous at ../include/linux/netdevice.h:2074:2)
         0 |   struct pcpu_lstats * lstats
         0 |   struct pcpu_sw_netstats * tstats
         0 |   struct pcpu_dstats * dstats
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | possible_net_t
         0 |   struct net * net
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct net_device::(unnamed at ../include/linux/netdevice.h:2132:2)
         0 |   struct list_head upper
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct list_head lower
         8 |     struct list_head * next
        12 |     struct list_head * prev
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:192:2)
         0 |   unsigned long tx_packets
         0 |   atomic_t __tx_packets
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:193:2)
         0 |   unsigned long rx_bytes
         0 |   atomic_t __rx_bytes
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:194:2)
         0 |   unsigned long tx_bytes
         0 |   atomic_t __tx_bytes
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:195:2)
         0 |   unsigned long rx_errors
         0 |   atomic_t __rx_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:196:2)
         0 |   unsigned long tx_errors
         0 |   atomic_t __tx_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:197:2)
         0 |   unsigned long rx_dropped
         0 |   atomic_t __rx_dropped
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:198:2)
         0 |   unsigned long tx_dropped
         0 |   atomic_t __tx_dropped
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:199:2)
         0 |   unsigned long multicast
         0 |   atomic_t __multicast
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:200:2)
         0 |   unsigned long collisions
         0 |   atomic_t __collisions
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:201:2)
         0 |   unsigned long rx_length_errors
         0 |   atomic_t __rx_length_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:202:2)
         0 |   unsigned long rx_over_errors
         0 |   atomic_t __rx_over_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:203:2)
         0 |   unsigned long rx_crc_errors
         0 |   atomic_t __rx_crc_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:204:2)
         0 |   unsigned long rx_frame_errors
         0 |   atomic_t __rx_frame_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:205:2)
         0 |   unsigned long rx_fifo_errors
         0 |   atomic_t __rx_fifo_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:206:2)
         0 |   unsigned long rx_missed_errors
         0 |   atomic_t __rx_missed_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:207:2)
         0 |   unsigned long tx_aborted_errors
         0 |   atomic_t __tx_aborted_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:208:2)
         0 |   unsigned long tx_carrier_errors
         0 |   atomic_t __tx_carrier_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:209:2)
         0 |   unsigned long tx_fifo_errors
         0 |   atomic_t __tx_fifo_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:210:2)
         0 |   unsigned long tx_heartbeat_errors
         0 |   atomic_t __tx_heartbeat_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:211:2)
         0 |   unsigned long tx_window_errors
         0 |   atomic_t __tx_window_errors
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:212:2)
         0 |   unsigned long rx_compressed
         0 |   atomic_t __rx_compressed
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union net_device_stats::(anonymous at ../include/linux/netdevice.h:213:2)
         0 |   unsigned long tx_compressed
         0 |   atomic_t __tx_compressed
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_stats
         0 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:191:2) 
         0 |     unsigned long rx_packets
         0 |     atomic_t __rx_packets
         0 |       int counter
         4 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:192:2) 
         4 |     unsigned long tx_packets
         4 |     atomic_t __tx_packets
         4 |       int counter
         8 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:193:2) 
         8 |     unsigned long rx_bytes
         8 |     atomic_t __rx_bytes
         8 |       int counter
        12 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:194:2) 
        12 |     unsigned long tx_bytes
        12 |     atomic_t __tx_bytes
        12 |       int counter
        16 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:195:2) 
        16 |     unsigned long rx_errors
        16 |     atomic_t __rx_errors
        16 |       int counter
        20 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:196:2) 
        20 |     unsigned long tx_errors
        20 |     atomic_t __tx_errors
        20 |       int counter
        24 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:197:2) 
        24 |     unsigned long rx_dropped
        24 |     atomic_t __rx_dropped
        24 |       int counter
        28 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:198:2) 
        28 |     unsigned long tx_dropped
        28 |     atomic_t __tx_dropped
        28 |       int counter
        32 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:199:2) 
        32 |     unsigned long multicast
        32 |     atomic_t __multicast
        32 |       int counter
        36 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:200:2) 
        36 |     unsigned long collisions
        36 |     atomic_t __collisions
        36 |       int counter
        40 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:201:2) 
        40 |     unsigned long rx_length_errors
        40 |     atomic_t __rx_length_errors
        40 |       int counter
        44 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:202:2) 
        44 |     unsigned long rx_over_errors
        44 |     atomic_t __rx_over_errors
        44 |       int counter
        48 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:203:2) 
        48 |     unsigned long rx_crc_errors
        48 |     atomic_t __rx_crc_errors
        48 |       int counter
        52 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:204:2) 
        52 |     unsigned long rx_frame_errors
        52 |     atomic_t __rx_frame_errors
        52 |       int counter
        56 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:205:2) 
        56 |     unsigned long rx_fifo_errors
        56 |     atomic_t __rx_fifo_errors
        56 |       int counter
        60 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:206:2) 
        60 |     unsigned long rx_missed_errors
        60 |     atomic_t __rx_missed_errors
        60 |       int counter
        64 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:207:2) 
        64 |     unsigned long tx_aborted_errors
        64 |     atomic_t __tx_aborted_errors
        64 |       int counter
        68 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:208:2) 
        68 |     unsigned long tx_carrier_errors
        68 |     atomic_t __tx_carrier_errors
        68 |       int counter
        72 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:209:2) 
        72 |     unsigned long tx_fifo_errors
        72 |     atomic_t __tx_fifo_errors
        72 |       int counter
        76 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:210:2) 
        76 |     unsigned long tx_heartbeat_errors
        76 |     atomic_t __tx_heartbeat_errors
        76 |       int counter
        80 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:211:2) 
        80 |     unsigned long tx_window_errors
        80 |     atomic_t __tx_window_errors
        80 |       int counter
        84 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:212:2) 
        84 |     unsigned long rx_compressed
        84 |     atomic_t __rx_compressed
        84 |       int counter
        88 |   union net_device_stats::(anonymous at ../include/linux/netdevice.h:213:2) 
        88 |     unsigned long tx_compressed
        88 |     atomic_t __tx_compressed
        88 |       int counter
           | [sizeof=92, align=4]

*** Dumping AST Record Layout
         0 | struct netdev_hw_addr_list
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   int count
        12 |   struct rb_root tree
        12 |     struct rb_node * rb_node
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct net_device
         0 |   __u8[0] __cacheline_group_begin__net_device_read_tx
         0 |   unsigned long long priv_flags
         8 |   const struct net_device_ops * netdev_ops
        12 |   const struct header_ops * header_ops
        16 |   struct netdev_queue * _tx
        24 |   netdev_features_t gso_partial_features
        32 |   unsigned int real_num_tx_queues
        36 |   unsigned int gso_max_size
        40 |   unsigned int gso_ipv4_max_size
        44 |   u16 gso_max_segs
        46 |   s16 num_tc
        48 |   unsigned int mtu
        52 |   unsigned short needed_headroom
        54 |   struct netdev_tc_txq[16] tc_to_txq
       120 |   struct bpf_mprog_entry * tcx_egress
       124 |   __u8[0] __cacheline_group_end__net_device_read_tx
       124 |   __u8[0] __cacheline_group_begin__net_device_read_txrx
       124 |   union net_device::(anonymous at ../include/linux/netdevice.h:2074:2) 
       124 |     struct pcpu_lstats * lstats
       124 |     struct pcpu_sw_netstats * tstats
       124 |     struct pcpu_dstats * dstats
       128 |   unsigned long state
       132 |   unsigned int flags
       136 |   unsigned short hard_header_len
       144 |   netdev_features_t features
       152 |   struct inet6_dev * ip6_ptr
       156 |   __u8[0] __cacheline_group_end__net_device_read_txrx
       156 |   __u8[0] __cacheline_group_begin__net_device_read_rx
       156 |   struct bpf_prog * xdp_prog
       160 |   struct list_head ptype_specific
       160 |     struct list_head * next
       164 |     struct list_head * prev
       168 |   int ifindex
       172 |   unsigned int real_num_rx_queues
       176 |   struct netdev_rx_queue * _rx
       180 |   unsigned long gro_flush_timeout
       184 |   int napi_defer_hard_irqs
       188 |   unsigned int gro_max_size
       192 |   unsigned int gro_ipv4_max_size
       196 |   rx_handler_func_t * rx_handler
       200 |   void * rx_handler_data
       204 |   possible_net_t nd_net
       204 |     struct net * net
       208 |   struct bpf_mprog_entry * tcx_ingress
       212 |   __u8[0] __cacheline_group_end__net_device_read_rx
       212 |   char[16] name
       228 |   struct netdev_name_node * name_node
       232 |   struct dev_ifalias * ifalias
       236 |   unsigned long mem_end
       240 |   unsigned long mem_start
       244 |   unsigned long base_addr
       248 |   struct list_head dev_list
       248 |     struct list_head * next
       252 |     struct list_head * prev
       256 |   struct list_head napi_list
       256 |     struct list_head * next
       260 |     struct list_head * prev
       264 |   struct list_head unreg_list
       264 |     struct list_head * next
       268 |     struct list_head * prev
       272 |   struct list_head close_list
       272 |     struct list_head * next
       276 |     struct list_head * prev
       280 |   struct list_head ptype_all
       280 |     struct list_head * next
       284 |     struct list_head * prev
       288 |   struct net_device::(unnamed at ../include/linux/netdevice.h:2132:2) adj_list
       288 |     struct list_head upper
       288 |       struct list_head * next
       292 |       struct list_head * prev
       296 |     struct list_head lower
       296 |       struct list_head * next
       300 |       struct list_head * prev
       304 |   xdp_features_t xdp_features
       308 |   const struct xdp_metadata_ops * xdp_metadata_ops
       312 |   const struct xsk_tx_metadata_ops * xsk_tx_metadata_ops
       316 |   unsigned short gflags
       318 |   unsigned short needed_tailroom
       320 |   netdev_features_t hw_features
       328 |   netdev_features_t wanted_features
       336 |   netdev_features_t vlan_features
       344 |   netdev_features_t hw_enc_features
       352 |   netdev_features_t mpls_features
       360 |   unsigned int min_mtu
       364 |   unsigned int max_mtu
       368 |   unsigned short type
       370 |   unsigned char min_header_len
       371 |   unsigned char name_assign_type
       372 |   int group
       376 |   struct net_device_stats stats
       376 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:191:2) 
       376 |       unsigned long rx_packets
       376 |       atomic_t __rx_packets
       376 |         int counter
       380 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:192:2) 
       380 |       unsigned long tx_packets
       380 |       atomic_t __tx_packets
       380 |         int counter
       384 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:193:2) 
       384 |       unsigned long rx_bytes
       384 |       atomic_t __rx_bytes
       384 |         int counter
       388 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:194:2) 
       388 |       unsigned long tx_bytes
       388 |       atomic_t __tx_bytes
       388 |         int counter
       392 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:195:2) 
       392 |       unsigned long rx_errors
       392 |       atomic_t __rx_errors
       392 |         int counter
       396 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:196:2) 
       396 |       unsigned long tx_errors
       396 |       atomic_t __tx_errors
       396 |         int counter
       400 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:197:2) 
       400 |       unsigned long rx_dropped
       400 |       atomic_t __rx_dropped
       400 |         int counter
       404 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:198:2) 
       404 |       unsigned long tx_dropped
       404 |       atomic_t __tx_dropped
       404 |         int counter
       408 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:199:2) 
       408 |       unsigned long multicast
       408 |       atomic_t __multicast
       408 |         int counter
       412 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:200:2) 
       412 |       unsigned long collisions
       412 |       atomic_t __collisions
       412 |         int counter
       416 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:201:2) 
       416 |       unsigned long rx_length_errors
       416 |       atomic_t __rx_length_errors
       416 |         int counter
       420 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:202:2) 
       420 |       unsigned long rx_over_errors
       420 |       atomic_t __rx_over_errors
       420 |         int counter
       424 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:203:2) 
       424 |       unsigned long rx_crc_errors
       424 |       atomic_t __rx_crc_errors
       424 |         int counter
       428 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:204:2) 
       428 |       unsigned long rx_frame_errors
       428 |       atomic_t __rx_frame_errors
       428 |         int counter
       432 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:205:2) 
       432 |       unsigned long rx_fifo_errors
       432 |       atomic_t __rx_fifo_errors
       432 |         int counter
       436 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:206:2) 
       436 |       unsigned long rx_missed_errors
       436 |       atomic_t __rx_missed_errors
       436 |         int counter
       440 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:207:2) 
       440 |       unsigned long tx_aborted_errors
       440 |       atomic_t __tx_aborted_errors
       440 |         int counter
       444 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:208:2) 
       444 |       unsigned long tx_carrier_errors
       444 |       atomic_t __tx_carrier_errors
       444 |         int counter
       448 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:209:2) 
       448 |       unsigned long tx_fifo_errors
       448 |       atomic_t __tx_fifo_errors
       448 |         int counter
       452 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:210:2) 
       452 |       unsigned long tx_heartbeat_errors
       452 |       atomic_t __tx_heartbeat_errors
       452 |         int counter
       456 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:211:2) 
       456 |       unsigned long tx_window_errors
       456 |       atomic_t __tx_window_errors
       456 |         int counter
       460 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:212:2) 
       460 |       unsigned long rx_compressed
       460 |       atomic_t __rx_compressed
       460 |         int counter
       464 |     union net_device_stats::(anonymous at ../include/linux/netdevice.h:213:2) 
       464 |       unsigned long tx_compressed
       464 |       atomic_t __tx_compressed
       464 |         int counter
       468 |   struct net_device_core_stats * core_stats
       472 |   atomic_t carrier_up_count
       472 |     int counter
       476 |   atomic_t carrier_down_count
       476 |     int counter
       480 |   const struct ethtool_ops * ethtool_ops
       484 |   const struct ndisc_ops * ndisc_ops
       488 |   const struct xfrmdev_ops * xfrmdev_ops
       492 |   const struct tlsdev_ops * tlsdev_ops
       496 |   unsigned int operstate
       500 |   unsigned char link_mode
       501 |   unsigned char if_port
       502 |   unsigned char dma
       503 |   unsigned char[32] perm_addr
       535 |   unsigned char addr_assign_type
       536 |   unsigned char addr_len
       537 |   unsigned char upper_level
       538 |   unsigned char lower_level
       540 |   unsigned short neigh_priv_len
       542 |   unsigned short dev_id
       544 |   unsigned short dev_port
       548 |   int irq
       552 |   u32 priv_len
       556 |   struct spinlock addr_list_lock
       556 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       556 |       struct raw_spinlock rlock
       556 |         arch_spinlock_t raw_lock
       556 |           volatile unsigned int slock
       560 |         unsigned int magic
       564 |         unsigned int owner_cpu
       568 |         void * owner
       572 |         struct lockdep_map dep_map
       572 |           struct lock_class_key * key
       576 |           struct lock_class *[2] class_cache
       584 |           const char * name
       588 |           u8 wait_type_outer
       589 |           u8 wait_type_inner
       590 |           u8 lock_type
       592 |           int cpu
       596 |           unsigned long ip
       556 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       556 |         u8[16] __padding
       572 |         struct lockdep_map dep_map
       572 |           struct lock_class_key * key
       576 |           struct lock_class *[2] class_cache
       584 |           const char * name
       588 |           u8 wait_type_outer
       589 |           u8 wait_type_inner
       590 |           u8 lock_type
       592 |           int cpu
       596 |           unsigned long ip
       600 |   struct netdev_hw_addr_list uc
       600 |     struct list_head list
       600 |       struct list_head * next
       604 |       struct list_head * prev
       608 |     int count
       612 |     struct rb_root tree
       612 |       struct rb_node * rb_node
       616 |   struct netdev_hw_addr_list mc
       616 |     struct list_head list
       616 |       struct list_head * next
       620 |       struct list_head * prev
       624 |     int count
       628 |     struct rb_root tree
       628 |       struct rb_node * rb_node
       632 |   struct netdev_hw_addr_list dev_addrs
       632 |     struct list_head list
       632 |       struct list_head * next
       636 |       struct list_head * prev
       640 |     int count
       644 |     struct rb_root tree
       644 |       struct rb_node * rb_node
       648 |   struct kset * queues_kset
       652 |   struct list_head unlink_list
       652 |     struct list_head * next
       656 |     struct list_head * prev
       660 |   unsigned int promiscuity
       664 |   unsigned int allmulti
       668 |   bool uc_promisc
       669 |   unsigned char nested_level
       672 |   struct in_device * ip_ptr
       676 |   struct vlan_info * vlan_info
       680 |   struct tipc_bearer * tipc_ptr
       684 |   struct wireless_dev * ieee80211_ptr
       688 |   struct wpan_dev * ieee802154_ptr
       692 |   struct mctp_dev * mctp_ptr
       696 |   const unsigned char * dev_addr
       700 |   unsigned int num_rx_queues
       704 |   unsigned int xdp_zc_max_segs
       708 |   struct netdev_queue * ingress_queue
       712 |   unsigned char[32] broadcast
       744 |   struct hlist_node index_hlist
       744 |     struct hlist_node * next
       748 |     struct hlist_node ** pprev
       752 |   unsigned int num_tx_queues
       756 |   struct Qdisc * qdisc
       760 |   unsigned int tx_queue_len
       764 |   struct spinlock tx_global_lock
       764 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       764 |       struct raw_spinlock rlock
       764 |         arch_spinlock_t raw_lock
       764 |           volatile unsigned int slock
       768 |         unsigned int magic
       772 |         unsigned int owner_cpu
       776 |         void * owner
       780 |         struct lockdep_map dep_map
       780 |           struct lock_class_key * key
       784 |           struct lock_class *[2] class_cache
       792 |           const char * name
       796 |           u8 wait_type_outer
       797 |           u8 wait_type_inner
       798 |           u8 lock_type
       800 |           int cpu
       804 |           unsigned long ip
       764 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       764 |         u8[16] __padding
       780 |         struct lockdep_map dep_map
       780 |           struct lock_class_key * key
       784 |           struct lock_class *[2] class_cache
       792 |           const char * name
       796 |           u8 wait_type_outer
       797 |           u8 wait_type_inner
       798 |           u8 lock_type
       800 |           int cpu
       804 |           unsigned long ip
       808 |   struct xdp_dev_bulk_queue * xdp_bulkq
       812 |   struct hlist_head[16] qdisc_hash
       876 |   struct timer_list watchdog_timer
       876 |     struct hlist_node entry
       876 |       struct hlist_node * next
       880 |       struct hlist_node ** pprev
       884 |     unsigned long expires
       888 |     void (*)(struct timer_list *) function
       892 |     u32 flags
       896 |     struct lockdep_map lockdep_map
       896 |       struct lock_class_key * key
       900 |       struct lock_class *[2] class_cache
       908 |       const char * name
       912 |       u8 wait_type_outer
       913 |       u8 wait_type_inner
       914 |       u8 lock_type
       916 |       int cpu
       920 |       unsigned long ip
       924 |   int watchdog_timeo
       928 |   u32 proto_down_reason
       932 |   struct list_head todo_list
       932 |     struct list_head * next
       936 |     struct list_head * prev
       940 |   struct refcount_struct dev_refcnt
       940 |     atomic_t refs
       940 |       int counter
       944 |   struct ref_tracker_dir refcnt_tracker
       944 |     struct spinlock lock
       944 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       944 |         struct raw_spinlock rlock
       944 |           arch_spinlock_t raw_lock
       944 |             volatile unsigned int slock
       948 |           unsigned int magic
       952 |           unsigned int owner_cpu
       956 |           void * owner
       960 |           struct lockdep_map dep_map
       960 |             struct lock_class_key * key
       964 |             struct lock_class *[2] class_cache
       972 |             const char * name
       976 |             u8 wait_type_outer
       977 |             u8 wait_type_inner
       978 |             u8 lock_type
       980 |             int cpu
       984 |             unsigned long ip
       944 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       944 |           u8[16] __padding
       960 |           struct lockdep_map dep_map
       960 |             struct lock_class_key * key
       964 |             struct lock_class *[2] class_cache
       972 |             const char * name
       976 |             u8 wait_type_outer
       977 |             u8 wait_type_inner
       978 |             u8 lock_type
       980 |             int cpu
       984 |             unsigned long ip
       988 |     unsigned int quarantine_avail
       992 |     struct refcount_struct untracked
       992 |       atomic_t refs
       992 |         int counter
       996 |     struct refcount_struct no_tracker
       996 |       atomic_t refs
       996 |         int counter
      1000 |     bool dead
      1004 |     struct list_head list
      1004 |       struct list_head * next
      1008 |       struct list_head * prev
      1012 |     struct list_head quarantine
      1012 |       struct list_head * next
      1016 |       struct list_head * prev
      1020 |     char[32] name
      1052 |   struct list_head link_watch_list
      1052 |     struct list_head * next
      1056 |     struct list_head * prev
      1060 |   u8 reg_state
      1061 |   bool dismantle
 1062:0-15 |   enum (unnamed enum at ../include/linux/netdevice.h:2314:2) rtnl_link_state
      1064 |   bool needs_free_netdev
      1068 |   void (*)(struct net_device *) priv_destructor
      1072 |   void * ml_priv
      1076 |   enum netdev_ml_priv_type ml_priv_type
  1080:0-7 |   enum netdev_stat_type pcpu_stat_type
      1084 |   struct dm_hw_stat_delta * dm_private
      1088 |   struct device dev
      1088 |     struct kobject kobj
      1088 |       const char * name
      1092 |       struct list_head entry
      1092 |         struct list_head * next
      1096 |         struct list_head * prev
      1100 |       struct kobject * parent
      1104 |       struct kset * kset
      1108 |       const struct kobj_type * ktype
      1112 |       struct kernfs_node * sd
      1116 |       struct kref kref
      1116 |         struct refcount_struct refcount
      1116 |           atomic_t refs
      1116 |             int counter
  1120:0-0 |       unsigned int state_initialized
  1120:1-1 |       unsigned int state_in_sysfs
  1120:2-2 |       unsigned int state_add_uevent_sent
  1120:3-3 |       unsigned int state_remove_uevent_sent
  1120:4-4 |       unsigned int uevent_suppress
      1124 |       struct delayed_work release
      1124 |         struct work_struct work
      1124 |           atomic_t data
      1124 |             int counter
      1128 |           struct list_head entry
      1128 |             struct list_head * next
      1132 |             struct list_head * prev
      1136 |           work_func_t func
      1140 |           struct lockdep_map lockdep_map
      1140 |             struct lock_class_key * key
      1144 |             struct lock_class *[2] class_cache
      1152 |             const char * name
      1156 |             u8 wait_type_outer
      1157 |             u8 wait_type_inner
      1158 |             u8 lock_type
      1160 |             int cpu
      1164 |             unsigned long ip
      1168 |         struct timer_list timer
      1168 |           struct hlist_node entry
      1168 |             struct hlist_node * next
      1172 |             struct hlist_node ** pprev
      1176 |           unsigned long expires
      1180 |           void (*)(struct timer_list *) function
      1184 |           u32 flags
      1188 |           struct lockdep_map lockdep_map
      1188 |             struct lock_class_key * key
      1192 |             struct lock_class *[2] class_cache
      1200 |             const char * name
      1204 |             u8 wait_type_outer
      1205 |             u8 wait_type_inner
      1206 |             u8 lock_type
      1208 |             int cpu
      1212 |             unsigned long ip
      1216 |         struct workqueue_struct * wq
      1220 |         int cpu
      1224 |     struct device * parent
      1228 |     struct device_private * p
      1232 |     const char * init_name
      1236 |     const struct device_type * type
      1240 |     const struct bus_type * bus
      1244 |     struct device_driver * driver
      1248 |     void * platform_data
      1252 |     void * driver_data
      1256 |     struct mutex mutex
      1256 |       atomic_t owner
      1256 |         int counter
      1260 |       struct raw_spinlock wait_lock
      1260 |         arch_spinlock_t raw_lock
      1260 |           volatile unsigned int slock
      1264 |         unsigned int magic
      1268 |         unsigned int owner_cpu
      1272 |         void * owner
      1276 |         struct lockdep_map dep_map
      1276 |           struct lock_class_key * key
      1280 |           struct lock_class *[2] class_cache
      1288 |           const char * name
      1292 |           u8 wait_type_outer
      1293 |           u8 wait_type_inner
      1294 |           u8 lock_type
      1296 |           int cpu
      1300 |           unsigned long ip
      1304 |       struct list_head wait_list
      1304 |         struct list_head * next
      1308 |         struct list_head * prev
      1312 |       void * magic
      1316 |       struct lockdep_map dep_map
      1316 |         struct lock_class_key * key
      1320 |         struct lock_class *[2] class_cache
      1328 |         const char * name
      1332 |         u8 wait_type_outer
      1333 |         u8 wait_type_inner
      1334 |         u8 lock_type
      1336 |         int cpu
      1340 |         unsigned long ip
      1344 |     struct dev_links_info links
      1344 |       struct list_head suppliers
      1344 |         struct list_head * next
      1348 |         struct list_head * prev
      1352 |       struct list_head consumers
      1352 |         struct list_head * next
      1356 |         struct list_head * prev
      1360 |       struct list_head defer_sync
      1360 |         struct list_head * next
      1364 |         struct list_head * prev
      1368 |       enum dl_dev_state status
      1372 |     struct dev_pm_info power
      1372 |       struct pm_message power_state
      1372 |         int event
  1376:0-0 |       bool can_wakeup
  1376:1-1 |       bool async_suspend
  1376:2-2 |       bool in_dpm_list
  1376:3-3 |       bool is_prepared
  1376:4-4 |       bool is_suspended
  1376:5-5 |       bool is_noirq_suspended
  1376:6-6 |       bool is_late_suspended
  1376:7-7 |       bool no_pm
  1377:0-0 |       bool early_init
  1377:1-1 |       bool direct_complete
      1380 |       u32 driver_flags
      1384 |       struct spinlock lock
      1384 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1384 |           struct raw_spinlock rlock
      1384 |             arch_spinlock_t raw_lock
      1384 |               volatile unsigned int slock
      1388 |             unsigned int magic
      1392 |             unsigned int owner_cpu
      1396 |             void * owner
      1400 |             struct lockdep_map dep_map
      1400 |               struct lock_class_key * key
      1404 |               struct lock_class *[2] class_cache
      1412 |               const char * name
      1416 |               u8 wait_type_outer
      1417 |               u8 wait_type_inner
      1418 |               u8 lock_type
      1420 |               int cpu
      1424 |               unsigned long ip
      1384 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1384 |             u8[16] __padding
      1400 |             struct lockdep_map dep_map
      1400 |               struct lock_class_key * key
      1404 |               struct lock_class *[2] class_cache
      1412 |               const char * name
      1416 |               u8 wait_type_outer
      1417 |               u8 wait_type_inner
      1418 |               u8 lock_type
      1420 |               int cpu
      1424 |               unsigned long ip
  1428:0-0 |       bool should_wakeup
      1432 |       struct pm_subsys_data * subsys_data
      1436 |       void (*)(struct device *, s32) set_latency_tolerance
      1440 |       struct dev_pm_qos * qos
      1444 |     struct dev_pm_domain * pm_domain
      1448 |     struct dev_msi_info msi
      1448 |     u64 * dma_mask
      1456 |     u64 coherent_dma_mask
      1464 |     u64 bus_dma_limit
      1472 |     const struct bus_dma_region * dma_range_map
      1476 |     struct device_dma_parameters * dma_parms
      1480 |     struct list_head dma_pools
      1480 |       struct list_head * next
      1484 |       struct list_head * prev
      1488 |     struct dma_coherent_mem * dma_mem
      1492 |     struct dev_archdata archdata
      1492 |     struct device_node * of_node
      1496 |     struct fwnode_handle * fwnode
      1500 |     dev_t devt
      1504 |     u32 id
      1508 |     struct spinlock devres_lock
      1508 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1508 |         struct raw_spinlock rlock
      1508 |           arch_spinlock_t raw_lock
      1508 |             volatile unsigned int slock
      1512 |           unsigned int magic
      1516 |           unsigned int owner_cpu
      1520 |           void * owner
      1524 |           struct lockdep_map dep_map
      1524 |             struct lock_class_key * key
      1528 |             struct lock_class *[2] class_cache
      1536 |             const char * name
      1540 |             u8 wait_type_outer
      1541 |             u8 wait_type_inner
      1542 |             u8 lock_type
      1544 |             int cpu
      1548 |             unsigned long ip
      1508 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1508 |           u8[16] __padding
      1524 |           struct lockdep_map dep_map
      1524 |             struct lock_class_key * key
      1528 |             struct lock_class *[2] class_cache
      1536 |             const char * name
      1540 |             u8 wait_type_outer
      1541 |             u8 wait_type_inner
      1542 |             u8 lock_type
      1544 |             int cpu
      1548 |             unsigned long ip
      1552 |     struct list_head devres_head
      1552 |       struct list_head * next
      1556 |       struct list_head * prev
      1560 |     const struct class * class
      1564 |     const struct attribute_group ** groups
      1568 |     void (*)(struct device *) release
      1572 |     struct iommu_group * iommu_group
      1576 |     struct dev_iommu * iommu
      1580 |     struct device_physical_location * physical_location
      1584 |     enum device_removable removable
  1588:0-0 |     bool offline_disabled
  1588:1-1 |     bool offline
  1588:2-2 |     bool of_node_reused
  1588:3-3 |     bool state_synced
  1588:4-4 |     bool can_match
  1588:5-5 |     bool dma_coherent
  1588:6-6 |     bool dma_skip_sync
      1592 |   const struct attribute_group *[4] sysfs_groups
      1608 |   const struct attribute_group * sysfs_rx_queue_group
      1612 |   const struct rtnl_link_ops * rtnl_link_ops
      1616 |   const struct netdev_stat_ops * stat_ops
      1620 |   const struct netdev_queue_mgmt_ops * queue_mgmt_ops
      1624 |   unsigned int tso_max_size
      1628 |   u16 tso_max_segs
      1630 |   u8[16] prio_tc_map
      1648 |   struct phy_device * phydev
      1652 |   struct sfp_bus * sfp_bus
      1656 |   struct lock_class_key * qdisc_tx_busylock
      1660 |   bool proto_down
      1661 |   bool threaded
      1664 |   struct list_head net_notifier_list
      1664 |     struct list_head * next
      1668 |     struct list_head * prev
      1672 |   const struct udp_tunnel_nic_info * udp_tunnel_nic_info
      1676 |   struct udp_tunnel_nic * udp_tunnel_nic
      1680 |   struct ethtool_netdev_state * ethtool
      1684 |   struct bpf_xdp_entity[3] xdp_state
      1708 |   u8[32] dev_addr_shadow
      1740 |   netdevice_tracker linkwatch_dev_tracker
      1744 |   netdevice_tracker watchdog_dev_tracker
      1748 |   netdevice_tracker dev_registered_tracker
      1752 |   struct rtnl_hw_stats64 * offload_xstats_l3
      1756 |   struct devlink_port * devlink_port
      1760 |   struct hlist_head page_pools
      1760 |     struct hlist_node * first
      1764 |   struct dim_irq_moder * irq_moder
      1792 |   u8[] priv
           | [sizeof=1792, align=32]

*** Dumping AST Record Layout
         0 | struct netdev_queue
         0 |   struct net_device * dev
         4 |   netdevice_tracker dev_tracker
         8 |   struct Qdisc * qdisc
        12 |   struct Qdisc * qdisc_sleeping
        16 |   struct kobject kobj
        16 |     const char * name
        20 |     struct list_head entry
        20 |       struct list_head * next
        24 |       struct list_head * prev
        28 |     struct kobject * parent
        32 |     struct kset * kset
        36 |     const struct kobj_type * ktype
        40 |     struct kernfs_node * sd
        44 |     struct kref kref
        44 |       struct refcount_struct refcount
        44 |         atomic_t refs
        44 |           int counter
    48:0-0 |     unsigned int state_initialized
    48:1-1 |     unsigned int state_in_sysfs
    48:2-2 |     unsigned int state_add_uevent_sent
    48:3-3 |     unsigned int state_remove_uevent_sent
    48:4-4 |     unsigned int uevent_suppress
        52 |     struct delayed_work release
        52 |       struct work_struct work
        52 |         atomic_t data
        52 |           int counter
        56 |         struct list_head entry
        56 |           struct list_head * next
        60 |           struct list_head * prev
        64 |         work_func_t func
        68 |         struct lockdep_map lockdep_map
        68 |           struct lock_class_key * key
        72 |           struct lock_class *[2] class_cache
        80 |           const char * name
        84 |           u8 wait_type_outer
        85 |           u8 wait_type_inner
        86 |           u8 lock_type
        88 |           int cpu
        92 |           unsigned long ip
        96 |       struct timer_list timer
        96 |         struct hlist_node entry
        96 |           struct hlist_node * next
       100 |           struct hlist_node ** pprev
       104 |         unsigned long expires
       108 |         void (*)(struct timer_list *) function
       112 |         u32 flags
       116 |         struct lockdep_map lockdep_map
       116 |           struct lock_class_key * key
       120 |           struct lock_class *[2] class_cache
       128 |           const char * name
       132 |           u8 wait_type_outer
       133 |           u8 wait_type_inner
       134 |           u8 lock_type
       136 |           int cpu
       140 |           unsigned long ip
       144 |       struct workqueue_struct * wq
       148 |       int cpu
       152 |   unsigned long tx_maxrate
       156 |   atomic_t trans_timeout
       156 |     int counter
       160 |   struct net_device * sb_dev
       164 |   struct xsk_buff_pool * pool
       168 |   struct napi_struct * napi
       172 |   struct spinlock _xmit_lock
       172 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       172 |       struct raw_spinlock rlock
       172 |         arch_spinlock_t raw_lock
       172 |           volatile unsigned int slock
       176 |         unsigned int magic
       180 |         unsigned int owner_cpu
       184 |         void * owner
       188 |         struct lockdep_map dep_map
       188 |           struct lock_class_key * key
       192 |           struct lock_class *[2] class_cache
       200 |           const char * name
       204 |           u8 wait_type_outer
       205 |           u8 wait_type_inner
       206 |           u8 lock_type
       208 |           int cpu
       212 |           unsigned long ip
       172 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       172 |         u8[16] __padding
       188 |         struct lockdep_map dep_map
       188 |           struct lock_class_key * key
       192 |           struct lock_class *[2] class_cache
       200 |           const char * name
       204 |           u8 wait_type_outer
       205 |           u8 wait_type_inner
       206 |           u8 lock_type
       208 |           int cpu
       212 |           unsigned long ip
       216 |   int xmit_lock_owner
       220 |   unsigned long trans_start
       224 |   unsigned long state
           | [sizeof=228, align=4]

*** Dumping AST Record Layout
         0 | u64_stats_t
         0 |   u64 v
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct pcpu_sw_netstats
         0 |   u64_stats_t rx_packets
         0 |     u64 v
         8 |   u64_stats_t rx_bytes
         8 |     u64 v
        16 |   u64_stats_t tx_packets
        16 |     u64 v
        24 |   u64_stats_t tx_bytes
        24 |     u64 v
        32 |   struct u64_stats_sync syncp
        32 |     struct seqcount seq
        32 |       unsigned int sequence
        36 |       struct lockdep_map dep_map
        36 |         struct lock_class_key * key
        40 |         struct lock_class *[2] class_cache
        48 |         const char * name
        52 |         u8 wait_type_outer
        53 |         u8 wait_type_inner
        54 |         u8 lock_type
        56 |         int cpu
        60 |         unsigned long ip
           | [sizeof=64, align=32]

*** Dumping AST Record Layout
         0 | struct pcpu_lstats
         0 |   u64_stats_t packets
         0 |     u64 v
         8 |   u64_stats_t bytes
         8 |     u64 v
        16 |   struct u64_stats_sync syncp
        16 |     struct seqcount seq
        16 |       unsigned int sequence
        20 |       struct lockdep_map dep_map
        20 |         struct lock_class_key * key
        24 |         struct lock_class *[2] class_cache
        32 |         const char * name
        36 |         u8 wait_type_outer
        37 |         u8 wait_type_inner
        38 |         u8 lock_type
        40 |         int cpu
        44 |         unsigned long ip
           | [sizeof=48, align=16]

*** Dumping AST Record Layout
         0 | struct netdev_notifier_info
         0 |   struct net_device * dev
         4 |   struct netlink_ext_ack * extack
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct netdev_xmit
         0 |   u16 recursion
         2 |   u8 more
         3 |   u8 skip_txqueue
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct softnet_data
         0 |   struct list_head poll_list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct sk_buff_head process_queue
         8 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
         8 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
         8 |         struct sk_buff * next
        12 |         struct sk_buff * prev
         8 |       struct sk_buff_list list
         8 |         struct sk_buff * next
        12 |         struct sk_buff * prev
        16 |     __u32 qlen
        20 |     struct spinlock lock
        20 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        20 |         struct raw_spinlock rlock
        20 |           arch_spinlock_t raw_lock
        20 |             volatile unsigned int slock
        24 |           unsigned int magic
        28 |           unsigned int owner_cpu
        32 |           void * owner
        36 |           struct lockdep_map dep_map
        36 |             struct lock_class_key * key
        40 |             struct lock_class *[2] class_cache
        48 |             const char * name
        52 |             u8 wait_type_outer
        53 |             u8 wait_type_inner
        54 |             u8 lock_type
        56 |             int cpu
        60 |             unsigned long ip
        20 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        20 |           u8[16] __padding
        36 |           struct lockdep_map dep_map
        36 |             struct lock_class_key * key
        40 |             struct lock_class *[2] class_cache
        48 |             const char * name
        52 |             u8 wait_type_outer
        53 |             u8 wait_type_inner
        54 |             u8 lock_type
        56 |             int cpu
        60 |             unsigned long ip
        64 |   local_lock_t process_queue_bh_lock
        64 |     struct lockdep_map dep_map
        64 |       struct lock_class_key * key
        68 |       struct lock_class *[2] class_cache
        76 |       const char * name
        80 |       u8 wait_type_outer
        81 |       u8 wait_type_inner
        82 |       u8 lock_type
        84 |       int cpu
        88 |       unsigned long ip
        92 |     struct task_struct * owner
        96 |   unsigned int processed
       100 |   unsigned int time_squeeze
       104 |   unsigned int received_rps
       108 |   bool in_net_rx_action
       109 |   bool in_napi_threaded_poll
       112 |   struct Qdisc * output_queue
       116 |   struct Qdisc ** output_queue_tailp
       120 |   struct sk_buff * completion_queue
       124 |   struct sk_buff_head xfrm_backlog
       124 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       124 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       124 |         struct sk_buff * next
       128 |         struct sk_buff * prev
       124 |       struct sk_buff_list list
       124 |         struct sk_buff * next
       128 |         struct sk_buff * prev
       132 |     __u32 qlen
       136 |     struct spinlock lock
       136 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       136 |         struct raw_spinlock rlock
       136 |           arch_spinlock_t raw_lock
       136 |             volatile unsigned int slock
       140 |           unsigned int magic
       144 |           unsigned int owner_cpu
       148 |           void * owner
       152 |           struct lockdep_map dep_map
       152 |             struct lock_class_key * key
       156 |             struct lock_class *[2] class_cache
       164 |             const char * name
       168 |             u8 wait_type_outer
       169 |             u8 wait_type_inner
       170 |             u8 lock_type
       172 |             int cpu
       176 |             unsigned long ip
       136 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       136 |           u8[16] __padding
       152 |           struct lockdep_map dep_map
       152 |             struct lock_class_key * key
       156 |             struct lock_class *[2] class_cache
       164 |             const char * name
       168 |             u8 wait_type_outer
       169 |             u8 wait_type_inner
       170 |             u8 lock_type
       172 |             int cpu
       176 |             unsigned long ip
       180 |   struct netdev_xmit xmit
       180 |     u16 recursion
       182 |     u8 more
       183 |     u8 skip_txqueue
       184 |   struct sk_buff_head input_pkt_queue
       184 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       184 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       184 |         struct sk_buff * next
       188 |         struct sk_buff * prev
       184 |       struct sk_buff_list list
       184 |         struct sk_buff * next
       188 |         struct sk_buff * prev
       192 |     __u32 qlen
       196 |     struct spinlock lock
       196 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       196 |         struct raw_spinlock rlock
       196 |           arch_spinlock_t raw_lock
       196 |             volatile unsigned int slock
       200 |           unsigned int magic
       204 |           unsigned int owner_cpu
       208 |           void * owner
       212 |           struct lockdep_map dep_map
       212 |             struct lock_class_key * key
       216 |             struct lock_class *[2] class_cache
       224 |             const char * name
       228 |             u8 wait_type_outer
       229 |             u8 wait_type_inner
       230 |             u8 lock_type
       232 |             int cpu
       236 |             unsigned long ip
       196 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       196 |           u8[16] __padding
       212 |           struct lockdep_map dep_map
       212 |             struct lock_class_key * key
       216 |             struct lock_class *[2] class_cache
       224 |             const char * name
       228 |             u8 wait_type_outer
       229 |             u8 wait_type_inner
       230 |             u8 lock_type
       232 |             int cpu
       236 |             unsigned long ip
       240 |   struct napi_struct backlog
       240 |     struct list_head poll_list
       240 |       struct list_head * next
       244 |       struct list_head * prev
       248 |     unsigned long state
       252 |     int weight
       256 |     int defer_hard_irqs_count
       260 |     unsigned long gro_bitmask
       264 |     int (*)(struct napi_struct *, int) poll
       268 |     int list_owner
       272 |     struct net_device * dev
       276 |     struct gro_list[8] gro_hash
       372 |     struct sk_buff * skb
       376 |     struct list_head rx_list
       376 |       struct list_head * next
       380 |       struct list_head * prev
       384 |     int rx_count
       388 |     unsigned int napi_id
       392 |     struct hrtimer timer
       392 |       struct timerqueue_node node
       392 |         struct rb_node node
       392 |           unsigned long __rb_parent_color
       396 |           struct rb_node * rb_right
       400 |           struct rb_node * rb_left
       408 |         ktime_t expires
       416 |       ktime_t _softexpires
       424 |       enum hrtimer_restart (*)(struct hrtimer *) function
       428 |       struct hrtimer_clock_base * base
       432 |       u8 state
       433 |       u8 is_rel
       434 |       u8 is_soft
       435 |       u8 is_hard
       440 |     struct task_struct * thread
       444 |     struct list_head dev_list
       444 |       struct list_head * next
       448 |       struct list_head * prev
       452 |     struct hlist_node napi_hash_node
       452 |       struct hlist_node * next
       456 |       struct hlist_node ** pprev
       460 |     int irq
       464 |   atomic_t dropped
       464 |     int counter
       468 |   struct spinlock defer_lock
       468 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       468 |       struct raw_spinlock rlock
       468 |         arch_spinlock_t raw_lock
       468 |           volatile unsigned int slock
       472 |         unsigned int magic
       476 |         unsigned int owner_cpu
       480 |         void * owner
       484 |         struct lockdep_map dep_map
       484 |           struct lock_class_key * key
       488 |           struct lock_class *[2] class_cache
       496 |           const char * name
       500 |           u8 wait_type_outer
       501 |           u8 wait_type_inner
       502 |           u8 lock_type
       504 |           int cpu
       508 |           unsigned long ip
       468 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       468 |         u8[16] __padding
       484 |         struct lockdep_map dep_map
       484 |           struct lock_class_key * key
       488 |           struct lock_class *[2] class_cache
       496 |           const char * name
       500 |           u8 wait_type_outer
       501 |           u8 wait_type_inner
       502 |           u8 lock_type
       504 |           int cpu
       508 |           unsigned long ip
       512 |   int defer_count
       516 |   int defer_ipi_scheduled
       520 |   struct sk_buff * defer_list
       528 |   struct __call_single_data defer_csd
       528 |     struct __call_single_node node
       528 |       struct llist_node llist
       528 |         struct llist_node * next
       532 |       union __call_single_node::(anonymous at ../include/linux/smp_types.h:60:2) 
       532 |         unsigned int u_flags
       532 |         atomic_t a_flags
       532 |           int counter
       536 |     smp_call_func_t func
       540 |     void * info
           | [sizeof=544, align=16]

*** Dumping AST Record Layout
         0 | struct ifslave
         0 |   __s32 slave_id
         4 |   char[16] slave_name
        20 |   __s8 link
        21 |   __s8 state
        24 |   __u32 link_failure_count
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct net_device_ops
         0 |   int (*)(struct net_device *) ndo_init
         4 |   void (*)(struct net_device *) ndo_uninit
         8 |   int (*)(struct net_device *) ndo_open
        12 |   int (*)(struct net_device *) ndo_stop
        16 |   netdev_tx_t (*)(struct sk_buff *, struct net_device *) ndo_start_xmit
        20 |   netdev_features_t (*)(struct sk_buff *, struct net_device *, netdev_features_t) ndo_features_check
        24 |   u16 (*)(struct net_device *, struct sk_buff *, struct net_device *) ndo_select_queue
        28 |   void (*)(struct net_device *, int) ndo_change_rx_flags
        32 |   void (*)(struct net_device *) ndo_set_rx_mode
        36 |   int (*)(struct net_device *, void *) ndo_set_mac_address
        40 |   int (*)(struct net_device *) ndo_validate_addr
        44 |   int (*)(struct net_device *, struct ifreq *, int) ndo_do_ioctl
        48 |   int (*)(struct net_device *, struct ifreq *, int) ndo_eth_ioctl
        52 |   int (*)(struct net_device *, struct ifreq *, int) ndo_siocbond
        56 |   int (*)(struct net_device *, struct if_settings *) ndo_siocwandev
        60 |   int (*)(struct net_device *, struct ifreq *, void *, int) ndo_siocdevprivate
        64 |   int (*)(struct net_device *, struct ifmap *) ndo_set_config
        68 |   int (*)(struct net_device *, int) ndo_change_mtu
        72 |   int (*)(struct net_device *, struct neigh_parms *) ndo_neigh_setup
        76 |   void (*)(struct net_device *, unsigned int) ndo_tx_timeout
        80 |   void (*)(struct net_device *, struct rtnl_link_stats64 *) ndo_get_stats64
        84 |   bool (*)(const struct net_device *, int) ndo_has_offload_stats
        88 |   int (*)(int, const struct net_device *, void *) ndo_get_offload_stats
        92 |   struct net_device_stats *(*)(struct net_device *) ndo_get_stats
        96 |   int (*)(struct net_device *, __be16, u16) ndo_vlan_rx_add_vid
       100 |   int (*)(struct net_device *, __be16, u16) ndo_vlan_rx_kill_vid
       104 |   int (*)(struct net_device *, int, u8 *) ndo_set_vf_mac
       108 |   int (*)(struct net_device *, int, u16, u8, __be16) ndo_set_vf_vlan
       112 |   int (*)(struct net_device *, int, int, int) ndo_set_vf_rate
       116 |   int (*)(struct net_device *, int, bool) ndo_set_vf_spoofchk
       120 |   int (*)(struct net_device *, int, bool) ndo_set_vf_trust
       124 |   int (*)(struct net_device *, int, struct ifla_vf_info *) ndo_get_vf_config
       128 |   int (*)(struct net_device *, int, int) ndo_set_vf_link_state
       132 |   int (*)(struct net_device *, int, struct ifla_vf_stats *) ndo_get_vf_stats
       136 |   int (*)(struct net_device *, int, struct nlattr **) ndo_set_vf_port
       140 |   int (*)(struct net_device *, int, struct sk_buff *) ndo_get_vf_port
       144 |   int (*)(struct net_device *, int, struct ifla_vf_guid *, struct ifla_vf_guid *) ndo_get_vf_guid
       148 |   int (*)(struct net_device *, int, u64, int) ndo_set_vf_guid
       152 |   int (*)(struct net_device *, int, bool) ndo_set_vf_rss_query_en
       156 |   int (*)(struct net_device *, enum tc_setup_type, void *) ndo_setup_tc
       160 |   int (*)(struct net_device *, struct net_device *, struct netlink_ext_ack *) ndo_add_slave
       164 |   int (*)(struct net_device *, struct net_device *) ndo_del_slave
       168 |   struct net_device *(*)(struct net_device *, struct sk_buff *, bool) ndo_get_xmit_slave
       172 |   struct net_device *(*)(struct net_device *, struct sock *) ndo_sk_get_lower_dev
       176 |   netdev_features_t (*)(struct net_device *, netdev_features_t) ndo_fix_features
       180 |   int (*)(struct net_device *, netdev_features_t) ndo_set_features
       184 |   int (*)(struct net_device *, struct neighbour *) ndo_neigh_construct
       188 |   void (*)(struct net_device *, struct neighbour *) ndo_neigh_destroy
       192 |   int (*)(struct ndmsg *, struct nlattr **, struct net_device *, const unsigned char *, u16, u16, struct netlink_ext_ack *) ndo_fdb_add
       196 |   int (*)(struct ndmsg *, struct nlattr **, struct net_device *, const unsigned char *, u16, struct netlink_ext_ack *) ndo_fdb_del
       200 |   int (*)(struct nlmsghdr *, struct net_device *, struct netlink_ext_ack *) ndo_fdb_del_bulk
       204 |   int (*)(struct sk_buff *, struct netlink_callback *, struct net_device *, struct net_device *, int *) ndo_fdb_dump
       208 |   int (*)(struct sk_buff *, struct nlattr **, struct net_device *, const unsigned char *, u16, u32, u32, struct netlink_ext_ack *) ndo_fdb_get
       212 |   int (*)(struct net_device *, struct nlattr **, u16, struct netlink_ext_ack *) ndo_mdb_add
       216 |   int (*)(struct net_device *, struct nlattr **, struct netlink_ext_ack *) ndo_mdb_del
       220 |   int (*)(struct net_device *, struct nlattr **, struct netlink_ext_ack *) ndo_mdb_del_bulk
       224 |   int (*)(struct net_device *, struct sk_buff *, struct netlink_callback *) ndo_mdb_dump
       228 |   int (*)(struct net_device *, struct nlattr **, u32, u32, struct netlink_ext_ack *) ndo_mdb_get
       232 |   int (*)(struct net_device *, struct nlmsghdr *, u16, struct netlink_ext_ack *) ndo_bridge_setlink
       236 |   int (*)(struct sk_buff *, u32, u32, struct net_device *, u32, int) ndo_bridge_getlink
       240 |   int (*)(struct net_device *, struct nlmsghdr *, u16) ndo_bridge_dellink
       244 |   int (*)(struct net_device *, bool) ndo_change_carrier
       248 |   int (*)(struct net_device *, struct netdev_phys_item_id *) ndo_get_phys_port_id
       252 |   int (*)(struct net_device *, struct netdev_phys_item_id *) ndo_get_port_parent_id
       256 |   int (*)(struct net_device *, char *, size_t) ndo_get_phys_port_name
       260 |   void *(*)(struct net_device *, struct net_device *) ndo_dfwd_add_station
       264 |   void (*)(struct net_device *, void *) ndo_dfwd_del_station
       268 |   int (*)(struct net_device *, int, u32) ndo_set_tx_maxrate
       272 |   int (*)(const struct net_device *) ndo_get_iflink
       276 |   int (*)(struct net_device *, struct sk_buff *) ndo_fill_metadata_dst
       280 |   void (*)(struct net_device *, int) ndo_set_rx_headroom
       284 |   int (*)(struct net_device *, struct netdev_bpf *) ndo_bpf
       288 |   int (*)(struct net_device *, int, struct xdp_frame **, u32) ndo_xdp_xmit
       292 |   struct net_device *(*)(struct net_device *, struct xdp_buff *) ndo_xdp_get_xmit_slave
       296 |   int (*)(struct net_device *, u32, u32) ndo_xsk_wakeup
       300 |   int (*)(struct net_device *, struct ip_tunnel_parm_kern *, int) ndo_tunnel_ctl
       304 |   struct net_device *(*)(struct net_device *) ndo_get_peer_dev
       308 |   int (*)(struct net_device_path_ctx *, struct net_device_path *) ndo_fill_forward_path
       312 |   ktime_t (*)(struct net_device *, const struct skb_shared_hwtstamps *, bool) ndo_get_tstamp
       316 |   int (*)(struct net_device *, struct kernel_hwtstamp_config *) ndo_hwtstamp_get
       320 |   int (*)(struct net_device *, struct kernel_hwtstamp_config *, struct netlink_ext_ack *) ndo_hwtstamp_set
           | [sizeof=324, align=4]

*** Dumping AST Record Layout
         0 | struct poll_table_struct
         0 |   poll_queue_proc _qproc
         4 |   __poll_t _key
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct poll_table_entry
         0 |   struct file * filp
         4 |   __poll_t key
         8 |   struct wait_queue_entry wait
         8 |     unsigned int flags
        12 |     void * private
        16 |     wait_queue_func_t func
        20 |     struct list_head entry
        20 |       struct list_head * next
        24 |       struct list_head * prev
        28 |   wait_queue_head_t * wait_address
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct rta_session::(unnamed at ../include/uapi/linux/rtnetlink.h:523:3)
         0 |   __u16 sport
         2 |   __u16 dport
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | class_rtnl_t
         0 |   void * lock
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | rcuref_t
         0 |   atomic_t refcnt
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct nlattr
         0 |   __u16 nla_len
         2 |   __u16 nla_type
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct nla_policy::(anonymous at ../include/net/netlink.h:370:3)
         0 |   s16 min
         2 |   s16 max
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union nla_policy::(anonymous at ../include/net/netlink.h:339:2)
         0 |   u16 strict_start_type
         0 |   const u32 bitfield32_valid
         0 |   const u32 mask
         0 |   const char * reject_message
         0 |   const struct nla_policy * nested_policy
         0 |   const struct netlink_range_validation * range
         0 |   const struct netlink_range_validation_signed * range_signed
         0 |   struct nla_policy::(anonymous at ../include/net/netlink.h:370:3) 
         0 |     s16 min
         2 |     s16 max
         0 |   int (*)(const struct nlattr *, struct netlink_ext_ack *) validate
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct nla_policy
         0 |   u8 type
         1 |   u8 validation_type
         2 |   u16 len
         4 |   union nla_policy::(anonymous at ../include/net/netlink.h:339:2) 
         4 |     u16 strict_start_type
         4 |     const u32 bitfield32_valid
         4 |     const u32 mask
         4 |     const char * reject_message
         4 |     const struct nla_policy * nested_policy
         4 |     const struct netlink_range_validation * range
         4 |     const struct netlink_range_validation_signed * range_signed
         4 |     struct nla_policy::(anonymous at ../include/net/netlink.h:370:3) 
         4 |       s16 min
         6 |       s16 max
         4 |     int (*)(const struct nlattr *, struct netlink_ext_ack *) validate
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct netlink_ext_ack
         0 |   const char * _msg
         4 |   const struct nlattr * bad_attr
         8 |   const struct nla_policy * policy
        12 |   const struct nlattr * miss_nest
        16 |   u16 miss_type
        18 |   u8[20] cookie
        38 |   u8 cookie_len
        39 |   char[80] _msg_buf
           | [sizeof=120, align=4]

*** Dumping AST Record Layout
         0 | struct nla_bitfield32
         0 |   __u32 value
         4 |   __u32 selector
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct rtgenmsg
         0 |   unsigned char rtgen_family
           | [sizeof=1, align=1]

*** Dumping AST Record Layout
         0 | struct hh_cache
         0 |   unsigned int hh_len
         4 |   seqlock_t hh_lock
         4 |     struct seqcount_spinlock seqcount
         4 |       struct seqcount seqcount
         4 |         unsigned int sequence
         8 |         struct lockdep_map dep_map
         8 |           struct lock_class_key * key
        12 |           struct lock_class *[2] class_cache
        20 |           const char * name
        24 |           u8 wait_type_outer
        25 |           u8 wait_type_inner
        26 |           u8 lock_type
        28 |           int cpu
        32 |           unsigned long ip
        36 |       spinlock_t * lock
        40 |     struct spinlock lock
        40 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        40 |         struct raw_spinlock rlock
        40 |           arch_spinlock_t raw_lock
        40 |             volatile unsigned int slock
        44 |           unsigned int magic
        48 |           unsigned int owner_cpu
        52 |           void * owner
        56 |           struct lockdep_map dep_map
        56 |             struct lock_class_key * key
        60 |             struct lock_class *[2] class_cache
        68 |             const char * name
        72 |             u8 wait_type_outer
        73 |             u8 wait_type_inner
        74 |             u8 lock_type
        76 |             int cpu
        80 |             unsigned long ip
        40 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        40 |           u8[16] __padding
        56 |           struct lockdep_map dep_map
        56 |             struct lock_class_key * key
        60 |             struct lock_class *[2] class_cache
        68 |             const char * name
        72 |             u8 wait_type_outer
        73 |             u8 wait_type_inner
        74 |             u8 lock_type
        76 |             int cpu
        80 |             unsigned long ip
        84 |   unsigned long[24] hh_data
           | [sizeof=180, align=4]

*** Dumping AST Record Layout
         0 | struct neighbour
         0 |   struct neighbour * next
         4 |   struct neigh_table * tbl
         8 |   struct neigh_parms * parms
        12 |   unsigned long confirmed
        16 |   unsigned long updated
        20 |   rwlock_t lock
        20 |     arch_rwlock_t raw_lock
        20 |     unsigned int magic
        24 |     unsigned int owner_cpu
        28 |     void * owner
        32 |     struct lockdep_map dep_map
        32 |       struct lock_class_key * key
        36 |       struct lock_class *[2] class_cache
        44 |       const char * name
        48 |       u8 wait_type_outer
        49 |       u8 wait_type_inner
        50 |       u8 lock_type
        52 |       int cpu
        56 |       unsigned long ip
        60 |   struct refcount_struct refcnt
        60 |     atomic_t refs
        60 |       int counter
        64 |   unsigned int arp_queue_len_bytes
        68 |   struct sk_buff_head arp_queue
        68 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
        68 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
        68 |         struct sk_buff * next
        72 |         struct sk_buff * prev
        68 |       struct sk_buff_list list
        68 |         struct sk_buff * next
        72 |         struct sk_buff * prev
        76 |     __u32 qlen
        80 |     struct spinlock lock
        80 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        80 |         struct raw_spinlock rlock
        80 |           arch_spinlock_t raw_lock
        80 |             volatile unsigned int slock
        84 |           unsigned int magic
        88 |           unsigned int owner_cpu
        92 |           void * owner
        96 |           struct lockdep_map dep_map
        96 |             struct lock_class_key * key
       100 |             struct lock_class *[2] class_cache
       108 |             const char * name
       112 |             u8 wait_type_outer
       113 |             u8 wait_type_inner
       114 |             u8 lock_type
       116 |             int cpu
       120 |             unsigned long ip
        80 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        80 |           u8[16] __padding
        96 |           struct lockdep_map dep_map
        96 |             struct lock_class_key * key
       100 |             struct lock_class *[2] class_cache
       108 |             const char * name
       112 |             u8 wait_type_outer
       113 |             u8 wait_type_inner
       114 |             u8 lock_type
       116 |             int cpu
       120 |             unsigned long ip
       124 |   struct timer_list timer
       124 |     struct hlist_node entry
       124 |       struct hlist_node * next
       128 |       struct hlist_node ** pprev
       132 |     unsigned long expires
       136 |     void (*)(struct timer_list *) function
       140 |     u32 flags
       144 |     struct lockdep_map lockdep_map
       144 |       struct lock_class_key * key
       148 |       struct lock_class *[2] class_cache
       156 |       const char * name
       160 |       u8 wait_type_outer
       161 |       u8 wait_type_inner
       162 |       u8 lock_type
       164 |       int cpu
       168 |       unsigned long ip
       172 |   unsigned long used
       176 |   atomic_t probes
       176 |     int counter
       180 |   u8 nud_state
       181 |   u8 type
       182 |   u8 dead
       183 |   u8 protocol
       184 |   u32 flags
       188 |   seqlock_t ha_lock
       188 |     struct seqcount_spinlock seqcount
       188 |       struct seqcount seqcount
       188 |         unsigned int sequence
       192 |         struct lockdep_map dep_map
       192 |           struct lock_class_key * key
       196 |           struct lock_class *[2] class_cache
       204 |           const char * name
       208 |           u8 wait_type_outer
       209 |           u8 wait_type_inner
       210 |           u8 lock_type
       212 |           int cpu
       216 |           unsigned long ip
       220 |       spinlock_t * lock
       224 |     struct spinlock lock
       224 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       224 |         struct raw_spinlock rlock
       224 |           arch_spinlock_t raw_lock
       224 |             volatile unsigned int slock
       228 |           unsigned int magic
       232 |           unsigned int owner_cpu
       236 |           void * owner
       240 |           struct lockdep_map dep_map
       240 |             struct lock_class_key * key
       244 |             struct lock_class *[2] class_cache
       252 |             const char * name
       256 |             u8 wait_type_outer
       257 |             u8 wait_type_inner
       258 |             u8 lock_type
       260 |             int cpu
       264 |             unsigned long ip
       224 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       224 |           u8[16] __padding
       240 |           struct lockdep_map dep_map
       240 |             struct lock_class_key * key
       244 |             struct lock_class *[2] class_cache
       252 |             const char * name
       256 |             u8 wait_type_outer
       257 |             u8 wait_type_inner
       258 |             u8 lock_type
       260 |             int cpu
       264 |             unsigned long ip
       272 |   unsigned char[32] ha
       304 |   struct hh_cache hh
       304 |     unsigned int hh_len
       308 |     seqlock_t hh_lock
       308 |       struct seqcount_spinlock seqcount
       308 |         struct seqcount seqcount
       308 |           unsigned int sequence
       312 |           struct lockdep_map dep_map
       312 |             struct lock_class_key * key
       316 |             struct lock_class *[2] class_cache
       324 |             const char * name
       328 |             u8 wait_type_outer
       329 |             u8 wait_type_inner
       330 |             u8 lock_type
       332 |             int cpu
       336 |             unsigned long ip
       340 |         spinlock_t * lock
       344 |       struct spinlock lock
       344 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       344 |           struct raw_spinlock rlock
       344 |             arch_spinlock_t raw_lock
       344 |               volatile unsigned int slock
       348 |             unsigned int magic
       352 |             unsigned int owner_cpu
       356 |             void * owner
       360 |             struct lockdep_map dep_map
       360 |               struct lock_class_key * key
       364 |               struct lock_class *[2] class_cache
       372 |               const char * name
       376 |               u8 wait_type_outer
       377 |               u8 wait_type_inner
       378 |               u8 lock_type
       380 |               int cpu
       384 |               unsigned long ip
       344 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       344 |             u8[16] __padding
       360 |             struct lockdep_map dep_map
       360 |               struct lock_class_key * key
       364 |               struct lock_class *[2] class_cache
       372 |               const char * name
       376 |               u8 wait_type_outer
       377 |               u8 wait_type_inner
       378 |               u8 lock_type
       380 |               int cpu
       384 |               unsigned long ip
       388 |     unsigned long[24] hh_data
       484 |   int (*)(struct neighbour *, struct sk_buff *) output
       488 |   const struct neigh_ops * ops
       492 |   struct list_head gc_list
       492 |     struct list_head * next
       496 |     struct list_head * prev
       500 |   struct list_head managed_list
       500 |     struct list_head * next
       504 |     struct list_head * prev
       508 |   struct callback_head rcu
       508 |     struct callback_head * next
       512 |     void (*)(struct callback_head *) func
       516 |   struct net_device * dev
       520 |   netdevice_tracker dev_tracker
       524 |   u8[] primary_key
           | [sizeof=528, align=8]

*** Dumping AST Record Layout
         0 | struct neigh_parms
         0 |   possible_net_t net
         0 |     struct net * net
         4 |   struct net_device * dev
         8 |   netdevice_tracker dev_tracker
        12 |   struct list_head list
        12 |     struct list_head * next
        16 |     struct list_head * prev
        20 |   int (*)(struct neighbour *) neigh_setup
        24 |   struct neigh_table * tbl
        28 |   void * sysctl_table
        32 |   int dead
        36 |   struct refcount_struct refcnt
        36 |     atomic_t refs
        36 |       int counter
        40 |   struct callback_head callback_head
        40 |     struct callback_head * next
        44 |     void (*)(struct callback_head *) func
        48 |   int reachable_time
        52 |   u32 qlen
        56 |   int[14] data
       112 |   unsigned long[1] data_state
           | [sizeof=116, align=4]

*** Dumping AST Record Layout
         0 | struct neigh_table
         0 |   int family
         4 |   unsigned int entry_size
         8 |   unsigned int key_len
        12 |   __be16 protocol
        16 |   __u32 (*)(const void *, const struct net_device *, __u32 *) hash
        20 |   bool (*)(const struct neighbour *, const void *) key_eq
        24 |   int (*)(struct neighbour *) constructor
        28 |   int (*)(struct pneigh_entry *) pconstructor
        32 |   void (*)(struct pneigh_entry *) pdestructor
        36 |   void (*)(struct sk_buff *) proxy_redo
        40 |   int (*)(const void *) is_multicast
        44 |   bool (*)(const struct net_device *, struct netlink_ext_ack *) allow_add
        48 |   char * id
        52 |   struct neigh_parms parms
        52 |     possible_net_t net
        52 |       struct net * net
        56 |     struct net_device * dev
        60 |     netdevice_tracker dev_tracker
        64 |     struct list_head list
        64 |       struct list_head * next
        68 |       struct list_head * prev
        72 |     int (*)(struct neighbour *) neigh_setup
        76 |     struct neigh_table * tbl
        80 |     void * sysctl_table
        84 |     int dead
        88 |     struct refcount_struct refcnt
        88 |       atomic_t refs
        88 |         int counter
        92 |     struct callback_head callback_head
        92 |       struct callback_head * next
        96 |       void (*)(struct callback_head *) func
       100 |     int reachable_time
       104 |     u32 qlen
       108 |     int[14] data
       164 |     unsigned long[1] data_state
       168 |   struct list_head parms_list
       168 |     struct list_head * next
       172 |     struct list_head * prev
       176 |   int gc_interval
       180 |   int gc_thresh1
       184 |   int gc_thresh2
       188 |   int gc_thresh3
       192 |   unsigned long last_flush
       196 |   struct delayed_work gc_work
       196 |     struct work_struct work
       196 |       atomic_t data
       196 |         int counter
       200 |       struct list_head entry
       200 |         struct list_head * next
       204 |         struct list_head * prev
       208 |       work_func_t func
       212 |       struct lockdep_map lockdep_map
       212 |         struct lock_class_key * key
       216 |         struct lock_class *[2] class_cache
       224 |         const char * name
       228 |         u8 wait_type_outer
       229 |         u8 wait_type_inner
       230 |         u8 lock_type
       232 |         int cpu
       236 |         unsigned long ip
       240 |     struct timer_list timer
       240 |       struct hlist_node entry
       240 |         struct hlist_node * next
       244 |         struct hlist_node ** pprev
       248 |       unsigned long expires
       252 |       void (*)(struct timer_list *) function
       256 |       u32 flags
       260 |       struct lockdep_map lockdep_map
       260 |         struct lock_class_key * key
       264 |         struct lock_class *[2] class_cache
       272 |         const char * name
       276 |         u8 wait_type_outer
       277 |         u8 wait_type_inner
       278 |         u8 lock_type
       280 |         int cpu
       284 |         unsigned long ip
       288 |     struct workqueue_struct * wq
       292 |     int cpu
       296 |   struct delayed_work managed_work
       296 |     struct work_struct work
       296 |       atomic_t data
       296 |         int counter
       300 |       struct list_head entry
       300 |         struct list_head * next
       304 |         struct list_head * prev
       308 |       work_func_t func
       312 |       struct lockdep_map lockdep_map
       312 |         struct lock_class_key * key
       316 |         struct lock_class *[2] class_cache
       324 |         const char * name
       328 |         u8 wait_type_outer
       329 |         u8 wait_type_inner
       330 |         u8 lock_type
       332 |         int cpu
       336 |         unsigned long ip
       340 |     struct timer_list timer
       340 |       struct hlist_node entry
       340 |         struct hlist_node * next
       344 |         struct hlist_node ** pprev
       348 |       unsigned long expires
       352 |       void (*)(struct timer_list *) function
       356 |       u32 flags
       360 |       struct lockdep_map lockdep_map
       360 |         struct lock_class_key * key
       364 |         struct lock_class *[2] class_cache
       372 |         const char * name
       376 |         u8 wait_type_outer
       377 |         u8 wait_type_inner
       378 |         u8 lock_type
       380 |         int cpu
       384 |         unsigned long ip
       388 |     struct workqueue_struct * wq
       392 |     int cpu
       396 |   struct timer_list proxy_timer
       396 |     struct hlist_node entry
       396 |       struct hlist_node * next
       400 |       struct hlist_node ** pprev
       404 |     unsigned long expires
       408 |     void (*)(struct timer_list *) function
       412 |     u32 flags
       416 |     struct lockdep_map lockdep_map
       416 |       struct lock_class_key * key
       420 |       struct lock_class *[2] class_cache
       428 |       const char * name
       432 |       u8 wait_type_outer
       433 |       u8 wait_type_inner
       434 |       u8 lock_type
       436 |       int cpu
       440 |       unsigned long ip
       444 |   struct sk_buff_head proxy_queue
       444 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       444 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       444 |         struct sk_buff * next
       448 |         struct sk_buff * prev
       444 |       struct sk_buff_list list
       444 |         struct sk_buff * next
       448 |         struct sk_buff * prev
       452 |     __u32 qlen
       456 |     struct spinlock lock
       456 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       456 |         struct raw_spinlock rlock
       456 |           arch_spinlock_t raw_lock
       456 |             volatile unsigned int slock
       460 |           unsigned int magic
       464 |           unsigned int owner_cpu
       468 |           void * owner
       472 |           struct lockdep_map dep_map
       472 |             struct lock_class_key * key
       476 |             struct lock_class *[2] class_cache
       484 |             const char * name
       488 |             u8 wait_type_outer
       489 |             u8 wait_type_inner
       490 |             u8 lock_type
       492 |             int cpu
       496 |             unsigned long ip
       456 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       456 |           u8[16] __padding
       472 |           struct lockdep_map dep_map
       472 |             struct lock_class_key * key
       476 |             struct lock_class *[2] class_cache
       484 |             const char * name
       488 |             u8 wait_type_outer
       489 |             u8 wait_type_inner
       490 |             u8 lock_type
       492 |             int cpu
       496 |             unsigned long ip
       500 |   atomic_t entries
       500 |     int counter
       504 |   atomic_t gc_entries
       504 |     int counter
       508 |   struct list_head gc_list
       508 |     struct list_head * next
       512 |     struct list_head * prev
       516 |   struct list_head managed_list
       516 |     struct list_head * next
       520 |     struct list_head * prev
       524 |   rwlock_t lock
       524 |     arch_rwlock_t raw_lock
       524 |     unsigned int magic
       528 |     unsigned int owner_cpu
       532 |     void * owner
       536 |     struct lockdep_map dep_map
       536 |       struct lock_class_key * key
       540 |       struct lock_class *[2] class_cache
       548 |       const char * name
       552 |       u8 wait_type_outer
       553 |       u8 wait_type_inner
       554 |       u8 lock_type
       556 |       int cpu
       560 |       unsigned long ip
       564 |   unsigned long last_rand
       568 |   struct neigh_statistics * stats
       572 |   struct neigh_hash_table * nht
       576 |   struct pneigh_entry ** phash_buckets
           | [sizeof=580, align=4]

*** Dumping AST Record Layout
         0 | struct seq_net_private
         0 |   struct net * net
         4 |   netns_tracker ns_tracker
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct dst_entry
         0 |   struct net_device * dev
         4 |   struct dst_ops * ops
         8 |   unsigned long _metrics
        12 |   unsigned long expires
        16 |   struct xfrm_state * xfrm
        20 |   int (*)(struct sk_buff *) input
        24 |   int (*)(struct net *, struct sock *, struct sk_buff *) output
        28 |   unsigned short flags
        30 |   short obsolete
        32 |   unsigned short header_len
        34 |   unsigned short trailer_len
        36 |   int __use
        40 |   unsigned long lastuse
        44 |   struct callback_head callback_head
        44 |     struct callback_head * next
        48 |     void (*)(struct callback_head *) func
        52 |   short error
        54 |   short __pad
        56 |   __u32 tclassid
        60 |   struct lwtunnel_state * lwtstate
        64 |   rcuref_t __rcuref
        64 |     atomic_t refcnt
        64 |       int counter
        68 |   netdevice_tracker dev_tracker
        72 |   struct list_head rt_uncached
        72 |     struct list_head * next
        76 |     struct list_head * prev
        80 |   struct uncached_list * rt_uncached_list
           | [sizeof=84, align=4]

*** Dumping AST Record Layout
         0 | struct fib_notifier_info
         0 |   int family
         4 |   struct netlink_ext_ack * extack
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct fib_kuid_range
         0 |   kuid_t start
         0 |     uid_t val
         4 |   kuid_t end
         4 |     uid_t val
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct fib_rule_port_range
         0 |   __u16 start
         2 |   __u16 end
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct fib_rule
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   int iifindex
        12 |   int oifindex
        16 |   u32 mark
        20 |   u32 mark_mask
        24 |   u32 flags
        28 |   u32 table
        32 |   u8 action
        33 |   u8 l3mdev
        34 |   u8 proto
        35 |   u8 ip_proto
        36 |   u32 target
        40 |   __be64 tun_id
        48 |   struct fib_rule * ctarget
        52 |   struct net * fr_net
        56 |   struct refcount_struct refcnt
        56 |     atomic_t refs
        56 |       int counter
        60 |   u32 pref
        64 |   int suppress_ifgroup
        68 |   int suppress_prefixlen
        72 |   char[16] iifname
        88 |   char[16] oifname
       104 |   struct fib_kuid_range uid_range
       104 |     kuid_t start
       104 |       uid_t val
       108 |     kuid_t end
       108 |       uid_t val
       112 |   struct fib_rule_port_range sport_range
       112 |     __u16 start
       114 |     __u16 end
       116 |   struct fib_rule_port_range dport_range
       116 |     __u16 start
       118 |     __u16 end
       120 |   struct callback_head rcu
       120 |     struct callback_head * next
       124 |     void (*)(struct callback_head *) func
           | [sizeof=128, align=8]

*** Dumping AST Record Layout
         0 | struct sock_common::(anonymous at ../include/net/sock.h:153:3)
         0 |   __be32 skc_daddr
         4 |   __be32 skc_rcv_saddr
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:151:2)
         0 |   __addrpair skc_addrpair
         0 |   struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |     __be32 skc_daddr
         4 |     __be32 skc_rcv_saddr
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:158:2)
         0 |   unsigned int skc_hash
         0 |   __u16[2] skc_u16hashes
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sock_common::(anonymous at ../include/net/sock.h:165:3)
         0 |   __be16 skc_dport
         2 |   __u16 skc_num
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:163:2)
         0 |   __portpair skc_portpair
         0 |   struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
         0 |     __be16 skc_dport
         2 |     __u16 skc_num
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:178:2)
         0 |   struct hlist_node skc_bind_node
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         0 |   struct hlist_node skc_portaddr_node
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:197:2)
         0 |   unsigned long skc_flags
         0 |   struct sock * skc_listener
         0 |   struct inet_timewait_death_row * skc_tw_dr
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:209:2)
         0 |   struct hlist_node skc_node
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         0 |   struct hlist_nulls_node skc_nulls_node
         0 |     struct hlist_nulls_node * next
         4 |     struct hlist_nulls_node ** pprev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:217:2)
         0 |   int skc_incoming_cpu
         0 |   u32 skc_rcv_wnd
         0 |   u32 skc_tw_rcv_nxt
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union sock_common::(anonymous at ../include/net/sock.h:226:2)
         0 |   u32 skc_rxhash
         0 |   u32 skc_window_clamp
         0 |   u32 skc_tw_snd_nxt
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sock_common
         0 |   union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |     __addrpair skc_addrpair
         0 |     struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |       __be32 skc_daddr
         4 |       __be32 skc_rcv_saddr
         8 |   union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |     unsigned int skc_hash
         8 |     __u16[2] skc_u16hashes
        12 |   union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |     __portpair skc_portpair
        12 |     struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |       __be16 skc_dport
        14 |       __u16 skc_num
        16 |   unsigned short skc_family
        18 |   volatile unsigned char skc_state
    19:0-3 |   unsigned char skc_reuse
    19:4-4 |   unsigned char skc_reuseport
    19:5-5 |   unsigned char skc_ipv6only
    19:6-6 |   unsigned char skc_net_refcnt
        20 |   int skc_bound_dev_if
        24 |   union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |     struct hlist_node skc_bind_node
        24 |       struct hlist_node * next
        28 |       struct hlist_node ** pprev
        24 |     struct hlist_node skc_portaddr_node
        24 |       struct hlist_node * next
        28 |       struct hlist_node ** pprev
        32 |   struct proto * skc_prot
        36 |   possible_net_t skc_net
        36 |     struct net * net
        40 |   struct in6_addr skc_v6_daddr
        40 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |       __u8[16] u6_addr8
        40 |       __be16[8] u6_addr16
        40 |       __be32[4] u6_addr32
        56 |   struct in6_addr skc_v6_rcv_saddr
        56 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |       __u8[16] u6_addr8
        56 |       __be16[8] u6_addr16
        56 |       __be32[4] u6_addr32
        72 |   atomic64_t skc_cookie
        72 |     s64 counter
        80 |   union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |     unsigned long skc_flags
        80 |     struct sock * skc_listener
        80 |     struct inet_timewait_death_row * skc_tw_dr
        84 |   int[0] skc_dontcopy_begin
        84 |   union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |     struct hlist_node skc_node
        84 |       struct hlist_node * next
        88 |       struct hlist_node ** pprev
        84 |     struct hlist_nulls_node skc_nulls_node
        84 |       struct hlist_nulls_node * next
        88 |       struct hlist_nulls_node ** pprev
        92 |   unsigned short skc_tx_queue_mapping
        94 |   unsigned short skc_rx_queue_mapping
        96 |   union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |     int skc_incoming_cpu
        96 |     u32 skc_rcv_wnd
        96 |     u32 skc_tw_rcv_nxt
       100 |   struct refcount_struct skc_refcnt
       100 |     atomic_t refs
       100 |       int counter
       104 |   int[0] skc_dontcopy_end
       104 |   union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |     u32 skc_rxhash
       104 |     u32 skc_window_clamp
       104 |     u32 skc_tw_snd_nxt
           | [sizeof=112, align=8]

*** Dumping AST Record Layout
         0 | struct sock::(unnamed at ../include/net/sock.h:395:2)
         0 |   atomic_t rmem_alloc
         0 |     int counter
         4 |   int len
         8 |   struct sk_buff * head
        12 |   struct sk_buff * tail
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union sock::(anonymous at ../include/net/sock.h:421:2)
         0 |   struct socket_wq * sk_wq
         0 |   struct socket_wq * sk_wq_raw
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | socket_lock_t
         0 |   struct spinlock slock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   int owned
        48 |   struct wait_queue_head wq
        48 |     struct spinlock lock
        48 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        48 |         struct raw_spinlock rlock
        48 |           arch_spinlock_t raw_lock
        48 |             volatile unsigned int slock
        52 |           unsigned int magic
        56 |           unsigned int owner_cpu
        60 |           void * owner
        64 |           struct lockdep_map dep_map
        64 |             struct lock_class_key * key
        68 |             struct lock_class *[2] class_cache
        76 |             const char * name
        80 |             u8 wait_type_outer
        81 |             u8 wait_type_inner
        82 |             u8 lock_type
        84 |             int cpu
        88 |             unsigned long ip
        48 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        48 |           u8[16] __padding
        64 |           struct lockdep_map dep_map
        64 |             struct lock_class_key * key
        68 |             struct lock_class *[2] class_cache
        76 |             const char * name
        80 |             u8 wait_type_outer
        81 |             u8 wait_type_inner
        82 |             u8 lock_type
        84 |             int cpu
        88 |             unsigned long ip
        92 |     struct list_head head
        92 |       struct list_head * next
        96 |       struct list_head * prev
       100 |   struct lockdep_map dep_map
       100 |     struct lock_class_key * key
       104 |     struct lock_class *[2] class_cache
       112 |     const char * name
       116 |     u8 wait_type_outer
       117 |     u8 wait_type_inner
       118 |     u8 lock_type
       120 |     int cpu
       124 |     unsigned long ip
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | union sock::(anonymous at ../include/net/sock.h:457:2)
         0 |   struct sk_buff * sk_send_head
         0 |   struct rb_root tcp_rtx_queue
         0 |     struct rb_node * rb_node
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sock_cgroup_data
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct sock
         0 |   struct sock_common __sk_common
         0 |     union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |       __addrpair skc_addrpair
         0 |       struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |         __be32 skc_daddr
         4 |         __be32 skc_rcv_saddr
         8 |     union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |       unsigned int skc_hash
         8 |       __u16[2] skc_u16hashes
        12 |     union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |       __portpair skc_portpair
        12 |       struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |         __be16 skc_dport
        14 |         __u16 skc_num
        16 |     unsigned short skc_family
        18 |     volatile unsigned char skc_state
    19:0-3 |     unsigned char skc_reuse
    19:4-4 |     unsigned char skc_reuseport
    19:5-5 |     unsigned char skc_ipv6only
    19:6-6 |     unsigned char skc_net_refcnt
        20 |     int skc_bound_dev_if
        24 |     union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |       struct hlist_node skc_bind_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        24 |       struct hlist_node skc_portaddr_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        32 |     struct proto * skc_prot
        36 |     possible_net_t skc_net
        36 |       struct net * net
        40 |     struct in6_addr skc_v6_daddr
        40 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |         __u8[16] u6_addr8
        40 |         __be16[8] u6_addr16
        40 |         __be32[4] u6_addr32
        56 |     struct in6_addr skc_v6_rcv_saddr
        56 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |         __u8[16] u6_addr8
        56 |         __be16[8] u6_addr16
        56 |         __be32[4] u6_addr32
        72 |     atomic64_t skc_cookie
        72 |       s64 counter
        80 |     union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |       unsigned long skc_flags
        80 |       struct sock * skc_listener
        80 |       struct inet_timewait_death_row * skc_tw_dr
        84 |     int[0] skc_dontcopy_begin
        84 |     union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |       struct hlist_node skc_node
        84 |         struct hlist_node * next
        88 |         struct hlist_node ** pprev
        84 |       struct hlist_nulls_node skc_nulls_node
        84 |         struct hlist_nulls_node * next
        88 |         struct hlist_nulls_node ** pprev
        92 |     unsigned short skc_tx_queue_mapping
        94 |     unsigned short skc_rx_queue_mapping
        96 |     union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |       int skc_incoming_cpu
        96 |       u32 skc_rcv_wnd
        96 |       u32 skc_tw_rcv_nxt
       100 |     struct refcount_struct skc_refcnt
       100 |       atomic_t refs
       100 |         int counter
       104 |     int[0] skc_dontcopy_end
       104 |     union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |       u32 skc_rxhash
       104 |       u32 skc_window_clamp
       104 |       u32 skc_tw_snd_nxt
       112 |   __u8[0] __cacheline_group_begin__sock_write_rx
       112 |   atomic_t sk_drops
       112 |     int counter
       116 |   __s32 sk_peek_off
       120 |   struct sk_buff_head sk_error_queue
       120 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |         struct sk_buff * next
       124 |         struct sk_buff * prev
       120 |       struct sk_buff_list list
       120 |         struct sk_buff * next
       124 |         struct sk_buff * prev
       128 |     __u32 qlen
       132 |     struct spinlock lock
       132 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       132 |         struct raw_spinlock rlock
       132 |           arch_spinlock_t raw_lock
       132 |             volatile unsigned int slock
       136 |           unsigned int magic
       140 |           unsigned int owner_cpu
       144 |           void * owner
       148 |           struct lockdep_map dep_map
       148 |             struct lock_class_key * key
       152 |             struct lock_class *[2] class_cache
       160 |             const char * name
       164 |             u8 wait_type_outer
       165 |             u8 wait_type_inner
       166 |             u8 lock_type
       168 |             int cpu
       172 |             unsigned long ip
       132 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       132 |           u8[16] __padding
       148 |           struct lockdep_map dep_map
       148 |             struct lock_class_key * key
       152 |             struct lock_class *[2] class_cache
       160 |             const char * name
       164 |             u8 wait_type_outer
       165 |             u8 wait_type_inner
       166 |             u8 lock_type
       168 |             int cpu
       172 |             unsigned long ip
       176 |   struct sk_buff_head sk_receive_queue
       176 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |         struct sk_buff * next
       180 |         struct sk_buff * prev
       176 |       struct sk_buff_list list
       176 |         struct sk_buff * next
       180 |         struct sk_buff * prev
       184 |     __u32 qlen
       188 |     struct spinlock lock
       188 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       188 |         struct raw_spinlock rlock
       188 |           arch_spinlock_t raw_lock
       188 |             volatile unsigned int slock
       192 |           unsigned int magic
       196 |           unsigned int owner_cpu
       200 |           void * owner
       204 |           struct lockdep_map dep_map
       204 |             struct lock_class_key * key
       208 |             struct lock_class *[2] class_cache
       216 |             const char * name
       220 |             u8 wait_type_outer
       221 |             u8 wait_type_inner
       222 |             u8 lock_type
       224 |             int cpu
       228 |             unsigned long ip
       188 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       188 |           u8[16] __padding
       204 |           struct lockdep_map dep_map
       204 |             struct lock_class_key * key
       208 |             struct lock_class *[2] class_cache
       216 |             const char * name
       220 |             u8 wait_type_outer
       221 |             u8 wait_type_inner
       222 |             u8 lock_type
       224 |             int cpu
       228 |             unsigned long ip
       232 |   struct sock::(unnamed at ../include/net/sock.h:395:2) sk_backlog
       232 |     atomic_t rmem_alloc
       232 |       int counter
       236 |     int len
       240 |     struct sk_buff * head
       244 |     struct sk_buff * tail
       248 |   __u8[0] __cacheline_group_end__sock_write_rx
       248 |   __u8[0] __cacheline_group_begin__sock_read_rx
       248 |   struct dst_entry * sk_rx_dst
       252 |   int sk_rx_dst_ifindex
       256 |   u32 sk_rx_dst_cookie
       260 |   unsigned int sk_ll_usec
       264 |   unsigned int sk_napi_id
       268 |   u16 sk_busy_poll_budget
       270 |   u8 sk_prefer_busy_poll
       271 |   u8 sk_userlocks
       272 |   int sk_rcvbuf
       276 |   struct sk_filter * sk_filter
       280 |   union sock::(anonymous at ../include/net/sock.h:421:2) 
       280 |     struct socket_wq * sk_wq
       280 |     struct socket_wq * sk_wq_raw
       284 |   void (*)(struct sock *) sk_data_ready
       288 |   long sk_rcvtimeo
       292 |   int sk_rcvlowat
       296 |   __u8[0] __cacheline_group_end__sock_read_rx
       296 |   __u8[0] __cacheline_group_begin__sock_read_rxtx
       296 |   int sk_err
       300 |   struct socket * sk_socket
       304 |   struct mem_cgroup * sk_memcg
       308 |   struct xfrm_policy *[2] sk_policy
       316 |   __u8[0] __cacheline_group_end__sock_read_rxtx
       316 |   __u8[0] __cacheline_group_begin__sock_write_rxtx
       316 |   socket_lock_t sk_lock
       316 |     struct spinlock slock
       316 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       316 |         struct raw_spinlock rlock
       316 |           arch_spinlock_t raw_lock
       316 |             volatile unsigned int slock
       320 |           unsigned int magic
       324 |           unsigned int owner_cpu
       328 |           void * owner
       332 |           struct lockdep_map dep_map
       332 |             struct lock_class_key * key
       336 |             struct lock_class *[2] class_cache
       344 |             const char * name
       348 |             u8 wait_type_outer
       349 |             u8 wait_type_inner
       350 |             u8 lock_type
       352 |             int cpu
       356 |             unsigned long ip
       316 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       316 |           u8[16] __padding
       332 |           struct lockdep_map dep_map
       332 |             struct lock_class_key * key
       336 |             struct lock_class *[2] class_cache
       344 |             const char * name
       348 |             u8 wait_type_outer
       349 |             u8 wait_type_inner
       350 |             u8 lock_type
       352 |             int cpu
       356 |             unsigned long ip
       360 |     int owned
       364 |     struct wait_queue_head wq
       364 |       struct spinlock lock
       364 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       364 |           struct raw_spinlock rlock
       364 |             arch_spinlock_t raw_lock
       364 |               volatile unsigned int slock
       368 |             unsigned int magic
       372 |             unsigned int owner_cpu
       376 |             void * owner
       380 |             struct lockdep_map dep_map
       380 |               struct lock_class_key * key
       384 |               struct lock_class *[2] class_cache
       392 |               const char * name
       396 |               u8 wait_type_outer
       397 |               u8 wait_type_inner
       398 |               u8 lock_type
       400 |               int cpu
       404 |               unsigned long ip
       364 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       364 |             u8[16] __padding
       380 |             struct lockdep_map dep_map
       380 |               struct lock_class_key * key
       384 |               struct lock_class *[2] class_cache
       392 |               const char * name
       396 |               u8 wait_type_outer
       397 |               u8 wait_type_inner
       398 |               u8 lock_type
       400 |               int cpu
       404 |               unsigned long ip
       408 |       struct list_head head
       408 |         struct list_head * next
       412 |         struct list_head * prev
       416 |     struct lockdep_map dep_map
       416 |       struct lock_class_key * key
       420 |       struct lock_class *[2] class_cache
       428 |       const char * name
       432 |       u8 wait_type_outer
       433 |       u8 wait_type_inner
       434 |       u8 lock_type
       436 |       int cpu
       440 |       unsigned long ip
       444 |   u32 sk_reserved_mem
       448 |   int sk_forward_alloc
       452 |   u32 sk_tsflags
       456 |   __u8[0] __cacheline_group_end__sock_write_rxtx
       456 |   __u8[0] __cacheline_group_begin__sock_write_tx
       456 |   int sk_write_pending
       460 |   atomic_t sk_omem_alloc
       460 |     int counter
       464 |   int sk_sndbuf
       468 |   int sk_wmem_queued
       472 |   struct refcount_struct sk_wmem_alloc
       472 |     atomic_t refs
       472 |       int counter
       476 |   unsigned long sk_tsq_flags
       480 |   union sock::(anonymous at ../include/net/sock.h:457:2) 
       480 |     struct sk_buff * sk_send_head
       480 |     struct rb_root tcp_rtx_queue
       480 |       struct rb_node * rb_node
       484 |   struct sk_buff_head sk_write_queue
       484 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |         struct sk_buff * next
       488 |         struct sk_buff * prev
       484 |       struct sk_buff_list list
       484 |         struct sk_buff * next
       488 |         struct sk_buff * prev
       492 |     __u32 qlen
       496 |     struct spinlock lock
       496 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       496 |         struct raw_spinlock rlock
       496 |           arch_spinlock_t raw_lock
       496 |             volatile unsigned int slock
       500 |           unsigned int magic
       504 |           unsigned int owner_cpu
       508 |           void * owner
       512 |           struct lockdep_map dep_map
       512 |             struct lock_class_key * key
       516 |             struct lock_class *[2] class_cache
       524 |             const char * name
       528 |             u8 wait_type_outer
       529 |             u8 wait_type_inner
       530 |             u8 lock_type
       532 |             int cpu
       536 |             unsigned long ip
       496 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       496 |           u8[16] __padding
       512 |           struct lockdep_map dep_map
       512 |             struct lock_class_key * key
       516 |             struct lock_class *[2] class_cache
       524 |             const char * name
       528 |             u8 wait_type_outer
       529 |             u8 wait_type_inner
       530 |             u8 lock_type
       532 |             int cpu
       536 |             unsigned long ip
       540 |   u32 sk_dst_pending_confirm
       544 |   u32 sk_pacing_status
       548 |   struct page_frag sk_frag
       548 |     struct page * page
       552 |     __u16 offset
       554 |     __u16 size
       556 |   struct timer_list sk_timer
       556 |     struct hlist_node entry
       556 |       struct hlist_node * next
       560 |       struct hlist_node ** pprev
       564 |     unsigned long expires
       568 |     void (*)(struct timer_list *) function
       572 |     u32 flags
       576 |     struct lockdep_map lockdep_map
       576 |       struct lock_class_key * key
       580 |       struct lock_class *[2] class_cache
       588 |       const char * name
       592 |       u8 wait_type_outer
       593 |       u8 wait_type_inner
       594 |       u8 lock_type
       596 |       int cpu
       600 |       unsigned long ip
       604 |   unsigned long sk_pacing_rate
       608 |   atomic_t sk_zckey
       608 |     int counter
       612 |   atomic_t sk_tskey
       612 |     int counter
       616 |   __u8[0] __cacheline_group_end__sock_write_tx
       616 |   __u8[0] __cacheline_group_begin__sock_read_tx
       616 |   unsigned long sk_max_pacing_rate
       620 |   long sk_sndtimeo
       624 |   u32 sk_priority
       628 |   u32 sk_mark
       632 |   struct dst_entry * sk_dst_cache
       640 |   netdev_features_t sk_route_caps
       648 |   struct sk_buff *(*)(struct sock *, struct net_device *, struct sk_buff *) sk_validate_xmit_skb
       652 |   u16 sk_gso_type
       654 |   u16 sk_gso_max_segs
       656 |   unsigned int sk_gso_max_size
       660 |   gfp_t sk_allocation
       664 |   u32 sk_txhash
       668 |   u8 sk_pacing_shift
       669 |   bool sk_use_task_frag
       670 |   __u8[0] __cacheline_group_end__sock_read_tx
   670:0-0 |   u8 sk_gso_disabled
   670:1-1 |   u8 sk_kern_sock
   670:2-2 |   u8 sk_no_check_tx
   670:3-3 |   u8 sk_no_check_rx
       671 |   u8 sk_shutdown
       672 |   u16 sk_type
       674 |   u16 sk_protocol
       676 |   unsigned long sk_lingertime
       680 |   struct proto * sk_prot_creator
       684 |   rwlock_t sk_callback_lock
       684 |     arch_rwlock_t raw_lock
       684 |     unsigned int magic
       688 |     unsigned int owner_cpu
       692 |     void * owner
       696 |     struct lockdep_map dep_map
       696 |       struct lock_class_key * key
       700 |       struct lock_class *[2] class_cache
       708 |       const char * name
       712 |       u8 wait_type_outer
       713 |       u8 wait_type_inner
       714 |       u8 lock_type
       716 |       int cpu
       720 |       unsigned long ip
       724 |   int sk_err_soft
       728 |   u32 sk_ack_backlog
       732 |   u32 sk_max_ack_backlog
       736 |   kuid_t sk_uid
       736 |     uid_t val
       740 |   struct spinlock sk_peer_lock
       740 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       740 |       struct raw_spinlock rlock
       740 |         arch_spinlock_t raw_lock
       740 |           volatile unsigned int slock
       744 |         unsigned int magic
       748 |         unsigned int owner_cpu
       752 |         void * owner
       756 |         struct lockdep_map dep_map
       756 |           struct lock_class_key * key
       760 |           struct lock_class *[2] class_cache
       768 |           const char * name
       772 |           u8 wait_type_outer
       773 |           u8 wait_type_inner
       774 |           u8 lock_type
       776 |           int cpu
       780 |           unsigned long ip
       740 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       740 |         u8[16] __padding
       756 |         struct lockdep_map dep_map
       756 |           struct lock_class_key * key
       760 |           struct lock_class *[2] class_cache
       768 |           const char * name
       772 |           u8 wait_type_outer
       773 |           u8 wait_type_inner
       774 |           u8 lock_type
       776 |           int cpu
       780 |           unsigned long ip
       784 |   int sk_bind_phc
       788 |   struct pid * sk_peer_pid
       792 |   const struct cred * sk_peer_cred
       800 |   ktime_t sk_stamp
       808 |   seqlock_t sk_stamp_seq
       808 |     struct seqcount_spinlock seqcount
       808 |       struct seqcount seqcount
       808 |         unsigned int sequence
       812 |         struct lockdep_map dep_map
       812 |           struct lock_class_key * key
       816 |           struct lock_class *[2] class_cache
       824 |           const char * name
       828 |           u8 wait_type_outer
       829 |           u8 wait_type_inner
       830 |           u8 lock_type
       832 |           int cpu
       836 |           unsigned long ip
       840 |       spinlock_t * lock
       844 |     struct spinlock lock
       844 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |         struct raw_spinlock rlock
       844 |           arch_spinlock_t raw_lock
       844 |             volatile unsigned int slock
       848 |           unsigned int magic
       852 |           unsigned int owner_cpu
       856 |           void * owner
       860 |           struct lockdep_map dep_map
       860 |             struct lock_class_key * key
       864 |             struct lock_class *[2] class_cache
       872 |             const char * name
       876 |             u8 wait_type_outer
       877 |             u8 wait_type_inner
       878 |             u8 lock_type
       880 |             int cpu
       884 |             unsigned long ip
       844 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |           u8[16] __padding
       860 |           struct lockdep_map dep_map
       860 |             struct lock_class_key * key
       864 |             struct lock_class *[2] class_cache
       872 |             const char * name
       876 |             u8 wait_type_outer
       877 |             u8 wait_type_inner
       878 |             u8 lock_type
       880 |             int cpu
       884 |             unsigned long ip
       888 |   int sk_disconnects
       892 |   u8 sk_txrehash
       893 |   u8 sk_clockid
   894:0-0 |   u8 sk_txtime_deadline_mode
   894:1-1 |   u8 sk_txtime_report_errors
   894:2-7 |   u8 sk_txtime_unused
       896 |   void * sk_user_data
       900 |   struct sock_cgroup_data sk_cgrp_data
       900 |   void (*)(struct sock *) sk_state_change
       904 |   void (*)(struct sock *) sk_write_space
       908 |   void (*)(struct sock *) sk_error_report
       912 |   int (*)(struct sock *, struct sk_buff *) sk_backlog_rcv
       916 |   void (*)(struct sock *) sk_destruct
       920 |   struct sock_reuseport * sk_reuseport_cb
       924 |   struct bpf_local_storage * sk_bpf_storage
       928 |   struct callback_head sk_rcu
       928 |     struct callback_head * next
       932 |     void (*)(struct callback_head *) func
       936 |   netns_tracker ns_tracker
           | [sizeof=944, align=8]

*** Dumping AST Record Layout
         0 | struct socket_alloc
         0 |   struct socket socket
         0 |     socket_state state
         4 |     short type
         8 |     unsigned long flags
        12 |     struct file * file
        16 |     struct sock * sk
        20 |     const struct proto_ops * ops
        24 |     struct socket_wq wq
        24 |       struct wait_queue_head wait
        24 |         struct spinlock lock
        24 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        24 |             struct raw_spinlock rlock
        24 |               arch_spinlock_t raw_lock
        24 |                 volatile unsigned int slock
        28 |               unsigned int magic
        32 |               unsigned int owner_cpu
        36 |               void * owner
        40 |               struct lockdep_map dep_map
        40 |                 struct lock_class_key * key
        44 |                 struct lock_class *[2] class_cache
        52 |                 const char * name
        56 |                 u8 wait_type_outer
        57 |                 u8 wait_type_inner
        58 |                 u8 lock_type
        60 |                 int cpu
        64 |                 unsigned long ip
        24 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        24 |               u8[16] __padding
        40 |               struct lockdep_map dep_map
        40 |                 struct lock_class_key * key
        44 |                 struct lock_class *[2] class_cache
        52 |                 const char * name
        56 |                 u8 wait_type_outer
        57 |                 u8 wait_type_inner
        58 |                 u8 lock_type
        60 |                 int cpu
        64 |                 unsigned long ip
        68 |         struct list_head head
        68 |           struct list_head * next
        72 |           struct list_head * prev
        76 |       struct fasync_struct * fasync_list
        80 |       unsigned long flags
        84 |       struct callback_head rcu
        84 |         struct callback_head * next
        88 |         void (*)(struct callback_head *) func
        96 |   struct inode vfs_inode
        96 |     umode_t i_mode
        98 |     unsigned short i_opflags
       100 |     kuid_t i_uid
       100 |       uid_t val
       104 |     kgid_t i_gid
       104 |       gid_t val
       108 |     unsigned int i_flags
       112 |     struct posix_acl * i_acl
       116 |     struct posix_acl * i_default_acl
       120 |     const struct inode_operations * i_op
       124 |     struct super_block * i_sb
       128 |     struct address_space * i_mapping
       132 |     unsigned long i_ino
       136 |     union inode::(anonymous at ../include/linux/fs.h:661:2) 
       136 |       const unsigned int i_nlink
       136 |       unsigned int __i_nlink
       140 |     dev_t i_rdev
       144 |     loff_t i_size
       152 |     time64_t i_atime_sec
       160 |     time64_t i_mtime_sec
       168 |     time64_t i_ctime_sec
       176 |     u32 i_atime_nsec
       180 |     u32 i_mtime_nsec
       184 |     u32 i_ctime_nsec
       188 |     u32 i_generation
       192 |     struct spinlock i_lock
       192 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       192 |         struct raw_spinlock rlock
       192 |           arch_spinlock_t raw_lock
       192 |             volatile unsigned int slock
       196 |           unsigned int magic
       200 |           unsigned int owner_cpu
       204 |           void * owner
       208 |           struct lockdep_map dep_map
       208 |             struct lock_class_key * key
       212 |             struct lock_class *[2] class_cache
       220 |             const char * name
       224 |             u8 wait_type_outer
       225 |             u8 wait_type_inner
       226 |             u8 lock_type
       228 |             int cpu
       232 |             unsigned long ip
       192 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       192 |           u8[16] __padding
       208 |           struct lockdep_map dep_map
       208 |             struct lock_class_key * key
       212 |             struct lock_class *[2] class_cache
       220 |             const char * name
       224 |             u8 wait_type_outer
       225 |             u8 wait_type_inner
       226 |             u8 lock_type
       228 |             int cpu
       232 |             unsigned long ip
       236 |     unsigned short i_bytes
       238 |     u8 i_blkbits
       239 |     enum rw_hint i_write_hint
       240 |     blkcnt_t i_blocks
       248 |     unsigned long i_state
       252 |     struct rw_semaphore i_rwsem
       252 |       atomic_t count
       252 |         int counter
       256 |       atomic_t owner
       256 |         int counter
       260 |       struct raw_spinlock wait_lock
       260 |         arch_spinlock_t raw_lock
       260 |           volatile unsigned int slock
       264 |         unsigned int magic
       268 |         unsigned int owner_cpu
       272 |         void * owner
       276 |         struct lockdep_map dep_map
       276 |           struct lock_class_key * key
       280 |           struct lock_class *[2] class_cache
       288 |           const char * name
       292 |           u8 wait_type_outer
       293 |           u8 wait_type_inner
       294 |           u8 lock_type
       296 |           int cpu
       300 |           unsigned long ip
       304 |       struct list_head wait_list
       304 |         struct list_head * next
       308 |         struct list_head * prev
       312 |       void * magic
       316 |       struct lockdep_map dep_map
       316 |         struct lock_class_key * key
       320 |         struct lock_class *[2] class_cache
       328 |         const char * name
       332 |         u8 wait_type_outer
       333 |         u8 wait_type_inner
       334 |         u8 lock_type
       336 |         int cpu
       340 |         unsigned long ip
       344 |     unsigned long dirtied_when
       348 |     unsigned long dirtied_time_when
       352 |     struct hlist_node i_hash
       352 |       struct hlist_node * next
       356 |       struct hlist_node ** pprev
       360 |     struct list_head i_io_list
       360 |       struct list_head * next
       364 |       struct list_head * prev
       368 |     struct list_head i_lru
       368 |       struct list_head * next
       372 |       struct list_head * prev
       376 |     struct list_head i_sb_list
       376 |       struct list_head * next
       380 |       struct list_head * prev
       384 |     struct list_head i_wb_list
       384 |       struct list_head * next
       388 |       struct list_head * prev
       392 |     union inode::(anonymous at ../include/linux/fs.h:704:2) 
       392 |       struct hlist_head i_dentry
       392 |         struct hlist_node * first
       392 |       struct callback_head i_rcu
       392 |         struct callback_head * next
       396 |         void (*)(struct callback_head *) func
       400 |     atomic64_t i_version
       400 |       s64 counter
       408 |     atomic64_t i_sequence
       408 |       s64 counter
       416 |     atomic_t i_count
       416 |       int counter
       420 |     atomic_t i_dio_count
       420 |       int counter
       424 |     atomic_t i_writecount
       424 |       int counter
       428 |     atomic_t i_readcount
       428 |       int counter
       432 |     union inode::(anonymous at ../include/linux/fs.h:716:2) 
       432 |       const struct file_operations * i_fop
       432 |       void (*)(struct inode *) free_inode
       436 |     struct file_lock_context * i_flctx
       440 |     struct address_space i_data
       440 |       struct inode * host
       444 |       struct xarray i_pages
       444 |         struct spinlock xa_lock
       444 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       444 |             struct raw_spinlock rlock
       444 |               arch_spinlock_t raw_lock
       444 |                 volatile unsigned int slock
       448 |               unsigned int magic
       452 |               unsigned int owner_cpu
       456 |               void * owner
       460 |               struct lockdep_map dep_map
       460 |                 struct lock_class_key * key
       464 |                 struct lock_class *[2] class_cache
       472 |                 const char * name
       476 |                 u8 wait_type_outer
       477 |                 u8 wait_type_inner
       478 |                 u8 lock_type
       480 |                 int cpu
       484 |                 unsigned long ip
       444 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       444 |               u8[16] __padding
       460 |               struct lockdep_map dep_map
       460 |                 struct lock_class_key * key
       464 |                 struct lock_class *[2] class_cache
       472 |                 const char * name
       476 |                 u8 wait_type_outer
       477 |                 u8 wait_type_inner
       478 |                 u8 lock_type
       480 |                 int cpu
       484 |                 unsigned long ip
       488 |         gfp_t xa_flags
       492 |         void * xa_head
       496 |       struct rw_semaphore invalidate_lock
       496 |         atomic_t count
       496 |           int counter
       500 |         atomic_t owner
       500 |           int counter
       504 |         struct raw_spinlock wait_lock
       504 |           arch_spinlock_t raw_lock
       504 |             volatile unsigned int slock
       508 |           unsigned int magic
       512 |           unsigned int owner_cpu
       516 |           void * owner
       520 |           struct lockdep_map dep_map
       520 |             struct lock_class_key * key
       524 |             struct lock_class *[2] class_cache
       532 |             const char * name
       536 |             u8 wait_type_outer
       537 |             u8 wait_type_inner
       538 |             u8 lock_type
       540 |             int cpu
       544 |             unsigned long ip
       548 |         struct list_head wait_list
       548 |           struct list_head * next
       552 |           struct list_head * prev
       556 |         void * magic
       560 |         struct lockdep_map dep_map
       560 |           struct lock_class_key * key
       564 |           struct lock_class *[2] class_cache
       572 |           const char * name
       576 |           u8 wait_type_outer
       577 |           u8 wait_type_inner
       578 |           u8 lock_type
       580 |           int cpu
       584 |           unsigned long ip
       588 |       gfp_t gfp_mask
       592 |       atomic_t i_mmap_writable
       592 |         int counter
       596 |       struct rb_root_cached i_mmap
       596 |         struct rb_root rb_root
       596 |           struct rb_node * rb_node
       600 |         struct rb_node * rb_leftmost
       604 |       unsigned long nrpages
       608 |       unsigned long writeback_index
       612 |       const struct address_space_operations * a_ops
       616 |       unsigned long flags
       620 |       errseq_t wb_err
       624 |       struct spinlock i_private_lock
       624 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       624 |           struct raw_spinlock rlock
       624 |             arch_spinlock_t raw_lock
       624 |               volatile unsigned int slock
       628 |             unsigned int magic
       632 |             unsigned int owner_cpu
       636 |             void * owner
       640 |             struct lockdep_map dep_map
       640 |               struct lock_class_key * key
       644 |               struct lock_class *[2] class_cache
       652 |               const char * name
       656 |               u8 wait_type_outer
       657 |               u8 wait_type_inner
       658 |               u8 lock_type
       660 |               int cpu
       664 |               unsigned long ip
       624 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       624 |             u8[16] __padding
       640 |             struct lockdep_map dep_map
       640 |               struct lock_class_key * key
       644 |               struct lock_class *[2] class_cache
       652 |               const char * name
       656 |               u8 wait_type_outer
       657 |               u8 wait_type_inner
       658 |               u8 lock_type
       660 |               int cpu
       664 |               unsigned long ip
       668 |       struct list_head i_private_list
       668 |         struct list_head * next
       672 |         struct list_head * prev
       676 |       struct rw_semaphore i_mmap_rwsem
       676 |         atomic_t count
       676 |           int counter
       680 |         atomic_t owner
       680 |           int counter
       684 |         struct raw_spinlock wait_lock
       684 |           arch_spinlock_t raw_lock
       684 |             volatile unsigned int slock
       688 |           unsigned int magic
       692 |           unsigned int owner_cpu
       696 |           void * owner
       700 |           struct lockdep_map dep_map
       700 |             struct lock_class_key * key
       704 |             struct lock_class *[2] class_cache
       712 |             const char * name
       716 |             u8 wait_type_outer
       717 |             u8 wait_type_inner
       718 |             u8 lock_type
       720 |             int cpu
       724 |             unsigned long ip
       728 |         struct list_head wait_list
       728 |           struct list_head * next
       732 |           struct list_head * prev
       736 |         void * magic
       740 |         struct lockdep_map dep_map
       740 |           struct lock_class_key * key
       744 |           struct lock_class *[2] class_cache
       752 |           const char * name
       756 |           u8 wait_type_outer
       757 |           u8 wait_type_inner
       758 |           u8 lock_type
       760 |           int cpu
       764 |           unsigned long ip
       768 |       void * i_private_data
       772 |     struct list_head i_devices
       772 |       struct list_head * next
       776 |       struct list_head * prev
       780 |     union inode::(anonymous at ../include/linux/fs.h:723:2) 
       780 |       struct pipe_inode_info * i_pipe
       780 |       struct cdev * i_cdev
       780 |       char * i_link
       780 |       unsigned int i_dir_seq
       784 |     __u32 i_fsnotify_mask
       788 |     struct fsnotify_mark_connector * i_fsnotify_marks
       792 |     struct fscrypt_inode_info * i_crypt_info
       796 |     void * i_private
           | [sizeof=800, align=8]

*** Dumping AST Record Layout
         0 | struct sock_skb_cb
         0 |   u32 dropcount
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct sockaddr_in
         0 |   __kernel_sa_family_t sin_family
         2 |   __be16 sin_port
         4 |   struct in_addr sin_addr
         4 |     __be32 s_addr
         8 |   unsigned char[8] __pad
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union mptcp_subflow_addrs::(anonymous at ../include/uapi/linux/mptcp.h:84:2)
         0 |   __kernel_sa_family_t sa_family
         0 |   struct sockaddr sa_local
         0 |     sa_family_t sa_family
         2 |     union sockaddr::(anonymous at ../include/linux/socket.h:37:2) 
         2 |       char[14] sa_data_min
         2 |       struct sockaddr::(anonymous at ../include/linux/socket.h:39:3) 
         2 |         struct sockaddr::(unnamed at ../include/linux/socket.h:39:3) __empty_sa_data
         2 |         char[] sa_data
         0 |   struct sockaddr_in sin_local
         0 |     __kernel_sa_family_t sin_family
         2 |     __be16 sin_port
         4 |     struct in_addr sin_addr
         4 |       __be32 s_addr
         8 |     unsigned char[8] __pad
         0 |   struct sockaddr_in6 sin6_local
         0 |     unsigned short sin6_family
         2 |     __be16 sin6_port
         4 |     __be32 sin6_flowinfo
         8 |     struct in6_addr sin6_addr
         8 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         8 |         __u8[16] u6_addr8
         8 |         __be16[8] u6_addr16
         8 |         __be32[4] u6_addr32
        24 |     __u32 sin6_scope_id
         0 |   struct __kernel_sockaddr_storage ss_local
         0 |     union __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:17:2) 
         0 |       struct __kernel_sockaddr_storage::(anonymous at ../include/uapi/linux/socket.h:18:3) 
         0 |         __kernel_sa_family_t ss_family
         2 |         char[126] __data
         0 |       void * __align
           | [sizeof=128, align=4]

*** Dumping AST Record Layout
         0 | struct request_sock
         0 |   struct sock_common __req_common
         0 |     union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |       __addrpair skc_addrpair
         0 |       struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |         __be32 skc_daddr
         4 |         __be32 skc_rcv_saddr
         8 |     union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |       unsigned int skc_hash
         8 |       __u16[2] skc_u16hashes
        12 |     union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |       __portpair skc_portpair
        12 |       struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |         __be16 skc_dport
        14 |         __u16 skc_num
        16 |     unsigned short skc_family
        18 |     volatile unsigned char skc_state
    19:0-3 |     unsigned char skc_reuse
    19:4-4 |     unsigned char skc_reuseport
    19:5-5 |     unsigned char skc_ipv6only
    19:6-6 |     unsigned char skc_net_refcnt
        20 |     int skc_bound_dev_if
        24 |     union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |       struct hlist_node skc_bind_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        24 |       struct hlist_node skc_portaddr_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        32 |     struct proto * skc_prot
        36 |     possible_net_t skc_net
        36 |       struct net * net
        40 |     struct in6_addr skc_v6_daddr
        40 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |         __u8[16] u6_addr8
        40 |         __be16[8] u6_addr16
        40 |         __be32[4] u6_addr32
        56 |     struct in6_addr skc_v6_rcv_saddr
        56 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |         __u8[16] u6_addr8
        56 |         __be16[8] u6_addr16
        56 |         __be32[4] u6_addr32
        72 |     atomic64_t skc_cookie
        72 |       s64 counter
        80 |     union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |       unsigned long skc_flags
        80 |       struct sock * skc_listener
        80 |       struct inet_timewait_death_row * skc_tw_dr
        84 |     int[0] skc_dontcopy_begin
        84 |     union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |       struct hlist_node skc_node
        84 |         struct hlist_node * next
        88 |         struct hlist_node ** pprev
        84 |       struct hlist_nulls_node skc_nulls_node
        84 |         struct hlist_nulls_node * next
        88 |         struct hlist_nulls_node ** pprev
        92 |     unsigned short skc_tx_queue_mapping
        94 |     unsigned short skc_rx_queue_mapping
        96 |     union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |       int skc_incoming_cpu
        96 |       u32 skc_rcv_wnd
        96 |       u32 skc_tw_rcv_nxt
       100 |     struct refcount_struct skc_refcnt
       100 |       atomic_t refs
       100 |         int counter
       104 |     int[0] skc_dontcopy_end
       104 |     union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |       u32 skc_rxhash
       104 |       u32 skc_window_clamp
       104 |       u32 skc_tw_snd_nxt
       112 |   struct request_sock * dl_next
       116 |   u16 mss
       118 |   u8 num_retrans
   119:0-0 |   u8 syncookie
   119:1-7 |   u8 num_timeout
       120 |   u32 ts_recent
       124 |   struct timer_list rsk_timer
       124 |     struct hlist_node entry
       124 |       struct hlist_node * next
       128 |       struct hlist_node ** pprev
       132 |     unsigned long expires
       136 |     void (*)(struct timer_list *) function
       140 |     u32 flags
       144 |     struct lockdep_map lockdep_map
       144 |       struct lock_class_key * key
       148 |       struct lock_class *[2] class_cache
       156 |       const char * name
       160 |       u8 wait_type_outer
       161 |       u8 wait_type_inner
       162 |       u8 lock_type
       164 |       int cpu
       168 |       unsigned long ip
       172 |   const struct request_sock_ops * rsk_ops
       176 |   struct sock * sk
       180 |   struct saved_syn * saved_syn
       184 |   u32 secid
       188 |   u32 peer_secid
       192 |   u32 timeout
           | [sizeof=200, align=8]

*** Dumping AST Record Layout
         0 | struct ip_options
         0 |   __be32 faddr
         4 |   __be32 nexthop
         8 |   unsigned char optlen
         9 |   unsigned char srr
        10 |   unsigned char rr
        11 |   unsigned char ts
    12:0-0 |   unsigned char is_strictroute
    12:1-1 |   unsigned char srr_is_hit
    12:2-2 |   unsigned char is_changed
    12:3-3 |   unsigned char rr_needaddr
    12:4-4 |   unsigned char ts_needtime
    12:5-5 |   unsigned char ts_needaddr
        13 |   unsigned char router_alert
        14 |   unsigned char cipso
        15 |   unsigned char __pad2
        16 |   unsigned char[] __data
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ip_options_rcu
         0 |   struct callback_head rcu
         0 |     struct callback_head * next
         4 |     void (*)(struct callback_head *) func
         8 |   struct ip_options opt
         8 |     __be32 faddr
        12 |     __be32 nexthop
        16 |     unsigned char optlen
        17 |     unsigned char srr
        18 |     unsigned char rr
        19 |     unsigned char ts
    20:0-0 |     unsigned char is_strictroute
    20:1-1 |     unsigned char srr_is_hit
    20:2-2 |     unsigned char is_changed
    20:3-3 |     unsigned char rr_needaddr
    20:4-4 |     unsigned char ts_needtime
    20:5-5 |     unsigned char ts_needaddr
        21 |     unsigned char router_alert
        22 |     unsigned char cipso
        23 |     unsigned char __pad2
        24 |     unsigned char[] __data
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct inet_cork
         0 |   unsigned int flags
         4 |   __be32 addr
         8 |   struct ip_options * opt
        12 |   unsigned int fragsize
        16 |   int length
        20 |   struct dst_entry * dst
        24 |   u8 tx_flags
        25 |   __u8 ttl
        26 |   __s16 tos
        28 |   char priority
        30 |   __u16 gso_size
        32 |   u64 transmit_time
        40 |   u32 mark
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct inet_cork_full
         0 |   struct inet_cork base
         0 |     unsigned int flags
         4 |     __be32 addr
         8 |     struct ip_options * opt
        12 |     unsigned int fragsize
        16 |     int length
        20 |     struct dst_entry * dst
        24 |     u8 tx_flags
        25 |     __u8 ttl
        26 |     __s16 tos
        28 |     char priority
        30 |     __u16 gso_size
        32 |     u64 transmit_time
        40 |     u32 mark
        48 |   struct flowi fl
        48 |     union flowi::(unnamed at ../include/net/flow.h:155:2) u
        48 |       struct flowi_common __fl_common
        48 |         int flowic_oif
        52 |         int flowic_iif
        56 |         int flowic_l3mdev
        60 |         __u32 flowic_mark
        64 |         __u8 flowic_tos
        65 |         __u8 flowic_scope
        66 |         __u8 flowic_proto
        67 |         __u8 flowic_flags
        68 |         __u32 flowic_secid
        72 |         kuid_t flowic_uid
        72 |           uid_t val
        76 |         __u32 flowic_multipath_hash
        80 |         struct flowi_tunnel flowic_tun_key
        80 |           __be64 tun_id
        48 |       struct flowi4 ip4
        48 |         struct flowi_common __fl_common
        48 |           int flowic_oif
        52 |           int flowic_iif
        56 |           int flowic_l3mdev
        60 |           __u32 flowic_mark
        64 |           __u8 flowic_tos
        65 |           __u8 flowic_scope
        66 |           __u8 flowic_proto
        67 |           __u8 flowic_flags
        68 |           __u32 flowic_secid
        72 |           kuid_t flowic_uid
        72 |             uid_t val
        76 |           __u32 flowic_multipath_hash
        80 |           struct flowi_tunnel flowic_tun_key
        80 |             __be64 tun_id
        88 |         __be32 saddr
        92 |         __be32 daddr
        96 |         union flowi_uli uli
        96 |           struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
        96 |             __be16 dport
        98 |             __be16 sport
        96 |           struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
        96 |             __u8 type
        97 |             __u8 code
        96 |           __be32 gre_key
        96 |           struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
        96 |             __u8 type
        48 |       struct flowi6 ip6
        48 |         struct flowi_common __fl_common
        48 |           int flowic_oif
        52 |           int flowic_iif
        56 |           int flowic_l3mdev
        60 |           __u32 flowic_mark
        64 |           __u8 flowic_tos
        65 |           __u8 flowic_scope
        66 |           __u8 flowic_proto
        67 |           __u8 flowic_flags
        68 |           __u32 flowic_secid
        72 |           kuid_t flowic_uid
        72 |             uid_t val
        76 |           __u32 flowic_multipath_hash
        80 |           struct flowi_tunnel flowic_tun_key
        80 |             __be64 tun_id
        88 |         struct in6_addr daddr
        88 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        88 |             __u8[16] u6_addr8
        88 |             __be16[8] u6_addr16
        88 |             __be32[4] u6_addr32
       104 |         struct in6_addr saddr
       104 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
       104 |             __u8[16] u6_addr8
       104 |             __be16[8] u6_addr16
       104 |             __be32[4] u6_addr32
       120 |         __be32 flowlabel
       124 |         union flowi_uli uli
       124 |           struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
       124 |             __be16 dport
       126 |             __be16 sport
       124 |           struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
       124 |             __u8 type
       125 |             __u8 code
       124 |           __be32 gre_key
       124 |           struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
       124 |             __u8 type
       128 |         __u32 mp_hash
           | [sizeof=136, align=8]

*** Dumping AST Record Layout
         0 | struct inet_sock
         0 |   struct sock sk
         0 |     struct sock_common __sk_common
         0 |       union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |         __addrpair skc_addrpair
         0 |         struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |           __be32 skc_daddr
         4 |           __be32 skc_rcv_saddr
         8 |       union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |         unsigned int skc_hash
         8 |         __u16[2] skc_u16hashes
        12 |       union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |         __portpair skc_portpair
        12 |         struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |           __be16 skc_dport
        14 |           __u16 skc_num
        16 |       unsigned short skc_family
        18 |       volatile unsigned char skc_state
    19:0-3 |       unsigned char skc_reuse
    19:4-4 |       unsigned char skc_reuseport
    19:5-5 |       unsigned char skc_ipv6only
    19:6-6 |       unsigned char skc_net_refcnt
        20 |       int skc_bound_dev_if
        24 |       union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |         struct hlist_node skc_bind_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        24 |         struct hlist_node skc_portaddr_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        32 |       struct proto * skc_prot
        36 |       possible_net_t skc_net
        36 |         struct net * net
        40 |       struct in6_addr skc_v6_daddr
        40 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |           __u8[16] u6_addr8
        40 |           __be16[8] u6_addr16
        40 |           __be32[4] u6_addr32
        56 |       struct in6_addr skc_v6_rcv_saddr
        56 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |           __u8[16] u6_addr8
        56 |           __be16[8] u6_addr16
        56 |           __be32[4] u6_addr32
        72 |       atomic64_t skc_cookie
        72 |         s64 counter
        80 |       union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |         unsigned long skc_flags
        80 |         struct sock * skc_listener
        80 |         struct inet_timewait_death_row * skc_tw_dr
        84 |       int[0] skc_dontcopy_begin
        84 |       union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |         struct hlist_node skc_node
        84 |           struct hlist_node * next
        88 |           struct hlist_node ** pprev
        84 |         struct hlist_nulls_node skc_nulls_node
        84 |           struct hlist_nulls_node * next
        88 |           struct hlist_nulls_node ** pprev
        92 |       unsigned short skc_tx_queue_mapping
        94 |       unsigned short skc_rx_queue_mapping
        96 |       union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |         int skc_incoming_cpu
        96 |         u32 skc_rcv_wnd
        96 |         u32 skc_tw_rcv_nxt
       100 |       struct refcount_struct skc_refcnt
       100 |         atomic_t refs
       100 |           int counter
       104 |       int[0] skc_dontcopy_end
       104 |       union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |         u32 skc_rxhash
       104 |         u32 skc_window_clamp
       104 |         u32 skc_tw_snd_nxt
       112 |     __u8[0] __cacheline_group_begin__sock_write_rx
       112 |     atomic_t sk_drops
       112 |       int counter
       116 |     __s32 sk_peek_off
       120 |     struct sk_buff_head sk_error_queue
       120 |       union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |         struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |           struct sk_buff * next
       124 |           struct sk_buff * prev
       120 |         struct sk_buff_list list
       120 |           struct sk_buff * next
       124 |           struct sk_buff * prev
       128 |       __u32 qlen
       132 |       struct spinlock lock
       132 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       132 |           struct raw_spinlock rlock
       132 |             arch_spinlock_t raw_lock
       132 |               volatile unsigned int slock
       136 |             unsigned int magic
       140 |             unsigned int owner_cpu
       144 |             void * owner
       148 |             struct lockdep_map dep_map
       148 |               struct lock_class_key * key
       152 |               struct lock_class *[2] class_cache
       160 |               const char * name
       164 |               u8 wait_type_outer
       165 |               u8 wait_type_inner
       166 |               u8 lock_type
       168 |               int cpu
       172 |               unsigned long ip
       132 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       132 |             u8[16] __padding
       148 |             struct lockdep_map dep_map
       148 |               struct lock_class_key * key
       152 |               struct lock_class *[2] class_cache
       160 |               const char * name
       164 |               u8 wait_type_outer
       165 |               u8 wait_type_inner
       166 |               u8 lock_type
       168 |               int cpu
       172 |               unsigned long ip
       176 |     struct sk_buff_head sk_receive_queue
       176 |       union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |         struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |           struct sk_buff * next
       180 |           struct sk_buff * prev
       176 |         struct sk_buff_list list
       176 |           struct sk_buff * next
       180 |           struct sk_buff * prev
       184 |       __u32 qlen
       188 |       struct spinlock lock
       188 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       188 |           struct raw_spinlock rlock
       188 |             arch_spinlock_t raw_lock
       188 |               volatile unsigned int slock
       192 |             unsigned int magic
       196 |             unsigned int owner_cpu
       200 |             void * owner
       204 |             struct lockdep_map dep_map
       204 |               struct lock_class_key * key
       208 |               struct lock_class *[2] class_cache
       216 |               const char * name
       220 |               u8 wait_type_outer
       221 |               u8 wait_type_inner
       222 |               u8 lock_type
       224 |               int cpu
       228 |               unsigned long ip
       188 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       188 |             u8[16] __padding
       204 |             struct lockdep_map dep_map
       204 |               struct lock_class_key * key
       208 |               struct lock_class *[2] class_cache
       216 |               const char * name
       220 |               u8 wait_type_outer
       221 |               u8 wait_type_inner
       222 |               u8 lock_type
       224 |               int cpu
       228 |               unsigned long ip
       232 |     struct sock::(unnamed at ../include/net/sock.h:395:2) sk_backlog
       232 |       atomic_t rmem_alloc
       232 |         int counter
       236 |       int len
       240 |       struct sk_buff * head
       244 |       struct sk_buff * tail
       248 |     __u8[0] __cacheline_group_end__sock_write_rx
       248 |     __u8[0] __cacheline_group_begin__sock_read_rx
       248 |     struct dst_entry * sk_rx_dst
       252 |     int sk_rx_dst_ifindex
       256 |     u32 sk_rx_dst_cookie
       260 |     unsigned int sk_ll_usec
       264 |     unsigned int sk_napi_id
       268 |     u16 sk_busy_poll_budget
       270 |     u8 sk_prefer_busy_poll
       271 |     u8 sk_userlocks
       272 |     int sk_rcvbuf
       276 |     struct sk_filter * sk_filter
       280 |     union sock::(anonymous at ../include/net/sock.h:421:2) 
       280 |       struct socket_wq * sk_wq
       280 |       struct socket_wq * sk_wq_raw
       284 |     void (*)(struct sock *) sk_data_ready
       288 |     long sk_rcvtimeo
       292 |     int sk_rcvlowat
       296 |     __u8[0] __cacheline_group_end__sock_read_rx
       296 |     __u8[0] __cacheline_group_begin__sock_read_rxtx
       296 |     int sk_err
       300 |     struct socket * sk_socket
       304 |     struct mem_cgroup * sk_memcg
       308 |     struct xfrm_policy *[2] sk_policy
       316 |     __u8[0] __cacheline_group_end__sock_read_rxtx
       316 |     __u8[0] __cacheline_group_begin__sock_write_rxtx
       316 |     socket_lock_t sk_lock
       316 |       struct spinlock slock
       316 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       316 |           struct raw_spinlock rlock
       316 |             arch_spinlock_t raw_lock
       316 |               volatile unsigned int slock
       320 |             unsigned int magic
       324 |             unsigned int owner_cpu
       328 |             void * owner
       332 |             struct lockdep_map dep_map
       332 |               struct lock_class_key * key
       336 |               struct lock_class *[2] class_cache
       344 |               const char * name
       348 |               u8 wait_type_outer
       349 |               u8 wait_type_inner
       350 |               u8 lock_type
       352 |               int cpu
       356 |               unsigned long ip
       316 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       316 |             u8[16] __padding
       332 |             struct lockdep_map dep_map
       332 |               struct lock_class_key * key
       336 |               struct lock_class *[2] class_cache
       344 |               const char * name
       348 |               u8 wait_type_outer
       349 |               u8 wait_type_inner
       350 |               u8 lock_type
       352 |               int cpu
       356 |               unsigned long ip
       360 |       int owned
       364 |       struct wait_queue_head wq
       364 |         struct spinlock lock
       364 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       364 |             struct raw_spinlock rlock
       364 |               arch_spinlock_t raw_lock
       364 |                 volatile unsigned int slock
       368 |               unsigned int magic
       372 |               unsigned int owner_cpu
       376 |               void * owner
       380 |               struct lockdep_map dep_map
       380 |                 struct lock_class_key * key
       384 |                 struct lock_class *[2] class_cache
       392 |                 const char * name
       396 |                 u8 wait_type_outer
       397 |                 u8 wait_type_inner
       398 |                 u8 lock_type
       400 |                 int cpu
       404 |                 unsigned long ip
       364 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       364 |               u8[16] __padding
       380 |               struct lockdep_map dep_map
       380 |                 struct lock_class_key * key
       384 |                 struct lock_class *[2] class_cache
       392 |                 const char * name
       396 |                 u8 wait_type_outer
       397 |                 u8 wait_type_inner
       398 |                 u8 lock_type
       400 |                 int cpu
       404 |                 unsigned long ip
       408 |         struct list_head head
       408 |           struct list_head * next
       412 |           struct list_head * prev
       416 |       struct lockdep_map dep_map
       416 |         struct lock_class_key * key
       420 |         struct lock_class *[2] class_cache
       428 |         const char * name
       432 |         u8 wait_type_outer
       433 |         u8 wait_type_inner
       434 |         u8 lock_type
       436 |         int cpu
       440 |         unsigned long ip
       444 |     u32 sk_reserved_mem
       448 |     int sk_forward_alloc
       452 |     u32 sk_tsflags
       456 |     __u8[0] __cacheline_group_end__sock_write_rxtx
       456 |     __u8[0] __cacheline_group_begin__sock_write_tx
       456 |     int sk_write_pending
       460 |     atomic_t sk_omem_alloc
       460 |       int counter
       464 |     int sk_sndbuf
       468 |     int sk_wmem_queued
       472 |     struct refcount_struct sk_wmem_alloc
       472 |       atomic_t refs
       472 |         int counter
       476 |     unsigned long sk_tsq_flags
       480 |     union sock::(anonymous at ../include/net/sock.h:457:2) 
       480 |       struct sk_buff * sk_send_head
       480 |       struct rb_root tcp_rtx_queue
       480 |         struct rb_node * rb_node
       484 |     struct sk_buff_head sk_write_queue
       484 |       union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |         struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |           struct sk_buff * next
       488 |           struct sk_buff * prev
       484 |         struct sk_buff_list list
       484 |           struct sk_buff * next
       488 |           struct sk_buff * prev
       492 |       __u32 qlen
       496 |       struct spinlock lock
       496 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       496 |           struct raw_spinlock rlock
       496 |             arch_spinlock_t raw_lock
       496 |               volatile unsigned int slock
       500 |             unsigned int magic
       504 |             unsigned int owner_cpu
       508 |             void * owner
       512 |             struct lockdep_map dep_map
       512 |               struct lock_class_key * key
       516 |               struct lock_class *[2] class_cache
       524 |               const char * name
       528 |               u8 wait_type_outer
       529 |               u8 wait_type_inner
       530 |               u8 lock_type
       532 |               int cpu
       536 |               unsigned long ip
       496 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       496 |             u8[16] __padding
       512 |             struct lockdep_map dep_map
       512 |               struct lock_class_key * key
       516 |               struct lock_class *[2] class_cache
       524 |               const char * name
       528 |               u8 wait_type_outer
       529 |               u8 wait_type_inner
       530 |               u8 lock_type
       532 |               int cpu
       536 |               unsigned long ip
       540 |     u32 sk_dst_pending_confirm
       544 |     u32 sk_pacing_status
       548 |     struct page_frag sk_frag
       548 |       struct page * page
       552 |       __u16 offset
       554 |       __u16 size
       556 |     struct timer_list sk_timer
       556 |       struct hlist_node entry
       556 |         struct hlist_node * next
       560 |         struct hlist_node ** pprev
       564 |       unsigned long expires
       568 |       void (*)(struct timer_list *) function
       572 |       u32 flags
       576 |       struct lockdep_map lockdep_map
       576 |         struct lock_class_key * key
       580 |         struct lock_class *[2] class_cache
       588 |         const char * name
       592 |         u8 wait_type_outer
       593 |         u8 wait_type_inner
       594 |         u8 lock_type
       596 |         int cpu
       600 |         unsigned long ip
       604 |     unsigned long sk_pacing_rate
       608 |     atomic_t sk_zckey
       608 |       int counter
       612 |     atomic_t sk_tskey
       612 |       int counter
       616 |     __u8[0] __cacheline_group_end__sock_write_tx
       616 |     __u8[0] __cacheline_group_begin__sock_read_tx
       616 |     unsigned long sk_max_pacing_rate
       620 |     long sk_sndtimeo
       624 |     u32 sk_priority
       628 |     u32 sk_mark
       632 |     struct dst_entry * sk_dst_cache
       640 |     netdev_features_t sk_route_caps
       648 |     struct sk_buff *(*)(struct sock *, struct net_device *, struct sk_buff *) sk_validate_xmit_skb
       652 |     u16 sk_gso_type
       654 |     u16 sk_gso_max_segs
       656 |     unsigned int sk_gso_max_size
       660 |     gfp_t sk_allocation
       664 |     u32 sk_txhash
       668 |     u8 sk_pacing_shift
       669 |     bool sk_use_task_frag
       670 |     __u8[0] __cacheline_group_end__sock_read_tx
   670:0-0 |     u8 sk_gso_disabled
   670:1-1 |     u8 sk_kern_sock
   670:2-2 |     u8 sk_no_check_tx
   670:3-3 |     u8 sk_no_check_rx
       671 |     u8 sk_shutdown
       672 |     u16 sk_type
       674 |     u16 sk_protocol
       676 |     unsigned long sk_lingertime
       680 |     struct proto * sk_prot_creator
       684 |     rwlock_t sk_callback_lock
       684 |       arch_rwlock_t raw_lock
       684 |       unsigned int magic
       688 |       unsigned int owner_cpu
       692 |       void * owner
       696 |       struct lockdep_map dep_map
       696 |         struct lock_class_key * key
       700 |         struct lock_class *[2] class_cache
       708 |         const char * name
       712 |         u8 wait_type_outer
       713 |         u8 wait_type_inner
       714 |         u8 lock_type
       716 |         int cpu
       720 |         unsigned long ip
       724 |     int sk_err_soft
       728 |     u32 sk_ack_backlog
       732 |     u32 sk_max_ack_backlog
       736 |     kuid_t sk_uid
       736 |       uid_t val
       740 |     struct spinlock sk_peer_lock
       740 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       740 |         struct raw_spinlock rlock
       740 |           arch_spinlock_t raw_lock
       740 |             volatile unsigned int slock
       744 |           unsigned int magic
       748 |           unsigned int owner_cpu
       752 |           void * owner
       756 |           struct lockdep_map dep_map
       756 |             struct lock_class_key * key
       760 |             struct lock_class *[2] class_cache
       768 |             const char * name
       772 |             u8 wait_type_outer
       773 |             u8 wait_type_inner
       774 |             u8 lock_type
       776 |             int cpu
       780 |             unsigned long ip
       740 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       740 |           u8[16] __padding
       756 |           struct lockdep_map dep_map
       756 |             struct lock_class_key * key
       760 |             struct lock_class *[2] class_cache
       768 |             const char * name
       772 |             u8 wait_type_outer
       773 |             u8 wait_type_inner
       774 |             u8 lock_type
       776 |             int cpu
       780 |             unsigned long ip
       784 |     int sk_bind_phc
       788 |     struct pid * sk_peer_pid
       792 |     const struct cred * sk_peer_cred
       800 |     ktime_t sk_stamp
       808 |     seqlock_t sk_stamp_seq
       808 |       struct seqcount_spinlock seqcount
       808 |         struct seqcount seqcount
       808 |           unsigned int sequence
       812 |           struct lockdep_map dep_map
       812 |             struct lock_class_key * key
       816 |             struct lock_class *[2] class_cache
       824 |             const char * name
       828 |             u8 wait_type_outer
       829 |             u8 wait_type_inner
       830 |             u8 lock_type
       832 |             int cpu
       836 |             unsigned long ip
       840 |         spinlock_t * lock
       844 |       struct spinlock lock
       844 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |           struct raw_spinlock rlock
       844 |             arch_spinlock_t raw_lock
       844 |               volatile unsigned int slock
       848 |             unsigned int magic
       852 |             unsigned int owner_cpu
       856 |             void * owner
       860 |             struct lockdep_map dep_map
       860 |               struct lock_class_key * key
       864 |               struct lock_class *[2] class_cache
       872 |               const char * name
       876 |               u8 wait_type_outer
       877 |               u8 wait_type_inner
       878 |               u8 lock_type
       880 |               int cpu
       884 |               unsigned long ip
       844 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |             u8[16] __padding
       860 |             struct lockdep_map dep_map
       860 |               struct lock_class_key * key
       864 |               struct lock_class *[2] class_cache
       872 |               const char * name
       876 |               u8 wait_type_outer
       877 |               u8 wait_type_inner
       878 |               u8 lock_type
       880 |               int cpu
       884 |               unsigned long ip
       888 |     int sk_disconnects
       892 |     u8 sk_txrehash
       893 |     u8 sk_clockid
   894:0-0 |     u8 sk_txtime_deadline_mode
   894:1-1 |     u8 sk_txtime_report_errors
   894:2-7 |     u8 sk_txtime_unused
       896 |     void * sk_user_data
       900 |     struct sock_cgroup_data sk_cgrp_data
       900 |     void (*)(struct sock *) sk_state_change
       904 |     void (*)(struct sock *) sk_write_space
       908 |     void (*)(struct sock *) sk_error_report
       912 |     int (*)(struct sock *, struct sk_buff *) sk_backlog_rcv
       916 |     void (*)(struct sock *) sk_destruct
       920 |     struct sock_reuseport * sk_reuseport_cb
       924 |     struct bpf_local_storage * sk_bpf_storage
       928 |     struct callback_head sk_rcu
       928 |       struct callback_head * next
       932 |       void (*)(struct callback_head *) func
       936 |     netns_tracker ns_tracker
       944 |   struct ipv6_pinfo * pinet6
       948 |   unsigned long inet_flags
       952 |   __be32 inet_saddr
       956 |   __s16 uc_ttl
       958 |   __be16 inet_sport
       960 |   struct ip_options_rcu * inet_opt
       964 |   atomic_t inet_id
       964 |     int counter
       968 |   __u8 tos
       969 |   __u8 min_ttl
       970 |   __u8 mc_ttl
       971 |   __u8 pmtudisc
       972 |   __u8 rcv_tos
       973 |   __u8 convert_csum
       976 |   int uc_index
       980 |   int mc_index
       984 |   __be32 mc_addr
       988 |   u32 local_port_range
       992 |   struct ip_mc_socklist * mc_list
      1000 |   struct inet_cork_full cork
      1000 |     struct inet_cork base
      1000 |       unsigned int flags
      1004 |       __be32 addr
      1008 |       struct ip_options * opt
      1012 |       unsigned int fragsize
      1016 |       int length
      1020 |       struct dst_entry * dst
      1024 |       u8 tx_flags
      1025 |       __u8 ttl
      1026 |       __s16 tos
      1028 |       char priority
      1030 |       __u16 gso_size
      1032 |       u64 transmit_time
      1040 |       u32 mark
      1048 |     struct flowi fl
      1048 |       union flowi::(unnamed at ../include/net/flow.h:155:2) u
      1048 |         struct flowi_common __fl_common
      1048 |           int flowic_oif
      1052 |           int flowic_iif
      1056 |           int flowic_l3mdev
      1060 |           __u32 flowic_mark
      1064 |           __u8 flowic_tos
      1065 |           __u8 flowic_scope
      1066 |           __u8 flowic_proto
      1067 |           __u8 flowic_flags
      1068 |           __u32 flowic_secid
      1072 |           kuid_t flowic_uid
      1072 |             uid_t val
      1076 |           __u32 flowic_multipath_hash
      1080 |           struct flowi_tunnel flowic_tun_key
      1080 |             __be64 tun_id
      1048 |         struct flowi4 ip4
      1048 |           struct flowi_common __fl_common
      1048 |             int flowic_oif
      1052 |             int flowic_iif
      1056 |             int flowic_l3mdev
      1060 |             __u32 flowic_mark
      1064 |             __u8 flowic_tos
      1065 |             __u8 flowic_scope
      1066 |             __u8 flowic_proto
      1067 |             __u8 flowic_flags
      1068 |             __u32 flowic_secid
      1072 |             kuid_t flowic_uid
      1072 |               uid_t val
      1076 |             __u32 flowic_multipath_hash
      1080 |             struct flowi_tunnel flowic_tun_key
      1080 |               __be64 tun_id
      1088 |           __be32 saddr
      1092 |           __be32 daddr
      1096 |           union flowi_uli uli
      1096 |             struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1096 |               __be16 dport
      1098 |               __be16 sport
      1096 |             struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1096 |               __u8 type
      1097 |               __u8 code
      1096 |             __be32 gre_key
      1096 |             struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1096 |               __u8 type
      1048 |         struct flowi6 ip6
      1048 |           struct flowi_common __fl_common
      1048 |             int flowic_oif
      1052 |             int flowic_iif
      1056 |             int flowic_l3mdev
      1060 |             __u32 flowic_mark
      1064 |             __u8 flowic_tos
      1065 |             __u8 flowic_scope
      1066 |             __u8 flowic_proto
      1067 |             __u8 flowic_flags
      1068 |             __u32 flowic_secid
      1072 |             kuid_t flowic_uid
      1072 |               uid_t val
      1076 |             __u32 flowic_multipath_hash
      1080 |             struct flowi_tunnel flowic_tun_key
      1080 |               __be64 tun_id
      1088 |           struct in6_addr daddr
      1088 |             union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1088 |               __u8[16] u6_addr8
      1088 |               __be16[8] u6_addr16
      1088 |               __be32[4] u6_addr32
      1104 |           struct in6_addr saddr
      1104 |             union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1104 |               __u8[16] u6_addr8
      1104 |               __be16[8] u6_addr16
      1104 |               __be32[4] u6_addr32
      1120 |           __be32 flowlabel
      1124 |           union flowi_uli uli
      1124 |             struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1124 |               __be16 dport
      1126 |               __be16 sport
      1124 |             struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1124 |               __u8 type
      1125 |               __u8 code
      1124 |             __be32 gre_key
      1124 |             struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1124 |               __u8 type
      1128 |           __u32 mp_hash
           | [sizeof=1136, align=8]

*** Dumping AST Record Layout
         0 | struct fastopen_queue
         0 |   struct request_sock * rskq_rst_head
         4 |   struct request_sock * rskq_rst_tail
         8 |   struct spinlock lock
         8 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |       struct raw_spinlock rlock
         8 |         arch_spinlock_t raw_lock
         8 |           volatile unsigned int slock
        12 |         unsigned int magic
        16 |         unsigned int owner_cpu
        20 |         void * owner
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
         8 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |         u8[16] __padding
        24 |         struct lockdep_map dep_map
        24 |           struct lock_class_key * key
        28 |           struct lock_class *[2] class_cache
        36 |           const char * name
        40 |           u8 wait_type_outer
        41 |           u8 wait_type_inner
        42 |           u8 lock_type
        44 |           int cpu
        48 |           unsigned long ip
        52 |   int qlen
        56 |   int max_qlen
        60 |   struct tcp_fastopen_context * ctx
           | [sizeof=64, align=4]

*** Dumping AST Record Layout
         0 | struct request_sock_queue
         0 |   struct spinlock rskq_lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   u8 rskq_defer_accept
        48 |   u32 synflood_warned
        52 |   atomic_t qlen
        52 |     int counter
        56 |   atomic_t young
        56 |     int counter
        60 |   struct request_sock * rskq_accept_head
        64 |   struct request_sock * rskq_accept_tail
        68 |   struct fastopen_queue fastopenq
        68 |     struct request_sock * rskq_rst_head
        72 |     struct request_sock * rskq_rst_tail
        76 |     struct spinlock lock
        76 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        76 |         struct raw_spinlock rlock
        76 |           arch_spinlock_t raw_lock
        76 |             volatile unsigned int slock
        80 |           unsigned int magic
        84 |           unsigned int owner_cpu
        88 |           void * owner
        92 |           struct lockdep_map dep_map
        92 |             struct lock_class_key * key
        96 |             struct lock_class *[2] class_cache
       104 |             const char * name
       108 |             u8 wait_type_outer
       109 |             u8 wait_type_inner
       110 |             u8 lock_type
       112 |             int cpu
       116 |             unsigned long ip
        76 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        76 |           u8[16] __padding
        92 |           struct lockdep_map dep_map
        92 |             struct lock_class_key * key
        96 |             struct lock_class *[2] class_cache
       104 |             const char * name
       108 |             u8 wait_type_outer
       109 |             u8 wait_type_inner
       110 |             u8 lock_type
       112 |             int cpu
       116 |             unsigned long ip
       120 |     int qlen
       124 |     int max_qlen
       128 |     struct tcp_fastopen_context * ctx
           | [sizeof=132, align=4]

*** Dumping AST Record Layout
         0 | struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:111:2)
         0 |   __u8 pending
         1 |   __u8 quick
         2 |   __u8 pingpong
         3 |   __u8 retry
     4:0-7 |   __u32 ato
    5:0-19 |   __u32 lrcv_flowlabel
     7:4-7 |   __u32 unused
         8 |   unsigned long timeout
        12 |   __u32 lrcvtime
        16 |   __u16 last_seg_size
        18 |   __u16 rcv_mss
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:125:2)
         0 |   int search_high
         4 |   int search_low
    8:0-30 |   u32 probe_size
    11:7-7 |   u32 enabled
        12 |   u32 probe_timestamp
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct inet_connection_sock
         0 |   struct inet_sock icsk_inet
         0 |     struct sock sk
         0 |       struct sock_common __sk_common
         0 |         union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |           __addrpair skc_addrpair
         0 |           struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |             __be32 skc_daddr
         4 |             __be32 skc_rcv_saddr
         8 |         union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |           unsigned int skc_hash
         8 |           __u16[2] skc_u16hashes
        12 |         union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |           __portpair skc_portpair
        12 |           struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |             __be16 skc_dport
        14 |             __u16 skc_num
        16 |         unsigned short skc_family
        18 |         volatile unsigned char skc_state
    19:0-3 |         unsigned char skc_reuse
    19:4-4 |         unsigned char skc_reuseport
    19:5-5 |         unsigned char skc_ipv6only
    19:6-6 |         unsigned char skc_net_refcnt
        20 |         int skc_bound_dev_if
        24 |         union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |           struct hlist_node skc_bind_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        24 |           struct hlist_node skc_portaddr_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        32 |         struct proto * skc_prot
        36 |         possible_net_t skc_net
        36 |           struct net * net
        40 |         struct in6_addr skc_v6_daddr
        40 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |             __u8[16] u6_addr8
        40 |             __be16[8] u6_addr16
        40 |             __be32[4] u6_addr32
        56 |         struct in6_addr skc_v6_rcv_saddr
        56 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |             __u8[16] u6_addr8
        56 |             __be16[8] u6_addr16
        56 |             __be32[4] u6_addr32
        72 |         atomic64_t skc_cookie
        72 |           s64 counter
        80 |         union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |           unsigned long skc_flags
        80 |           struct sock * skc_listener
        80 |           struct inet_timewait_death_row * skc_tw_dr
        84 |         int[0] skc_dontcopy_begin
        84 |         union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |           struct hlist_node skc_node
        84 |             struct hlist_node * next
        88 |             struct hlist_node ** pprev
        84 |           struct hlist_nulls_node skc_nulls_node
        84 |             struct hlist_nulls_node * next
        88 |             struct hlist_nulls_node ** pprev
        92 |         unsigned short skc_tx_queue_mapping
        94 |         unsigned short skc_rx_queue_mapping
        96 |         union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |           int skc_incoming_cpu
        96 |           u32 skc_rcv_wnd
        96 |           u32 skc_tw_rcv_nxt
       100 |         struct refcount_struct skc_refcnt
       100 |           atomic_t refs
       100 |             int counter
       104 |         int[0] skc_dontcopy_end
       104 |         union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |           u32 skc_rxhash
       104 |           u32 skc_window_clamp
       104 |           u32 skc_tw_snd_nxt
       112 |       __u8[0] __cacheline_group_begin__sock_write_rx
       112 |       atomic_t sk_drops
       112 |         int counter
       116 |       __s32 sk_peek_off
       120 |       struct sk_buff_head sk_error_queue
       120 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |             struct sk_buff * next
       124 |             struct sk_buff * prev
       120 |           struct sk_buff_list list
       120 |             struct sk_buff * next
       124 |             struct sk_buff * prev
       128 |         __u32 qlen
       132 |         struct spinlock lock
       132 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       132 |             struct raw_spinlock rlock
       132 |               arch_spinlock_t raw_lock
       132 |                 volatile unsigned int slock
       136 |               unsigned int magic
       140 |               unsigned int owner_cpu
       144 |               void * owner
       148 |               struct lockdep_map dep_map
       148 |                 struct lock_class_key * key
       152 |                 struct lock_class *[2] class_cache
       160 |                 const char * name
       164 |                 u8 wait_type_outer
       165 |                 u8 wait_type_inner
       166 |                 u8 lock_type
       168 |                 int cpu
       172 |                 unsigned long ip
       132 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       132 |               u8[16] __padding
       148 |               struct lockdep_map dep_map
       148 |                 struct lock_class_key * key
       152 |                 struct lock_class *[2] class_cache
       160 |                 const char * name
       164 |                 u8 wait_type_outer
       165 |                 u8 wait_type_inner
       166 |                 u8 lock_type
       168 |                 int cpu
       172 |                 unsigned long ip
       176 |       struct sk_buff_head sk_receive_queue
       176 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |             struct sk_buff * next
       180 |             struct sk_buff * prev
       176 |           struct sk_buff_list list
       176 |             struct sk_buff * next
       180 |             struct sk_buff * prev
       184 |         __u32 qlen
       188 |         struct spinlock lock
       188 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       188 |             struct raw_spinlock rlock
       188 |               arch_spinlock_t raw_lock
       188 |                 volatile unsigned int slock
       192 |               unsigned int magic
       196 |               unsigned int owner_cpu
       200 |               void * owner
       204 |               struct lockdep_map dep_map
       204 |                 struct lock_class_key * key
       208 |                 struct lock_class *[2] class_cache
       216 |                 const char * name
       220 |                 u8 wait_type_outer
       221 |                 u8 wait_type_inner
       222 |                 u8 lock_type
       224 |                 int cpu
       228 |                 unsigned long ip
       188 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       188 |               u8[16] __padding
       204 |               struct lockdep_map dep_map
       204 |                 struct lock_class_key * key
       208 |                 struct lock_class *[2] class_cache
       216 |                 const char * name
       220 |                 u8 wait_type_outer
       221 |                 u8 wait_type_inner
       222 |                 u8 lock_type
       224 |                 int cpu
       228 |                 unsigned long ip
       232 |       struct sock::(unnamed at ../include/net/sock.h:395:2) sk_backlog
       232 |         atomic_t rmem_alloc
       232 |           int counter
       236 |         int len
       240 |         struct sk_buff * head
       244 |         struct sk_buff * tail
       248 |       __u8[0] __cacheline_group_end__sock_write_rx
       248 |       __u8[0] __cacheline_group_begin__sock_read_rx
       248 |       struct dst_entry * sk_rx_dst
       252 |       int sk_rx_dst_ifindex
       256 |       u32 sk_rx_dst_cookie
       260 |       unsigned int sk_ll_usec
       264 |       unsigned int sk_napi_id
       268 |       u16 sk_busy_poll_budget
       270 |       u8 sk_prefer_busy_poll
       271 |       u8 sk_userlocks
       272 |       int sk_rcvbuf
       276 |       struct sk_filter * sk_filter
       280 |       union sock::(anonymous at ../include/net/sock.h:421:2) 
       280 |         struct socket_wq * sk_wq
       280 |         struct socket_wq * sk_wq_raw
       284 |       void (*)(struct sock *) sk_data_ready
       288 |       long sk_rcvtimeo
       292 |       int sk_rcvlowat
       296 |       __u8[0] __cacheline_group_end__sock_read_rx
       296 |       __u8[0] __cacheline_group_begin__sock_read_rxtx
       296 |       int sk_err
       300 |       struct socket * sk_socket
       304 |       struct mem_cgroup * sk_memcg
       308 |       struct xfrm_policy *[2] sk_policy
       316 |       __u8[0] __cacheline_group_end__sock_read_rxtx
       316 |       __u8[0] __cacheline_group_begin__sock_write_rxtx
       316 |       socket_lock_t sk_lock
       316 |         struct spinlock slock
       316 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       316 |             struct raw_spinlock rlock
       316 |               arch_spinlock_t raw_lock
       316 |                 volatile unsigned int slock
       320 |               unsigned int magic
       324 |               unsigned int owner_cpu
       328 |               void * owner
       332 |               struct lockdep_map dep_map
       332 |                 struct lock_class_key * key
       336 |                 struct lock_class *[2] class_cache
       344 |                 const char * name
       348 |                 u8 wait_type_outer
       349 |                 u8 wait_type_inner
       350 |                 u8 lock_type
       352 |                 int cpu
       356 |                 unsigned long ip
       316 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       316 |               u8[16] __padding
       332 |               struct lockdep_map dep_map
       332 |                 struct lock_class_key * key
       336 |                 struct lock_class *[2] class_cache
       344 |                 const char * name
       348 |                 u8 wait_type_outer
       349 |                 u8 wait_type_inner
       350 |                 u8 lock_type
       352 |                 int cpu
       356 |                 unsigned long ip
       360 |         int owned
       364 |         struct wait_queue_head wq
       364 |           struct spinlock lock
       364 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       364 |               struct raw_spinlock rlock
       364 |                 arch_spinlock_t raw_lock
       364 |                   volatile unsigned int slock
       368 |                 unsigned int magic
       372 |                 unsigned int owner_cpu
       376 |                 void * owner
       380 |                 struct lockdep_map dep_map
       380 |                   struct lock_class_key * key
       384 |                   struct lock_class *[2] class_cache
       392 |                   const char * name
       396 |                   u8 wait_type_outer
       397 |                   u8 wait_type_inner
       398 |                   u8 lock_type
       400 |                   int cpu
       404 |                   unsigned long ip
       364 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       364 |                 u8[16] __padding
       380 |                 struct lockdep_map dep_map
       380 |                   struct lock_class_key * key
       384 |                   struct lock_class *[2] class_cache
       392 |                   const char * name
       396 |                   u8 wait_type_outer
       397 |                   u8 wait_type_inner
       398 |                   u8 lock_type
       400 |                   int cpu
       404 |                   unsigned long ip
       408 |           struct list_head head
       408 |             struct list_head * next
       412 |             struct list_head * prev
       416 |         struct lockdep_map dep_map
       416 |           struct lock_class_key * key
       420 |           struct lock_class *[2] class_cache
       428 |           const char * name
       432 |           u8 wait_type_outer
       433 |           u8 wait_type_inner
       434 |           u8 lock_type
       436 |           int cpu
       440 |           unsigned long ip
       444 |       u32 sk_reserved_mem
       448 |       int sk_forward_alloc
       452 |       u32 sk_tsflags
       456 |       __u8[0] __cacheline_group_end__sock_write_rxtx
       456 |       __u8[0] __cacheline_group_begin__sock_write_tx
       456 |       int sk_write_pending
       460 |       atomic_t sk_omem_alloc
       460 |         int counter
       464 |       int sk_sndbuf
       468 |       int sk_wmem_queued
       472 |       struct refcount_struct sk_wmem_alloc
       472 |         atomic_t refs
       472 |           int counter
       476 |       unsigned long sk_tsq_flags
       480 |       union sock::(anonymous at ../include/net/sock.h:457:2) 
       480 |         struct sk_buff * sk_send_head
       480 |         struct rb_root tcp_rtx_queue
       480 |           struct rb_node * rb_node
       484 |       struct sk_buff_head sk_write_queue
       484 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |             struct sk_buff * next
       488 |             struct sk_buff * prev
       484 |           struct sk_buff_list list
       484 |             struct sk_buff * next
       488 |             struct sk_buff * prev
       492 |         __u32 qlen
       496 |         struct spinlock lock
       496 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       496 |             struct raw_spinlock rlock
       496 |               arch_spinlock_t raw_lock
       496 |                 volatile unsigned int slock
       500 |               unsigned int magic
       504 |               unsigned int owner_cpu
       508 |               void * owner
       512 |               struct lockdep_map dep_map
       512 |                 struct lock_class_key * key
       516 |                 struct lock_class *[2] class_cache
       524 |                 const char * name
       528 |                 u8 wait_type_outer
       529 |                 u8 wait_type_inner
       530 |                 u8 lock_type
       532 |                 int cpu
       536 |                 unsigned long ip
       496 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       496 |               u8[16] __padding
       512 |               struct lockdep_map dep_map
       512 |                 struct lock_class_key * key
       516 |                 struct lock_class *[2] class_cache
       524 |                 const char * name
       528 |                 u8 wait_type_outer
       529 |                 u8 wait_type_inner
       530 |                 u8 lock_type
       532 |                 int cpu
       536 |                 unsigned long ip
       540 |       u32 sk_dst_pending_confirm
       544 |       u32 sk_pacing_status
       548 |       struct page_frag sk_frag
       548 |         struct page * page
       552 |         __u16 offset
       554 |         __u16 size
       556 |       struct timer_list sk_timer
       556 |         struct hlist_node entry
       556 |           struct hlist_node * next
       560 |           struct hlist_node ** pprev
       564 |         unsigned long expires
       568 |         void (*)(struct timer_list *) function
       572 |         u32 flags
       576 |         struct lockdep_map lockdep_map
       576 |           struct lock_class_key * key
       580 |           struct lock_class *[2] class_cache
       588 |           const char * name
       592 |           u8 wait_type_outer
       593 |           u8 wait_type_inner
       594 |           u8 lock_type
       596 |           int cpu
       600 |           unsigned long ip
       604 |       unsigned long sk_pacing_rate
       608 |       atomic_t sk_zckey
       608 |         int counter
       612 |       atomic_t sk_tskey
       612 |         int counter
       616 |       __u8[0] __cacheline_group_end__sock_write_tx
       616 |       __u8[0] __cacheline_group_begin__sock_read_tx
       616 |       unsigned long sk_max_pacing_rate
       620 |       long sk_sndtimeo
       624 |       u32 sk_priority
       628 |       u32 sk_mark
       632 |       struct dst_entry * sk_dst_cache
       640 |       netdev_features_t sk_route_caps
       648 |       struct sk_buff *(*)(struct sock *, struct net_device *, struct sk_buff *) sk_validate_xmit_skb
       652 |       u16 sk_gso_type
       654 |       u16 sk_gso_max_segs
       656 |       unsigned int sk_gso_max_size
       660 |       gfp_t sk_allocation
       664 |       u32 sk_txhash
       668 |       u8 sk_pacing_shift
       669 |       bool sk_use_task_frag
       670 |       __u8[0] __cacheline_group_end__sock_read_tx
   670:0-0 |       u8 sk_gso_disabled
   670:1-1 |       u8 sk_kern_sock
   670:2-2 |       u8 sk_no_check_tx
   670:3-3 |       u8 sk_no_check_rx
       671 |       u8 sk_shutdown
       672 |       u16 sk_type
       674 |       u16 sk_protocol
       676 |       unsigned long sk_lingertime
       680 |       struct proto * sk_prot_creator
       684 |       rwlock_t sk_callback_lock
       684 |         arch_rwlock_t raw_lock
       684 |         unsigned int magic
       688 |         unsigned int owner_cpu
       692 |         void * owner
       696 |         struct lockdep_map dep_map
       696 |           struct lock_class_key * key
       700 |           struct lock_class *[2] class_cache
       708 |           const char * name
       712 |           u8 wait_type_outer
       713 |           u8 wait_type_inner
       714 |           u8 lock_type
       716 |           int cpu
       720 |           unsigned long ip
       724 |       int sk_err_soft
       728 |       u32 sk_ack_backlog
       732 |       u32 sk_max_ack_backlog
       736 |       kuid_t sk_uid
       736 |         uid_t val
       740 |       struct spinlock sk_peer_lock
       740 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       740 |           struct raw_spinlock rlock
       740 |             arch_spinlock_t raw_lock
       740 |               volatile unsigned int slock
       744 |             unsigned int magic
       748 |             unsigned int owner_cpu
       752 |             void * owner
       756 |             struct lockdep_map dep_map
       756 |               struct lock_class_key * key
       760 |               struct lock_class *[2] class_cache
       768 |               const char * name
       772 |               u8 wait_type_outer
       773 |               u8 wait_type_inner
       774 |               u8 lock_type
       776 |               int cpu
       780 |               unsigned long ip
       740 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       740 |             u8[16] __padding
       756 |             struct lockdep_map dep_map
       756 |               struct lock_class_key * key
       760 |               struct lock_class *[2] class_cache
       768 |               const char * name
       772 |               u8 wait_type_outer
       773 |               u8 wait_type_inner
       774 |               u8 lock_type
       776 |               int cpu
       780 |               unsigned long ip
       784 |       int sk_bind_phc
       788 |       struct pid * sk_peer_pid
       792 |       const struct cred * sk_peer_cred
       800 |       ktime_t sk_stamp
       808 |       seqlock_t sk_stamp_seq
       808 |         struct seqcount_spinlock seqcount
       808 |           struct seqcount seqcount
       808 |             unsigned int sequence
       812 |             struct lockdep_map dep_map
       812 |               struct lock_class_key * key
       816 |               struct lock_class *[2] class_cache
       824 |               const char * name
       828 |               u8 wait_type_outer
       829 |               u8 wait_type_inner
       830 |               u8 lock_type
       832 |               int cpu
       836 |               unsigned long ip
       840 |           spinlock_t * lock
       844 |         struct spinlock lock
       844 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |             struct raw_spinlock rlock
       844 |               arch_spinlock_t raw_lock
       844 |                 volatile unsigned int slock
       848 |               unsigned int magic
       852 |               unsigned int owner_cpu
       856 |               void * owner
       860 |               struct lockdep_map dep_map
       860 |                 struct lock_class_key * key
       864 |                 struct lock_class *[2] class_cache
       872 |                 const char * name
       876 |                 u8 wait_type_outer
       877 |                 u8 wait_type_inner
       878 |                 u8 lock_type
       880 |                 int cpu
       884 |                 unsigned long ip
       844 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |               u8[16] __padding
       860 |               struct lockdep_map dep_map
       860 |                 struct lock_class_key * key
       864 |                 struct lock_class *[2] class_cache
       872 |                 const char * name
       876 |                 u8 wait_type_outer
       877 |                 u8 wait_type_inner
       878 |                 u8 lock_type
       880 |                 int cpu
       884 |                 unsigned long ip
       888 |       int sk_disconnects
       892 |       u8 sk_txrehash
       893 |       u8 sk_clockid
   894:0-0 |       u8 sk_txtime_deadline_mode
   894:1-1 |       u8 sk_txtime_report_errors
   894:2-7 |       u8 sk_txtime_unused
       896 |       void * sk_user_data
       900 |       struct sock_cgroup_data sk_cgrp_data
       900 |       void (*)(struct sock *) sk_state_change
       904 |       void (*)(struct sock *) sk_write_space
       908 |       void (*)(struct sock *) sk_error_report
       912 |       int (*)(struct sock *, struct sk_buff *) sk_backlog_rcv
       916 |       void (*)(struct sock *) sk_destruct
       920 |       struct sock_reuseport * sk_reuseport_cb
       924 |       struct bpf_local_storage * sk_bpf_storage
       928 |       struct callback_head sk_rcu
       928 |         struct callback_head * next
       932 |         void (*)(struct callback_head *) func
       936 |       netns_tracker ns_tracker
       944 |     struct ipv6_pinfo * pinet6
       948 |     unsigned long inet_flags
       952 |     __be32 inet_saddr
       956 |     __s16 uc_ttl
       958 |     __be16 inet_sport
       960 |     struct ip_options_rcu * inet_opt
       964 |     atomic_t inet_id
       964 |       int counter
       968 |     __u8 tos
       969 |     __u8 min_ttl
       970 |     __u8 mc_ttl
       971 |     __u8 pmtudisc
       972 |     __u8 rcv_tos
       973 |     __u8 convert_csum
       976 |     int uc_index
       980 |     int mc_index
       984 |     __be32 mc_addr
       988 |     u32 local_port_range
       992 |     struct ip_mc_socklist * mc_list
      1000 |     struct inet_cork_full cork
      1000 |       struct inet_cork base
      1000 |         unsigned int flags
      1004 |         __be32 addr
      1008 |         struct ip_options * opt
      1012 |         unsigned int fragsize
      1016 |         int length
      1020 |         struct dst_entry * dst
      1024 |         u8 tx_flags
      1025 |         __u8 ttl
      1026 |         __s16 tos
      1028 |         char priority
      1030 |         __u16 gso_size
      1032 |         u64 transmit_time
      1040 |         u32 mark
      1048 |       struct flowi fl
      1048 |         union flowi::(unnamed at ../include/net/flow.h:155:2) u
      1048 |           struct flowi_common __fl_common
      1048 |             int flowic_oif
      1052 |             int flowic_iif
      1056 |             int flowic_l3mdev
      1060 |             __u32 flowic_mark
      1064 |             __u8 flowic_tos
      1065 |             __u8 flowic_scope
      1066 |             __u8 flowic_proto
      1067 |             __u8 flowic_flags
      1068 |             __u32 flowic_secid
      1072 |             kuid_t flowic_uid
      1072 |               uid_t val
      1076 |             __u32 flowic_multipath_hash
      1080 |             struct flowi_tunnel flowic_tun_key
      1080 |               __be64 tun_id
      1048 |           struct flowi4 ip4
      1048 |             struct flowi_common __fl_common
      1048 |               int flowic_oif
      1052 |               int flowic_iif
      1056 |               int flowic_l3mdev
      1060 |               __u32 flowic_mark
      1064 |               __u8 flowic_tos
      1065 |               __u8 flowic_scope
      1066 |               __u8 flowic_proto
      1067 |               __u8 flowic_flags
      1068 |               __u32 flowic_secid
      1072 |               kuid_t flowic_uid
      1072 |                 uid_t val
      1076 |               __u32 flowic_multipath_hash
      1080 |               struct flowi_tunnel flowic_tun_key
      1080 |                 __be64 tun_id
      1088 |             __be32 saddr
      1092 |             __be32 daddr
      1096 |             union flowi_uli uli
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1096 |                 __be16 dport
      1098 |                 __be16 sport
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1096 |                 __u8 type
      1097 |                 __u8 code
      1096 |               __be32 gre_key
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1096 |                 __u8 type
      1048 |           struct flowi6 ip6
      1048 |             struct flowi_common __fl_common
      1048 |               int flowic_oif
      1052 |               int flowic_iif
      1056 |               int flowic_l3mdev
      1060 |               __u32 flowic_mark
      1064 |               __u8 flowic_tos
      1065 |               __u8 flowic_scope
      1066 |               __u8 flowic_proto
      1067 |               __u8 flowic_flags
      1068 |               __u32 flowic_secid
      1072 |               kuid_t flowic_uid
      1072 |                 uid_t val
      1076 |               __u32 flowic_multipath_hash
      1080 |               struct flowi_tunnel flowic_tun_key
      1080 |                 __be64 tun_id
      1088 |             struct in6_addr daddr
      1088 |               union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1088 |                 __u8[16] u6_addr8
      1088 |                 __be16[8] u6_addr16
      1088 |                 __be32[4] u6_addr32
      1104 |             struct in6_addr saddr
      1104 |               union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1104 |                 __u8[16] u6_addr8
      1104 |                 __be16[8] u6_addr16
      1104 |                 __be32[4] u6_addr32
      1120 |             __be32 flowlabel
      1124 |             union flowi_uli uli
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1124 |                 __be16 dport
      1126 |                 __be16 sport
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1124 |                 __u8 type
      1125 |                 __u8 code
      1124 |               __be32 gre_key
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1124 |                 __u8 type
      1128 |             __u32 mp_hash
      1136 |   struct request_sock_queue icsk_accept_queue
      1136 |     struct spinlock rskq_lock
      1136 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1136 |         struct raw_spinlock rlock
      1136 |           arch_spinlock_t raw_lock
      1136 |             volatile unsigned int slock
      1140 |           unsigned int magic
      1144 |           unsigned int owner_cpu
      1148 |           void * owner
      1152 |           struct lockdep_map dep_map
      1152 |             struct lock_class_key * key
      1156 |             struct lock_class *[2] class_cache
      1164 |             const char * name
      1168 |             u8 wait_type_outer
      1169 |             u8 wait_type_inner
      1170 |             u8 lock_type
      1172 |             int cpu
      1176 |             unsigned long ip
      1136 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1136 |           u8[16] __padding
      1152 |           struct lockdep_map dep_map
      1152 |             struct lock_class_key * key
      1156 |             struct lock_class *[2] class_cache
      1164 |             const char * name
      1168 |             u8 wait_type_outer
      1169 |             u8 wait_type_inner
      1170 |             u8 lock_type
      1172 |             int cpu
      1176 |             unsigned long ip
      1180 |     u8 rskq_defer_accept
      1184 |     u32 synflood_warned
      1188 |     atomic_t qlen
      1188 |       int counter
      1192 |     atomic_t young
      1192 |       int counter
      1196 |     struct request_sock * rskq_accept_head
      1200 |     struct request_sock * rskq_accept_tail
      1204 |     struct fastopen_queue fastopenq
      1204 |       struct request_sock * rskq_rst_head
      1208 |       struct request_sock * rskq_rst_tail
      1212 |       struct spinlock lock
      1212 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1212 |           struct raw_spinlock rlock
      1212 |             arch_spinlock_t raw_lock
      1212 |               volatile unsigned int slock
      1216 |             unsigned int magic
      1220 |             unsigned int owner_cpu
      1224 |             void * owner
      1228 |             struct lockdep_map dep_map
      1228 |               struct lock_class_key * key
      1232 |               struct lock_class *[2] class_cache
      1240 |               const char * name
      1244 |               u8 wait_type_outer
      1245 |               u8 wait_type_inner
      1246 |               u8 lock_type
      1248 |               int cpu
      1252 |               unsigned long ip
      1212 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1212 |             u8[16] __padding
      1228 |             struct lockdep_map dep_map
      1228 |               struct lock_class_key * key
      1232 |               struct lock_class *[2] class_cache
      1240 |               const char * name
      1244 |               u8 wait_type_outer
      1245 |               u8 wait_type_inner
      1246 |               u8 lock_type
      1248 |               int cpu
      1252 |               unsigned long ip
      1256 |       int qlen
      1260 |       int max_qlen
      1264 |       struct tcp_fastopen_context * ctx
      1268 |   struct inet_bind_bucket * icsk_bind_hash
      1272 |   struct inet_bind2_bucket * icsk_bind2_hash
      1276 |   unsigned long icsk_timeout
      1280 |   struct timer_list icsk_retransmit_timer
      1280 |     struct hlist_node entry
      1280 |       struct hlist_node * next
      1284 |       struct hlist_node ** pprev
      1288 |     unsigned long expires
      1292 |     void (*)(struct timer_list *) function
      1296 |     u32 flags
      1300 |     struct lockdep_map lockdep_map
      1300 |       struct lock_class_key * key
      1304 |       struct lock_class *[2] class_cache
      1312 |       const char * name
      1316 |       u8 wait_type_outer
      1317 |       u8 wait_type_inner
      1318 |       u8 lock_type
      1320 |       int cpu
      1324 |       unsigned long ip
      1328 |   struct timer_list icsk_delack_timer
      1328 |     struct hlist_node entry
      1328 |       struct hlist_node * next
      1332 |       struct hlist_node ** pprev
      1336 |     unsigned long expires
      1340 |     void (*)(struct timer_list *) function
      1344 |     u32 flags
      1348 |     struct lockdep_map lockdep_map
      1348 |       struct lock_class_key * key
      1352 |       struct lock_class *[2] class_cache
      1360 |       const char * name
      1364 |       u8 wait_type_outer
      1365 |       u8 wait_type_inner
      1366 |       u8 lock_type
      1368 |       int cpu
      1372 |       unsigned long ip
      1376 |   __u32 icsk_rto
      1380 |   __u32 icsk_rto_min
      1384 |   __u32 icsk_delack_max
      1388 |   __u32 icsk_pmtu_cookie
      1392 |   const struct tcp_congestion_ops * icsk_ca_ops
      1396 |   const struct inet_connection_sock_af_ops * icsk_af_ops
      1400 |   const struct tcp_ulp_ops * icsk_ulp_ops
      1404 |   void * icsk_ulp_data
      1408 |   void (*)(struct sock *, u32) icsk_clean_acked
      1412 |   unsigned int (*)(struct sock *, u32) icsk_sync_mss
  1416:0-4 |   __u8 icsk_ca_state
  1416:5-5 |   __u8 icsk_ca_initialized
  1416:6-6 |   __u8 icsk_ca_setsockopt
  1416:7-7 |   __u8 icsk_ca_dst_locked
      1417 |   __u8 icsk_retransmits
      1418 |   __u8 icsk_pending
      1419 |   __u8 icsk_backoff
      1420 |   __u8 icsk_syn_retries
      1421 |   __u8 icsk_probes_out
      1422 |   __u16 icsk_ext_hdr_len
      1424 |   struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:111:2) icsk_ack
      1424 |     __u8 pending
      1425 |     __u8 quick
      1426 |     __u8 pingpong
      1427 |     __u8 retry
  1428:0-7 |     __u32 ato
 1429:0-19 |     __u32 lrcv_flowlabel
  1431:4-7 |     __u32 unused
      1432 |     unsigned long timeout
      1436 |     __u32 lrcvtime
      1440 |     __u16 last_seg_size
      1442 |     __u16 rcv_mss
      1444 |   struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:125:2) icsk_mtup
      1444 |     int search_high
      1448 |     int search_low
 1452:0-30 |     u32 probe_size
  1455:7-7 |     u32 enabled
      1456 |     u32 probe_timestamp
      1460 |   u32 icsk_probes_tstamp
      1464 |   u32 icsk_user_timeout
      1472 |   u64[13] icsk_ca_priv
           | [sizeof=1576, align=8]

*** Dumping AST Record Layout
         0 | struct inet_timewait_sock
         0 |   struct sock_common __tw_common
         0 |     union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |       __addrpair skc_addrpair
         0 |       struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |         __be32 skc_daddr
         4 |         __be32 skc_rcv_saddr
         8 |     union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |       unsigned int skc_hash
         8 |       __u16[2] skc_u16hashes
        12 |     union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |       __portpair skc_portpair
        12 |       struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |         __be16 skc_dport
        14 |         __u16 skc_num
        16 |     unsigned short skc_family
        18 |     volatile unsigned char skc_state
    19:0-3 |     unsigned char skc_reuse
    19:4-4 |     unsigned char skc_reuseport
    19:5-5 |     unsigned char skc_ipv6only
    19:6-6 |     unsigned char skc_net_refcnt
        20 |     int skc_bound_dev_if
        24 |     union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |       struct hlist_node skc_bind_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        24 |       struct hlist_node skc_portaddr_node
        24 |         struct hlist_node * next
        28 |         struct hlist_node ** pprev
        32 |     struct proto * skc_prot
        36 |     possible_net_t skc_net
        36 |       struct net * net
        40 |     struct in6_addr skc_v6_daddr
        40 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |         __u8[16] u6_addr8
        40 |         __be16[8] u6_addr16
        40 |         __be32[4] u6_addr32
        56 |     struct in6_addr skc_v6_rcv_saddr
        56 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |         __u8[16] u6_addr8
        56 |         __be16[8] u6_addr16
        56 |         __be32[4] u6_addr32
        72 |     atomic64_t skc_cookie
        72 |       s64 counter
        80 |     union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |       unsigned long skc_flags
        80 |       struct sock * skc_listener
        80 |       struct inet_timewait_death_row * skc_tw_dr
        84 |     int[0] skc_dontcopy_begin
        84 |     union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |       struct hlist_node skc_node
        84 |         struct hlist_node * next
        88 |         struct hlist_node ** pprev
        84 |       struct hlist_nulls_node skc_nulls_node
        84 |         struct hlist_nulls_node * next
        88 |         struct hlist_nulls_node ** pprev
        92 |     unsigned short skc_tx_queue_mapping
        94 |     unsigned short skc_rx_queue_mapping
        96 |     union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |       int skc_incoming_cpu
        96 |       u32 skc_rcv_wnd
        96 |       u32 skc_tw_rcv_nxt
       100 |     struct refcount_struct skc_refcnt
       100 |       atomic_t refs
       100 |         int counter
       104 |     int[0] skc_dontcopy_end
       104 |     union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |       u32 skc_rxhash
       104 |       u32 skc_window_clamp
       104 |       u32 skc_tw_snd_nxt
       112 |   __u32 tw_mark
       116 |   volatile unsigned char tw_substate
       117 |   unsigned char tw_rcv_wscale
       118 |   __be16 tw_sport
   120:0-0 |   unsigned int tw_transparent
  120:1-20 |   unsigned int tw_flowlabel
   122:5-5 |   unsigned int tw_usec_ts
   122:6-7 |   unsigned int tw_pad
   123:0-7 |   unsigned int tw_tos
       124 |   u32 tw_txhash
       128 |   u32 tw_priority
       132 |   struct timer_list tw_timer
       132 |     struct hlist_node entry
       132 |       struct hlist_node * next
       136 |       struct hlist_node ** pprev
       140 |     unsigned long expires
       144 |     void (*)(struct timer_list *) function
       148 |     u32 flags
       152 |     struct lockdep_map lockdep_map
       152 |       struct lock_class_key * key
       156 |       struct lock_class *[2] class_cache
       164 |       const char * name
       168 |       u8 wait_type_outer
       169 |       u8 wait_type_inner
       170 |       u8 lock_type
       172 |       int cpu
       176 |       unsigned long ip
       180 |   struct inet_bind_bucket * tw_tb
       184 |   struct inet_bind2_bucket * tw_tb2
           | [sizeof=192, align=8]

*** Dumping AST Record Layout
         0 | struct tcphdr
         0 |   __be16 source
         2 |   __be16 dest
         4 |   __be32 seq
         8 |   __be32 ack_seq
    12:0-3 |   __u16 res1
    12:4-7 |   __u16 doff
    13:0-0 |   __u16 fin
    13:1-1 |   __u16 syn
    13:2-2 |   __u16 rst
    13:3-3 |   __u16 psh
    13:4-4 |   __u16 ack
    13:5-5 |   __u16 urg
    13:6-6 |   __u16 ece
    13:7-7 |   __u16 cwr
        14 |   __be16 window
        16 |   __sum16 check
        18 |   __be16 urg_ptr
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct inet_request_sock::(anonymous at ../include/net/inet_sock.h:95:3)
         0 |   struct ipv6_txoptions * ipv6_opt
         4 |   struct sk_buff * pktopts
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union inet_request_sock::(anonymous at ../include/net/inet_sock.h:92:2)
         0 |   struct ip_options_rcu * ireq_opt
         0 |   struct inet_request_sock::(anonymous at ../include/net/inet_sock.h:95:3) 
         0 |     struct ipv6_txoptions * ipv6_opt
         4 |     struct sk_buff * pktopts
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct inet_request_sock
         0 |   struct request_sock req
         0 |     struct sock_common __req_common
         0 |       union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |         __addrpair skc_addrpair
         0 |         struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |           __be32 skc_daddr
         4 |           __be32 skc_rcv_saddr
         8 |       union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |         unsigned int skc_hash
         8 |         __u16[2] skc_u16hashes
        12 |       union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |         __portpair skc_portpair
        12 |         struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |           __be16 skc_dport
        14 |           __u16 skc_num
        16 |       unsigned short skc_family
        18 |       volatile unsigned char skc_state
    19:0-3 |       unsigned char skc_reuse
    19:4-4 |       unsigned char skc_reuseport
    19:5-5 |       unsigned char skc_ipv6only
    19:6-6 |       unsigned char skc_net_refcnt
        20 |       int skc_bound_dev_if
        24 |       union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |         struct hlist_node skc_bind_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        24 |         struct hlist_node skc_portaddr_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        32 |       struct proto * skc_prot
        36 |       possible_net_t skc_net
        36 |         struct net * net
        40 |       struct in6_addr skc_v6_daddr
        40 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |           __u8[16] u6_addr8
        40 |           __be16[8] u6_addr16
        40 |           __be32[4] u6_addr32
        56 |       struct in6_addr skc_v6_rcv_saddr
        56 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |           __u8[16] u6_addr8
        56 |           __be16[8] u6_addr16
        56 |           __be32[4] u6_addr32
        72 |       atomic64_t skc_cookie
        72 |         s64 counter
        80 |       union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |         unsigned long skc_flags
        80 |         struct sock * skc_listener
        80 |         struct inet_timewait_death_row * skc_tw_dr
        84 |       int[0] skc_dontcopy_begin
        84 |       union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |         struct hlist_node skc_node
        84 |           struct hlist_node * next
        88 |           struct hlist_node ** pprev
        84 |         struct hlist_nulls_node skc_nulls_node
        84 |           struct hlist_nulls_node * next
        88 |           struct hlist_nulls_node ** pprev
        92 |       unsigned short skc_tx_queue_mapping
        94 |       unsigned short skc_rx_queue_mapping
        96 |       union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |         int skc_incoming_cpu
        96 |         u32 skc_rcv_wnd
        96 |         u32 skc_tw_rcv_nxt
       100 |       struct refcount_struct skc_refcnt
       100 |         atomic_t refs
       100 |           int counter
       104 |       int[0] skc_dontcopy_end
       104 |       union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |         u32 skc_rxhash
       104 |         u32 skc_window_clamp
       104 |         u32 skc_tw_snd_nxt
       112 |     struct request_sock * dl_next
       116 |     u16 mss
       118 |     u8 num_retrans
   119:0-0 |     u8 syncookie
   119:1-7 |     u8 num_timeout
       120 |     u32 ts_recent
       124 |     struct timer_list rsk_timer
       124 |       struct hlist_node entry
       124 |         struct hlist_node * next
       128 |         struct hlist_node ** pprev
       132 |       unsigned long expires
       136 |       void (*)(struct timer_list *) function
       140 |       u32 flags
       144 |       struct lockdep_map lockdep_map
       144 |         struct lock_class_key * key
       148 |         struct lock_class *[2] class_cache
       156 |         const char * name
       160 |         u8 wait_type_outer
       161 |         u8 wait_type_inner
       162 |         u8 lock_type
       164 |         int cpu
       168 |         unsigned long ip
       172 |     const struct request_sock_ops * rsk_ops
       176 |     struct sock * sk
       180 |     struct saved_syn * saved_syn
       184 |     u32 secid
       188 |     u32 peer_secid
       192 |     u32 timeout
   200:0-3 |   u16 snd_wscale
   200:4-7 |   u16 rcv_wscale
   201:0-0 |   u16 tstamp_ok
   201:1-1 |   u16 sack_ok
   201:2-2 |   u16 wscale_ok
   201:3-3 |   u16 ecn_ok
   201:4-4 |   u16 acked
   201:5-5 |   u16 no_srccheck
   201:6-6 |   u16 smc_ok
       204 |   u32 ir_mark
       208 |   union inet_request_sock::(anonymous at ../include/net/inet_sock.h:92:2) 
       208 |     struct ip_options_rcu * ireq_opt
       208 |     struct inet_request_sock::(anonymous at ../include/net/inet_sock.h:95:3) 
       208 |       struct ipv6_txoptions * ipv6_opt
       212 |       struct sk_buff * pktopts
           | [sizeof=216, align=8]

*** Dumping AST Record Layout
         0 | struct tcp_sack_block
         0 |   u32 start_seq
         4 |   u32 end_seq
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct minmax
         0 |   struct minmax_sample[3] s
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct tcp_options_received
         0 |   int ts_recent_stamp
         4 |   u32 ts_recent
         8 |   u32 rcv_tsval
        12 |   u32 rcv_tsecr
    16:0-0 |   u16 saw_tstamp
    16:1-1 |   u16 tstamp_ok
    16:2-2 |   u16 dsack
    16:3-3 |   u16 wscale_ok
    16:4-6 |   u16 sack_ok
    16:7-7 |   u16 smc_ok
    17:0-3 |   u16 snd_wscale
    17:4-7 |   u16 rcv_wscale
    18:0-0 |   u8 saw_unknown
    18:1-7 |   u8 unused
        19 |   u8 num_sacks
        20 |   u16 user_mss
        22 |   u16 mss_clamp
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct tcp_sock::(unnamed at ../include/linux/tcp.h:332:2)
         0 |   u32 rtt_us
         4 |   u32 seq
         8 |   u64 time
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct tcp_sock::(unnamed at ../include/linux/tcp.h:338:2)
         0 |   u32 space
         4 |   u32 seq
         8 |   u64 time
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct tcp_rack
         0 |   u64 mstamp
         8 |   u32 rtt_us
        12 |   u32 end_seq
        16 |   u32 last_delivered
        20 |   u8 reo_wnd_steps
    21:0-4 |   u8 reo_wnd_persist
    21:5-5 |   u8 dsack_seen
    21:6-6 |   u8 advanced
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct tcp_sock::(unnamed at ../include/linux/tcp.h:468:2)
         0 |   u32 probe_seq_start
         4 |   u32 probe_seq_end
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct tcp_sock
         0 |   struct inet_connection_sock inet_conn
         0 |     struct inet_sock icsk_inet
         0 |       struct sock sk
         0 |         struct sock_common __sk_common
         0 |           union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |             __addrpair skc_addrpair
         0 |             struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |               __be32 skc_daddr
         4 |               __be32 skc_rcv_saddr
         8 |           union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |             unsigned int skc_hash
         8 |             __u16[2] skc_u16hashes
        12 |           union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |             __portpair skc_portpair
        12 |             struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |               __be16 skc_dport
        14 |               __u16 skc_num
        16 |           unsigned short skc_family
        18 |           volatile unsigned char skc_state
    19:0-3 |           unsigned char skc_reuse
    19:4-4 |           unsigned char skc_reuseport
    19:5-5 |           unsigned char skc_ipv6only
    19:6-6 |           unsigned char skc_net_refcnt
        20 |           int skc_bound_dev_if
        24 |           union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |             struct hlist_node skc_bind_node
        24 |               struct hlist_node * next
        28 |               struct hlist_node ** pprev
        24 |             struct hlist_node skc_portaddr_node
        24 |               struct hlist_node * next
        28 |               struct hlist_node ** pprev
        32 |           struct proto * skc_prot
        36 |           possible_net_t skc_net
        36 |             struct net * net
        40 |           struct in6_addr skc_v6_daddr
        40 |             union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |               __u8[16] u6_addr8
        40 |               __be16[8] u6_addr16
        40 |               __be32[4] u6_addr32
        56 |           struct in6_addr skc_v6_rcv_saddr
        56 |             union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |               __u8[16] u6_addr8
        56 |               __be16[8] u6_addr16
        56 |               __be32[4] u6_addr32
        72 |           atomic64_t skc_cookie
        72 |             s64 counter
        80 |           union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |             unsigned long skc_flags
        80 |             struct sock * skc_listener
        80 |             struct inet_timewait_death_row * skc_tw_dr
        84 |           int[0] skc_dontcopy_begin
        84 |           union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |             struct hlist_node skc_node
        84 |               struct hlist_node * next
        88 |               struct hlist_node ** pprev
        84 |             struct hlist_nulls_node skc_nulls_node
        84 |               struct hlist_nulls_node * next
        88 |               struct hlist_nulls_node ** pprev
        92 |           unsigned short skc_tx_queue_mapping
        94 |           unsigned short skc_rx_queue_mapping
        96 |           union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |             int skc_incoming_cpu
        96 |             u32 skc_rcv_wnd
        96 |             u32 skc_tw_rcv_nxt
       100 |           struct refcount_struct skc_refcnt
       100 |             atomic_t refs
       100 |               int counter
       104 |           int[0] skc_dontcopy_end
       104 |           union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |             u32 skc_rxhash
       104 |             u32 skc_window_clamp
       104 |             u32 skc_tw_snd_nxt
       112 |         __u8[0] __cacheline_group_begin__sock_write_rx
       112 |         atomic_t sk_drops
       112 |           int counter
       116 |         __s32 sk_peek_off
       120 |         struct sk_buff_head sk_error_queue
       120 |           union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |             struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |               struct sk_buff * next
       124 |               struct sk_buff * prev
       120 |             struct sk_buff_list list
       120 |               struct sk_buff * next
       124 |               struct sk_buff * prev
       128 |           __u32 qlen
       132 |           struct spinlock lock
       132 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       132 |               struct raw_spinlock rlock
       132 |                 arch_spinlock_t raw_lock
       132 |                   volatile unsigned int slock
       136 |                 unsigned int magic
       140 |                 unsigned int owner_cpu
       144 |                 void * owner
       148 |                 struct lockdep_map dep_map
       148 |                   struct lock_class_key * key
       152 |                   struct lock_class *[2] class_cache
       160 |                   const char * name
       164 |                   u8 wait_type_outer
       165 |                   u8 wait_type_inner
       166 |                   u8 lock_type
       168 |                   int cpu
       172 |                   unsigned long ip
       132 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       132 |                 u8[16] __padding
       148 |                 struct lockdep_map dep_map
       148 |                   struct lock_class_key * key
       152 |                   struct lock_class *[2] class_cache
       160 |                   const char * name
       164 |                   u8 wait_type_outer
       165 |                   u8 wait_type_inner
       166 |                   u8 lock_type
       168 |                   int cpu
       172 |                   unsigned long ip
       176 |         struct sk_buff_head sk_receive_queue
       176 |           union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |             struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |               struct sk_buff * next
       180 |               struct sk_buff * prev
       176 |             struct sk_buff_list list
       176 |               struct sk_buff * next
       180 |               struct sk_buff * prev
       184 |           __u32 qlen
       188 |           struct spinlock lock
       188 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       188 |               struct raw_spinlock rlock
       188 |                 arch_spinlock_t raw_lock
       188 |                   volatile unsigned int slock
       192 |                 unsigned int magic
       196 |                 unsigned int owner_cpu
       200 |                 void * owner
       204 |                 struct lockdep_map dep_map
       204 |                   struct lock_class_key * key
       208 |                   struct lock_class *[2] class_cache
       216 |                   const char * name
       220 |                   u8 wait_type_outer
       221 |                   u8 wait_type_inner
       222 |                   u8 lock_type
       224 |                   int cpu
       228 |                   unsigned long ip
       188 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       188 |                 u8[16] __padding
       204 |                 struct lockdep_map dep_map
       204 |                   struct lock_class_key * key
       208 |                   struct lock_class *[2] class_cache
       216 |                   const char * name
       220 |                   u8 wait_type_outer
       221 |                   u8 wait_type_inner
       222 |                   u8 lock_type
       224 |                   int cpu
       228 |                   unsigned long ip
       232 |         struct sock::(unnamed at ../include/net/sock.h:395:2) sk_backlog
       232 |           atomic_t rmem_alloc
       232 |             int counter
       236 |           int len
       240 |           struct sk_buff * head
       244 |           struct sk_buff * tail
       248 |         __u8[0] __cacheline_group_end__sock_write_rx
       248 |         __u8[0] __cacheline_group_begin__sock_read_rx
       248 |         struct dst_entry * sk_rx_dst
       252 |         int sk_rx_dst_ifindex
       256 |         u32 sk_rx_dst_cookie
       260 |         unsigned int sk_ll_usec
       264 |         unsigned int sk_napi_id
       268 |         u16 sk_busy_poll_budget
       270 |         u8 sk_prefer_busy_poll
       271 |         u8 sk_userlocks
       272 |         int sk_rcvbuf
       276 |         struct sk_filter * sk_filter
       280 |         union sock::(anonymous at ../include/net/sock.h:421:2) 
       280 |           struct socket_wq * sk_wq
       280 |           struct socket_wq * sk_wq_raw
       284 |         void (*)(struct sock *) sk_data_ready
       288 |         long sk_rcvtimeo
       292 |         int sk_rcvlowat
       296 |         __u8[0] __cacheline_group_end__sock_read_rx
       296 |         __u8[0] __cacheline_group_begin__sock_read_rxtx
       296 |         int sk_err
       300 |         struct socket * sk_socket
       304 |         struct mem_cgroup * sk_memcg
       308 |         struct xfrm_policy *[2] sk_policy
       316 |         __u8[0] __cacheline_group_end__sock_read_rxtx
       316 |         __u8[0] __cacheline_group_begin__sock_write_rxtx
       316 |         socket_lock_t sk_lock
       316 |           struct spinlock slock
       316 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       316 |               struct raw_spinlock rlock
       316 |                 arch_spinlock_t raw_lock
       316 |                   volatile unsigned int slock
       320 |                 unsigned int magic
       324 |                 unsigned int owner_cpu
       328 |                 void * owner
       332 |                 struct lockdep_map dep_map
       332 |                   struct lock_class_key * key
       336 |                   struct lock_class *[2] class_cache
       344 |                   const char * name
       348 |                   u8 wait_type_outer
       349 |                   u8 wait_type_inner
       350 |                   u8 lock_type
       352 |                   int cpu
       356 |                   unsigned long ip
       316 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       316 |                 u8[16] __padding
       332 |                 struct lockdep_map dep_map
       332 |                   struct lock_class_key * key
       336 |                   struct lock_class *[2] class_cache
       344 |                   const char * name
       348 |                   u8 wait_type_outer
       349 |                   u8 wait_type_inner
       350 |                   u8 lock_type
       352 |                   int cpu
       356 |                   unsigned long ip
       360 |           int owned
       364 |           struct wait_queue_head wq
       364 |             struct spinlock lock
       364 |               union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       364 |                 struct raw_spinlock rlock
       364 |                   arch_spinlock_t raw_lock
       364 |                     volatile unsigned int slock
       368 |                   unsigned int magic
       372 |                   unsigned int owner_cpu
       376 |                   void * owner
       380 |                   struct lockdep_map dep_map
       380 |                     struct lock_class_key * key
       384 |                     struct lock_class *[2] class_cache
       392 |                     const char * name
       396 |                     u8 wait_type_outer
       397 |                     u8 wait_type_inner
       398 |                     u8 lock_type
       400 |                     int cpu
       404 |                     unsigned long ip
       364 |                 struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       364 |                   u8[16] __padding
       380 |                   struct lockdep_map dep_map
       380 |                     struct lock_class_key * key
       384 |                     struct lock_class *[2] class_cache
       392 |                     const char * name
       396 |                     u8 wait_type_outer
       397 |                     u8 wait_type_inner
       398 |                     u8 lock_type
       400 |                     int cpu
       404 |                     unsigned long ip
       408 |             struct list_head head
       408 |               struct list_head * next
       412 |               struct list_head * prev
       416 |           struct lockdep_map dep_map
       416 |             struct lock_class_key * key
       420 |             struct lock_class *[2] class_cache
       428 |             const char * name
       432 |             u8 wait_type_outer
       433 |             u8 wait_type_inner
       434 |             u8 lock_type
       436 |             int cpu
       440 |             unsigned long ip
       444 |         u32 sk_reserved_mem
       448 |         int sk_forward_alloc
       452 |         u32 sk_tsflags
       456 |         __u8[0] __cacheline_group_end__sock_write_rxtx
       456 |         __u8[0] __cacheline_group_begin__sock_write_tx
       456 |         int sk_write_pending
       460 |         atomic_t sk_omem_alloc
       460 |           int counter
       464 |         int sk_sndbuf
       468 |         int sk_wmem_queued
       472 |         struct refcount_struct sk_wmem_alloc
       472 |           atomic_t refs
       472 |             int counter
       476 |         unsigned long sk_tsq_flags
       480 |         union sock::(anonymous at ../include/net/sock.h:457:2) 
       480 |           struct sk_buff * sk_send_head
       480 |           struct rb_root tcp_rtx_queue
       480 |             struct rb_node * rb_node
       484 |         struct sk_buff_head sk_write_queue
       484 |           union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |             struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |               struct sk_buff * next
       488 |               struct sk_buff * prev
       484 |             struct sk_buff_list list
       484 |               struct sk_buff * next
       488 |               struct sk_buff * prev
       492 |           __u32 qlen
       496 |           struct spinlock lock
       496 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       496 |               struct raw_spinlock rlock
       496 |                 arch_spinlock_t raw_lock
       496 |                   volatile unsigned int slock
       500 |                 unsigned int magic
       504 |                 unsigned int owner_cpu
       508 |                 void * owner
       512 |                 struct lockdep_map dep_map
       512 |                   struct lock_class_key * key
       516 |                   struct lock_class *[2] class_cache
       524 |                   const char * name
       528 |                   u8 wait_type_outer
       529 |                   u8 wait_type_inner
       530 |                   u8 lock_type
       532 |                   int cpu
       536 |                   unsigned long ip
       496 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       496 |                 u8[16] __padding
       512 |                 struct lockdep_map dep_map
       512 |                   struct lock_class_key * key
       516 |                   struct lock_class *[2] class_cache
       524 |                   const char * name
       528 |                   u8 wait_type_outer
       529 |                   u8 wait_type_inner
       530 |                   u8 lock_type
       532 |                   int cpu
       536 |                   unsigned long ip
       540 |         u32 sk_dst_pending_confirm
       544 |         u32 sk_pacing_status
       548 |         struct page_frag sk_frag
       548 |           struct page * page
       552 |           __u16 offset
       554 |           __u16 size
       556 |         struct timer_list sk_timer
       556 |           struct hlist_node entry
       556 |             struct hlist_node * next
       560 |             struct hlist_node ** pprev
       564 |           unsigned long expires
       568 |           void (*)(struct timer_list *) function
       572 |           u32 flags
       576 |           struct lockdep_map lockdep_map
       576 |             struct lock_class_key * key
       580 |             struct lock_class *[2] class_cache
       588 |             const char * name
       592 |             u8 wait_type_outer
       593 |             u8 wait_type_inner
       594 |             u8 lock_type
       596 |             int cpu
       600 |             unsigned long ip
       604 |         unsigned long sk_pacing_rate
       608 |         atomic_t sk_zckey
       608 |           int counter
       612 |         atomic_t sk_tskey
       612 |           int counter
       616 |         __u8[0] __cacheline_group_end__sock_write_tx
       616 |         __u8[0] __cacheline_group_begin__sock_read_tx
       616 |         unsigned long sk_max_pacing_rate
       620 |         long sk_sndtimeo
       624 |         u32 sk_priority
       628 |         u32 sk_mark
       632 |         struct dst_entry * sk_dst_cache
       640 |         netdev_features_t sk_route_caps
       648 |         struct sk_buff *(*)(struct sock *, struct net_device *, struct sk_buff *) sk_validate_xmit_skb
       652 |         u16 sk_gso_type
       654 |         u16 sk_gso_max_segs
       656 |         unsigned int sk_gso_max_size
       660 |         gfp_t sk_allocation
       664 |         u32 sk_txhash
       668 |         u8 sk_pacing_shift
       669 |         bool sk_use_task_frag
       670 |         __u8[0] __cacheline_group_end__sock_read_tx
   670:0-0 |         u8 sk_gso_disabled
   670:1-1 |         u8 sk_kern_sock
   670:2-2 |         u8 sk_no_check_tx
   670:3-3 |         u8 sk_no_check_rx
       671 |         u8 sk_shutdown
       672 |         u16 sk_type
       674 |         u16 sk_protocol
       676 |         unsigned long sk_lingertime
       680 |         struct proto * sk_prot_creator
       684 |         rwlock_t sk_callback_lock
       684 |           arch_rwlock_t raw_lock
       684 |           unsigned int magic
       688 |           unsigned int owner_cpu
       692 |           void * owner
       696 |           struct lockdep_map dep_map
       696 |             struct lock_class_key * key
       700 |             struct lock_class *[2] class_cache
       708 |             const char * name
       712 |             u8 wait_type_outer
       713 |             u8 wait_type_inner
       714 |             u8 lock_type
       716 |             int cpu
       720 |             unsigned long ip
       724 |         int sk_err_soft
       728 |         u32 sk_ack_backlog
       732 |         u32 sk_max_ack_backlog
       736 |         kuid_t sk_uid
       736 |           uid_t val
       740 |         struct spinlock sk_peer_lock
       740 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       740 |             struct raw_spinlock rlock
       740 |               arch_spinlock_t raw_lock
       740 |                 volatile unsigned int slock
       744 |               unsigned int magic
       748 |               unsigned int owner_cpu
       752 |               void * owner
       756 |               struct lockdep_map dep_map
       756 |                 struct lock_class_key * key
       760 |                 struct lock_class *[2] class_cache
       768 |                 const char * name
       772 |                 u8 wait_type_outer
       773 |                 u8 wait_type_inner
       774 |                 u8 lock_type
       776 |                 int cpu
       780 |                 unsigned long ip
       740 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       740 |               u8[16] __padding
       756 |               struct lockdep_map dep_map
       756 |                 struct lock_class_key * key
       760 |                 struct lock_class *[2] class_cache
       768 |                 const char * name
       772 |                 u8 wait_type_outer
       773 |                 u8 wait_type_inner
       774 |                 u8 lock_type
       776 |                 int cpu
       780 |                 unsigned long ip
       784 |         int sk_bind_phc
       788 |         struct pid * sk_peer_pid
       792 |         const struct cred * sk_peer_cred
       800 |         ktime_t sk_stamp
       808 |         seqlock_t sk_stamp_seq
       808 |           struct seqcount_spinlock seqcount
       808 |             struct seqcount seqcount
       808 |               unsigned int sequence
       812 |               struct lockdep_map dep_map
       812 |                 struct lock_class_key * key
       816 |                 struct lock_class *[2] class_cache
       824 |                 const char * name
       828 |                 u8 wait_type_outer
       829 |                 u8 wait_type_inner
       830 |                 u8 lock_type
       832 |                 int cpu
       836 |                 unsigned long ip
       840 |             spinlock_t * lock
       844 |           struct spinlock lock
       844 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |               struct raw_spinlock rlock
       844 |                 arch_spinlock_t raw_lock
       844 |                   volatile unsigned int slock
       848 |                 unsigned int magic
       852 |                 unsigned int owner_cpu
       856 |                 void * owner
       860 |                 struct lockdep_map dep_map
       860 |                   struct lock_class_key * key
       864 |                   struct lock_class *[2] class_cache
       872 |                   const char * name
       876 |                   u8 wait_type_outer
       877 |                   u8 wait_type_inner
       878 |                   u8 lock_type
       880 |                   int cpu
       884 |                   unsigned long ip
       844 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |                 u8[16] __padding
       860 |                 struct lockdep_map dep_map
       860 |                   struct lock_class_key * key
       864 |                   struct lock_class *[2] class_cache
       872 |                   const char * name
       876 |                   u8 wait_type_outer
       877 |                   u8 wait_type_inner
       878 |                   u8 lock_type
       880 |                   int cpu
       884 |                   unsigned long ip
       888 |         int sk_disconnects
       892 |         u8 sk_txrehash
       893 |         u8 sk_clockid
   894:0-0 |         u8 sk_txtime_deadline_mode
   894:1-1 |         u8 sk_txtime_report_errors
   894:2-7 |         u8 sk_txtime_unused
       896 |         void * sk_user_data
       900 |         struct sock_cgroup_data sk_cgrp_data
       900 |         void (*)(struct sock *) sk_state_change
       904 |         void (*)(struct sock *) sk_write_space
       908 |         void (*)(struct sock *) sk_error_report
       912 |         int (*)(struct sock *, struct sk_buff *) sk_backlog_rcv
       916 |         void (*)(struct sock *) sk_destruct
       920 |         struct sock_reuseport * sk_reuseport_cb
       924 |         struct bpf_local_storage * sk_bpf_storage
       928 |         struct callback_head sk_rcu
       928 |           struct callback_head * next
       932 |           void (*)(struct callback_head *) func
       936 |         netns_tracker ns_tracker
       944 |       struct ipv6_pinfo * pinet6
       948 |       unsigned long inet_flags
       952 |       __be32 inet_saddr
       956 |       __s16 uc_ttl
       958 |       __be16 inet_sport
       960 |       struct ip_options_rcu * inet_opt
       964 |       atomic_t inet_id
       964 |         int counter
       968 |       __u8 tos
       969 |       __u8 min_ttl
       970 |       __u8 mc_ttl
       971 |       __u8 pmtudisc
       972 |       __u8 rcv_tos
       973 |       __u8 convert_csum
       976 |       int uc_index
       980 |       int mc_index
       984 |       __be32 mc_addr
       988 |       u32 local_port_range
       992 |       struct ip_mc_socklist * mc_list
      1000 |       struct inet_cork_full cork
      1000 |         struct inet_cork base
      1000 |           unsigned int flags
      1004 |           __be32 addr
      1008 |           struct ip_options * opt
      1012 |           unsigned int fragsize
      1016 |           int length
      1020 |           struct dst_entry * dst
      1024 |           u8 tx_flags
      1025 |           __u8 ttl
      1026 |           __s16 tos
      1028 |           char priority
      1030 |           __u16 gso_size
      1032 |           u64 transmit_time
      1040 |           u32 mark
      1048 |         struct flowi fl
      1048 |           union flowi::(unnamed at ../include/net/flow.h:155:2) u
      1048 |             struct flowi_common __fl_common
      1048 |               int flowic_oif
      1052 |               int flowic_iif
      1056 |               int flowic_l3mdev
      1060 |               __u32 flowic_mark
      1064 |               __u8 flowic_tos
      1065 |               __u8 flowic_scope
      1066 |               __u8 flowic_proto
      1067 |               __u8 flowic_flags
      1068 |               __u32 flowic_secid
      1072 |               kuid_t flowic_uid
      1072 |                 uid_t val
      1076 |               __u32 flowic_multipath_hash
      1080 |               struct flowi_tunnel flowic_tun_key
      1080 |                 __be64 tun_id
      1048 |             struct flowi4 ip4
      1048 |               struct flowi_common __fl_common
      1048 |                 int flowic_oif
      1052 |                 int flowic_iif
      1056 |                 int flowic_l3mdev
      1060 |                 __u32 flowic_mark
      1064 |                 __u8 flowic_tos
      1065 |                 __u8 flowic_scope
      1066 |                 __u8 flowic_proto
      1067 |                 __u8 flowic_flags
      1068 |                 __u32 flowic_secid
      1072 |                 kuid_t flowic_uid
      1072 |                   uid_t val
      1076 |                 __u32 flowic_multipath_hash
      1080 |                 struct flowi_tunnel flowic_tun_key
      1080 |                   __be64 tun_id
      1088 |               __be32 saddr
      1092 |               __be32 daddr
      1096 |               union flowi_uli uli
      1096 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1096 |                   __be16 dport
      1098 |                   __be16 sport
      1096 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1096 |                   __u8 type
      1097 |                   __u8 code
      1096 |                 __be32 gre_key
      1096 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1096 |                   __u8 type
      1048 |             struct flowi6 ip6
      1048 |               struct flowi_common __fl_common
      1048 |                 int flowic_oif
      1052 |                 int flowic_iif
      1056 |                 int flowic_l3mdev
      1060 |                 __u32 flowic_mark
      1064 |                 __u8 flowic_tos
      1065 |                 __u8 flowic_scope
      1066 |                 __u8 flowic_proto
      1067 |                 __u8 flowic_flags
      1068 |                 __u32 flowic_secid
      1072 |                 kuid_t flowic_uid
      1072 |                   uid_t val
      1076 |                 __u32 flowic_multipath_hash
      1080 |                 struct flowi_tunnel flowic_tun_key
      1080 |                   __be64 tun_id
      1088 |               struct in6_addr daddr
      1088 |                 union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1088 |                   __u8[16] u6_addr8
      1088 |                   __be16[8] u6_addr16
      1088 |                   __be32[4] u6_addr32
      1104 |               struct in6_addr saddr
      1104 |                 union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1104 |                   __u8[16] u6_addr8
      1104 |                   __be16[8] u6_addr16
      1104 |                   __be32[4] u6_addr32
      1120 |               __be32 flowlabel
      1124 |               union flowi_uli uli
      1124 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1124 |                   __be16 dport
      1126 |                   __be16 sport
      1124 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1124 |                   __u8 type
      1125 |                   __u8 code
      1124 |                 __be32 gre_key
      1124 |                 struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1124 |                   __u8 type
      1128 |               __u32 mp_hash
      1136 |     struct request_sock_queue icsk_accept_queue
      1136 |       struct spinlock rskq_lock
      1136 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1136 |           struct raw_spinlock rlock
      1136 |             arch_spinlock_t raw_lock
      1136 |               volatile unsigned int slock
      1140 |             unsigned int magic
      1144 |             unsigned int owner_cpu
      1148 |             void * owner
      1152 |             struct lockdep_map dep_map
      1152 |               struct lock_class_key * key
      1156 |               struct lock_class *[2] class_cache
      1164 |               const char * name
      1168 |               u8 wait_type_outer
      1169 |               u8 wait_type_inner
      1170 |               u8 lock_type
      1172 |               int cpu
      1176 |               unsigned long ip
      1136 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1136 |             u8[16] __padding
      1152 |             struct lockdep_map dep_map
      1152 |               struct lock_class_key * key
      1156 |               struct lock_class *[2] class_cache
      1164 |               const char * name
      1168 |               u8 wait_type_outer
      1169 |               u8 wait_type_inner
      1170 |               u8 lock_type
      1172 |               int cpu
      1176 |               unsigned long ip
      1180 |       u8 rskq_defer_accept
      1184 |       u32 synflood_warned
      1188 |       atomic_t qlen
      1188 |         int counter
      1192 |       atomic_t young
      1192 |         int counter
      1196 |       struct request_sock * rskq_accept_head
      1200 |       struct request_sock * rskq_accept_tail
      1204 |       struct fastopen_queue fastopenq
      1204 |         struct request_sock * rskq_rst_head
      1208 |         struct request_sock * rskq_rst_tail
      1212 |         struct spinlock lock
      1212 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1212 |             struct raw_spinlock rlock
      1212 |               arch_spinlock_t raw_lock
      1212 |                 volatile unsigned int slock
      1216 |               unsigned int magic
      1220 |               unsigned int owner_cpu
      1224 |               void * owner
      1228 |               struct lockdep_map dep_map
      1228 |                 struct lock_class_key * key
      1232 |                 struct lock_class *[2] class_cache
      1240 |                 const char * name
      1244 |                 u8 wait_type_outer
      1245 |                 u8 wait_type_inner
      1246 |                 u8 lock_type
      1248 |                 int cpu
      1252 |                 unsigned long ip
      1212 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1212 |               u8[16] __padding
      1228 |               struct lockdep_map dep_map
      1228 |                 struct lock_class_key * key
      1232 |                 struct lock_class *[2] class_cache
      1240 |                 const char * name
      1244 |                 u8 wait_type_outer
      1245 |                 u8 wait_type_inner
      1246 |                 u8 lock_type
      1248 |                 int cpu
      1252 |                 unsigned long ip
      1256 |         int qlen
      1260 |         int max_qlen
      1264 |         struct tcp_fastopen_context * ctx
      1268 |     struct inet_bind_bucket * icsk_bind_hash
      1272 |     struct inet_bind2_bucket * icsk_bind2_hash
      1276 |     unsigned long icsk_timeout
      1280 |     struct timer_list icsk_retransmit_timer
      1280 |       struct hlist_node entry
      1280 |         struct hlist_node * next
      1284 |         struct hlist_node ** pprev
      1288 |       unsigned long expires
      1292 |       void (*)(struct timer_list *) function
      1296 |       u32 flags
      1300 |       struct lockdep_map lockdep_map
      1300 |         struct lock_class_key * key
      1304 |         struct lock_class *[2] class_cache
      1312 |         const char * name
      1316 |         u8 wait_type_outer
      1317 |         u8 wait_type_inner
      1318 |         u8 lock_type
      1320 |         int cpu
      1324 |         unsigned long ip
      1328 |     struct timer_list icsk_delack_timer
      1328 |       struct hlist_node entry
      1328 |         struct hlist_node * next
      1332 |         struct hlist_node ** pprev
      1336 |       unsigned long expires
      1340 |       void (*)(struct timer_list *) function
      1344 |       u32 flags
      1348 |       struct lockdep_map lockdep_map
      1348 |         struct lock_class_key * key
      1352 |         struct lock_class *[2] class_cache
      1360 |         const char * name
      1364 |         u8 wait_type_outer
      1365 |         u8 wait_type_inner
      1366 |         u8 lock_type
      1368 |         int cpu
      1372 |         unsigned long ip
      1376 |     __u32 icsk_rto
      1380 |     __u32 icsk_rto_min
      1384 |     __u32 icsk_delack_max
      1388 |     __u32 icsk_pmtu_cookie
      1392 |     const struct tcp_congestion_ops * icsk_ca_ops
      1396 |     const struct inet_connection_sock_af_ops * icsk_af_ops
      1400 |     const struct tcp_ulp_ops * icsk_ulp_ops
      1404 |     void * icsk_ulp_data
      1408 |     void (*)(struct sock *, u32) icsk_clean_acked
      1412 |     unsigned int (*)(struct sock *, u32) icsk_sync_mss
  1416:0-4 |     __u8 icsk_ca_state
  1416:5-5 |     __u8 icsk_ca_initialized
  1416:6-6 |     __u8 icsk_ca_setsockopt
  1416:7-7 |     __u8 icsk_ca_dst_locked
      1417 |     __u8 icsk_retransmits
      1418 |     __u8 icsk_pending
      1419 |     __u8 icsk_backoff
      1420 |     __u8 icsk_syn_retries
      1421 |     __u8 icsk_probes_out
      1422 |     __u16 icsk_ext_hdr_len
      1424 |     struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:111:2) icsk_ack
      1424 |       __u8 pending
      1425 |       __u8 quick
      1426 |       __u8 pingpong
      1427 |       __u8 retry
  1428:0-7 |       __u32 ato
 1429:0-19 |       __u32 lrcv_flowlabel
  1431:4-7 |       __u32 unused
      1432 |       unsigned long timeout
      1436 |       __u32 lrcvtime
      1440 |       __u16 last_seg_size
      1442 |       __u16 rcv_mss
      1444 |     struct inet_connection_sock::(unnamed at ../include/net/inet_connection_sock.h:125:2) icsk_mtup
      1444 |       int search_high
      1448 |       int search_low
 1452:0-30 |       u32 probe_size
  1455:7-7 |       u32 enabled
      1456 |       u32 probe_timestamp
      1460 |     u32 icsk_probes_tstamp
      1464 |     u32 icsk_user_timeout
      1472 |     u64[13] icsk_ca_priv
      1576 |   __u8[0] __cacheline_group_begin__tcp_sock_read_tx
      1576 |   u32 max_window
      1580 |   u32 rcv_ssthresh
      1584 |   u32 reordering
      1588 |   u32 notsent_lowat
      1592 |   u16 gso_segs
      1596 |   struct sk_buff * lost_skb_hint
      1600 |   struct sk_buff * retransmit_skb_hint
      1604 |   __u8[0] __cacheline_group_end__tcp_sock_read_tx
      1604 |   __u8[0] __cacheline_group_begin__tcp_sock_read_txrx
      1604 |   u32 tsoffset
      1608 |   u32 snd_wnd
      1612 |   u32 mss_cache
      1616 |   u32 snd_cwnd
      1620 |   u32 prr_out
      1624 |   u32 lost_out
      1628 |   u32 sacked_out
      1632 |   u16 tcp_header_len
      1634 |   u8 scaling_ratio
  1635:0-1 |   u8 chrono_type
  1635:2-2 |   u8 repair
  1635:3-3 |   u8 tcp_usec_ts
  1635:4-4 |   u8 is_sack_reneg
  1635:5-5 |   u8 is_cwnd_limited
      1636 |   __u8[0] __cacheline_group_end__tcp_sock_read_txrx
      1636 |   __u8[0] __cacheline_group_begin__tcp_sock_read_rx
      1636 |   u32 copied_seq
      1640 |   u32 rcv_tstamp
      1644 |   u32 snd_wl1
      1648 |   u32 tlp_high_seq
      1652 |   u32 rttvar_us
      1656 |   u32 retrans_out
      1660 |   u16 advmss
      1662 |   u16 urg_data
      1664 |   u32 lost
      1668 |   struct minmax rtt_min
      1668 |     struct minmax_sample[3] s
      1692 |   struct rb_root out_of_order_queue
      1692 |     struct rb_node * rb_node
      1696 |   u32 snd_ssthresh
  1700:0-0 |   u8 recvmsg_inq
      1701 |   __u8[0] __cacheline_group_end__tcp_sock_read_rx
      1728 |   __u8[0] __cacheline_group_begin__tcp_sock_write_tx
      1728 |   u32 segs_out
      1732 |   u32 data_segs_out
      1736 |   u64 bytes_sent
      1744 |   u32 snd_sml
      1748 |   u32 chrono_start
      1752 |   u32[3] chrono_stat
      1764 |   u32 write_seq
      1768 |   u32 pushed_seq
      1772 |   u32 lsndtime
      1776 |   u32 mdev_us
      1780 |   u32 rtt_seq
      1784 |   u64 tcp_wstamp_ns
      1792 |   struct list_head tsorted_sent_queue
      1792 |     struct list_head * next
      1796 |     struct list_head * prev
      1800 |   struct sk_buff * highest_sack
      1804 |   u8 ecn_flags
      1805 |   __u8[0] __cacheline_group_end__tcp_sock_write_tx
      1805 |   __u8[0] __cacheline_group_begin__tcp_sock_write_txrx
      1808 |   __be32 pred_flags
      1816 |   u64 tcp_clock_cache
      1824 |   u64 tcp_mstamp
      1832 |   u32 rcv_nxt
      1836 |   u32 snd_nxt
      1840 |   u32 snd_una
      1844 |   u32 window_clamp
      1848 |   u32 srtt_us
      1852 |   u32 packets_out
      1856 |   u32 snd_up
      1860 |   u32 delivered
      1864 |   u32 delivered_ce
      1868 |   u32 app_limited
      1872 |   u32 rcv_wnd
      1876 |   struct tcp_options_received rx_opt
      1876 |     int ts_recent_stamp
      1880 |     u32 ts_recent
      1884 |     u32 rcv_tsval
      1888 |     u32 rcv_tsecr
  1892:0-0 |     u16 saw_tstamp
  1892:1-1 |     u16 tstamp_ok
  1892:2-2 |     u16 dsack
  1892:3-3 |     u16 wscale_ok
  1892:4-6 |     u16 sack_ok
  1892:7-7 |     u16 smc_ok
  1893:0-3 |     u16 snd_wscale
  1893:4-7 |     u16 rcv_wscale
  1894:0-0 |     u8 saw_unknown
  1894:1-7 |     u8 unused
      1895 |     u8 num_sacks
      1896 |     u16 user_mss
      1898 |     u16 mss_clamp
  1900:0-3 |   u8 nonagle
  1900:4-4 |   u8 rate_app_limited
      1901 |   __u8[0] __cacheline_group_end__tcp_sock_write_txrx
      1904 |   __u8[0] __cacheline_group_begin__tcp_sock_write_rx
      1904 |   u64 bytes_received
      1912 |   u32 segs_in
      1916 |   u32 data_segs_in
      1920 |   u32 rcv_wup
      1924 |   u32 max_packets_out
      1928 |   u32 cwnd_usage_seq
      1932 |   u32 rate_delivered
      1936 |   u32 rate_interval_us
      1940 |   u32 rcv_rtt_last_tsecr
      1944 |   u64 first_tx_mstamp
      1952 |   u64 delivered_mstamp
      1960 |   u64 bytes_acked
      1968 |   struct tcp_sock::(unnamed at ../include/linux/tcp.h:332:2) rcv_rtt_est
      1968 |     u32 rtt_us
      1972 |     u32 seq
      1976 |     u64 time
      1984 |   struct tcp_sock::(unnamed at ../include/linux/tcp.h:338:2) rcvq_space
      1984 |     u32 space
      1988 |     u32 seq
      1992 |     u64 time
      2000 |   __u8[0] __cacheline_group_end__tcp_sock_write_rx
      2000 |   u32 dsack_dups
      2004 |   u32 compressed_ack_rcv_nxt
      2008 |   struct list_head tsq_node
      2008 |     struct list_head * next
      2012 |     struct list_head * prev
      2016 |   struct tcp_rack rack
      2016 |     u64 mstamp
      2024 |     u32 rtt_us
      2028 |     u32 end_seq
      2032 |     u32 last_delivered
      2036 |     u8 reo_wnd_steps
  2037:0-4 |     u8 reo_wnd_persist
  2037:5-5 |     u8 dsack_seen
  2037:6-6 |     u8 advanced
      2040 |   u8 compressed_ack
  2041:0-1 |   u8 dup_ack_counter
  2041:2-2 |   u8 tlp_retrans
  2041:3-7 |   u8 unused
  2042:0-0 |   u8 thin_lto
  2042:1-1 |   u8 fastopen_connect
  2042:2-2 |   u8 fastopen_no_cookie
  2042:3-4 |   u8 fastopen_client_fail
  2042:5-5 |   u8 frto
      2043 |   u8 repair_queue
  2044:0-1 |   u8 save_syn
  2044:2-2 |   u8 syn_data
  2044:3-3 |   u8 syn_fastopen
  2044:4-4 |   u8 syn_fastopen_exp
  2044:5-5 |   u8 syn_fastopen_ch
  2044:6-6 |   u8 syn_data_acked
      2045 |   u8 keepalive_probes
      2048 |   u32 tcp_tx_delay
      2052 |   u32 mdev_max_us
      2056 |   u32 reord_seen
      2060 |   u32 snd_cwnd_cnt
      2064 |   u32 snd_cwnd_clamp
      2068 |   u32 snd_cwnd_used
      2072 |   u32 snd_cwnd_stamp
      2076 |   u32 prior_cwnd
      2080 |   u32 prr_delivered
      2084 |   u32 last_oow_ack_time
      2088 |   struct hrtimer pacing_timer
      2088 |     struct timerqueue_node node
      2088 |       struct rb_node node
      2088 |         unsigned long __rb_parent_color
      2092 |         struct rb_node * rb_right
      2096 |         struct rb_node * rb_left
      2104 |       ktime_t expires
      2112 |     ktime_t _softexpires
      2120 |     enum hrtimer_restart (*)(struct hrtimer *) function
      2124 |     struct hrtimer_clock_base * base
      2128 |     u8 state
      2129 |     u8 is_rel
      2130 |     u8 is_soft
      2131 |     u8 is_hard
      2136 |   struct hrtimer compressed_ack_timer
      2136 |     struct timerqueue_node node
      2136 |       struct rb_node node
      2136 |         unsigned long __rb_parent_color
      2140 |         struct rb_node * rb_right
      2144 |         struct rb_node * rb_left
      2152 |       ktime_t expires
      2160 |     ktime_t _softexpires
      2168 |     enum hrtimer_restart (*)(struct hrtimer *) function
      2172 |     struct hrtimer_clock_base * base
      2176 |     u8 state
      2177 |     u8 is_rel
      2178 |     u8 is_soft
      2179 |     u8 is_hard
      2184 |   struct sk_buff * ooo_last_skb
      2188 |   struct tcp_sack_block[1] duplicate_sack
      2196 |   struct tcp_sack_block[4] selective_acks
      2228 |   struct tcp_sack_block[4] recv_sack_cache
      2260 |   int lost_cnt_hint
      2264 |   u32 prior_ssthresh
      2268 |   u32 high_seq
      2272 |   u32 retrans_stamp
      2276 |   u32 undo_marker
      2280 |   int undo_retrans
      2288 |   u64 bytes_retrans
      2296 |   u32 total_retrans
      2300 |   u32 rto_stamp
      2304 |   u16 total_rto
      2306 |   u16 total_rto_recoveries
      2308 |   u32 total_rto_time
      2312 |   u32 urg_seq
      2316 |   unsigned int keepalive_time
      2320 |   unsigned int keepalive_intvl
      2324 |   int linger2
      2328 |   u8 bpf_sock_ops_cb_flags
  2329:0-0 |   u8 bpf_chg_cc_inprogress
      2330 |   u16 timeout_rehash
      2332 |   u32 rcv_ooopack
      2336 |   struct tcp_sock::(unnamed at ../include/linux/tcp.h:468:2) mtu_probe
      2336 |     u32 probe_seq_start
      2340 |     u32 probe_seq_end
      2344 |   u32 plb_rehash
      2348 |   u32 mtu_info
      2352 |   bool syn_smc
      2356 |   bool (*)(const struct sock *) smc_hs_congested
      2360 |   struct tcp_fastopen_request * fastopen_req
      2364 |   struct request_sock * fastopen_rsk
      2368 |   struct saved_syn * saved_syn
           | [sizeof=2400, align=32]

*** Dumping AST Record Layout
         0 | struct udp_sock
         0 |   struct inet_sock inet
         0 |     struct sock sk
         0 |       struct sock_common __sk_common
         0 |         union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |           __addrpair skc_addrpair
         0 |           struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |             __be32 skc_daddr
         4 |             __be32 skc_rcv_saddr
         8 |         union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |           unsigned int skc_hash
         8 |           __u16[2] skc_u16hashes
        12 |         union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |           __portpair skc_portpair
        12 |           struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |             __be16 skc_dport
        14 |             __u16 skc_num
        16 |         unsigned short skc_family
        18 |         volatile unsigned char skc_state
    19:0-3 |         unsigned char skc_reuse
    19:4-4 |         unsigned char skc_reuseport
    19:5-5 |         unsigned char skc_ipv6only
    19:6-6 |         unsigned char skc_net_refcnt
        20 |         int skc_bound_dev_if
        24 |         union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |           struct hlist_node skc_bind_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        24 |           struct hlist_node skc_portaddr_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        32 |         struct proto * skc_prot
        36 |         possible_net_t skc_net
        36 |           struct net * net
        40 |         struct in6_addr skc_v6_daddr
        40 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |             __u8[16] u6_addr8
        40 |             __be16[8] u6_addr16
        40 |             __be32[4] u6_addr32
        56 |         struct in6_addr skc_v6_rcv_saddr
        56 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |             __u8[16] u6_addr8
        56 |             __be16[8] u6_addr16
        56 |             __be32[4] u6_addr32
        72 |         atomic64_t skc_cookie
        72 |           s64 counter
        80 |         union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |           unsigned long skc_flags
        80 |           struct sock * skc_listener
        80 |           struct inet_timewait_death_row * skc_tw_dr
        84 |         int[0] skc_dontcopy_begin
        84 |         union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |           struct hlist_node skc_node
        84 |             struct hlist_node * next
        88 |             struct hlist_node ** pprev
        84 |           struct hlist_nulls_node skc_nulls_node
        84 |             struct hlist_nulls_node * next
        88 |             struct hlist_nulls_node ** pprev
        92 |         unsigned short skc_tx_queue_mapping
        94 |         unsigned short skc_rx_queue_mapping
        96 |         union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |           int skc_incoming_cpu
        96 |           u32 skc_rcv_wnd
        96 |           u32 skc_tw_rcv_nxt
       100 |         struct refcount_struct skc_refcnt
       100 |           atomic_t refs
       100 |             int counter
       104 |         int[0] skc_dontcopy_end
       104 |         union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |           u32 skc_rxhash
       104 |           u32 skc_window_clamp
       104 |           u32 skc_tw_snd_nxt
       112 |       __u8[0] __cacheline_group_begin__sock_write_rx
       112 |       atomic_t sk_drops
       112 |         int counter
       116 |       __s32 sk_peek_off
       120 |       struct sk_buff_head sk_error_queue
       120 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       120 |             struct sk_buff * next
       124 |             struct sk_buff * prev
       120 |           struct sk_buff_list list
       120 |             struct sk_buff * next
       124 |             struct sk_buff * prev
       128 |         __u32 qlen
       132 |         struct spinlock lock
       132 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       132 |             struct raw_spinlock rlock
       132 |               arch_spinlock_t raw_lock
       132 |                 volatile unsigned int slock
       136 |               unsigned int magic
       140 |               unsigned int owner_cpu
       144 |               void * owner
       148 |               struct lockdep_map dep_map
       148 |                 struct lock_class_key * key
       152 |                 struct lock_class *[2] class_cache
       160 |                 const char * name
       164 |                 u8 wait_type_outer
       165 |                 u8 wait_type_inner
       166 |                 u8 lock_type
       168 |                 int cpu
       172 |                 unsigned long ip
       132 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       132 |               u8[16] __padding
       148 |               struct lockdep_map dep_map
       148 |                 struct lock_class_key * key
       152 |                 struct lock_class *[2] class_cache
       160 |                 const char * name
       164 |                 u8 wait_type_outer
       165 |                 u8 wait_type_inner
       166 |                 u8 lock_type
       168 |                 int cpu
       172 |                 unsigned long ip
       176 |       struct sk_buff_head sk_receive_queue
       176 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       176 |             struct sk_buff * next
       180 |             struct sk_buff * prev
       176 |           struct sk_buff_list list
       176 |             struct sk_buff * next
       180 |             struct sk_buff * prev
       184 |         __u32 qlen
       188 |         struct spinlock lock
       188 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       188 |             struct raw_spinlock rlock
       188 |               arch_spinlock_t raw_lock
       188 |                 volatile unsigned int slock
       192 |               unsigned int magic
       196 |               unsigned int owner_cpu
       200 |               void * owner
       204 |               struct lockdep_map dep_map
       204 |                 struct lock_class_key * key
       208 |                 struct lock_class *[2] class_cache
       216 |                 const char * name
       220 |                 u8 wait_type_outer
       221 |                 u8 wait_type_inner
       222 |                 u8 lock_type
       224 |                 int cpu
       228 |                 unsigned long ip
       188 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       188 |               u8[16] __padding
       204 |               struct lockdep_map dep_map
       204 |                 struct lock_class_key * key
       208 |                 struct lock_class *[2] class_cache
       216 |                 const char * name
       220 |                 u8 wait_type_outer
       221 |                 u8 wait_type_inner
       222 |                 u8 lock_type
       224 |                 int cpu
       228 |                 unsigned long ip
       232 |       struct sock::(unnamed at ../include/net/sock.h:395:2) sk_backlog
       232 |         atomic_t rmem_alloc
       232 |           int counter
       236 |         int len
       240 |         struct sk_buff * head
       244 |         struct sk_buff * tail
       248 |       __u8[0] __cacheline_group_end__sock_write_rx
       248 |       __u8[0] __cacheline_group_begin__sock_read_rx
       248 |       struct dst_entry * sk_rx_dst
       252 |       int sk_rx_dst_ifindex
       256 |       u32 sk_rx_dst_cookie
       260 |       unsigned int sk_ll_usec
       264 |       unsigned int sk_napi_id
       268 |       u16 sk_busy_poll_budget
       270 |       u8 sk_prefer_busy_poll
       271 |       u8 sk_userlocks
       272 |       int sk_rcvbuf
       276 |       struct sk_filter * sk_filter
       280 |       union sock::(anonymous at ../include/net/sock.h:421:2) 
       280 |         struct socket_wq * sk_wq
       280 |         struct socket_wq * sk_wq_raw
       284 |       void (*)(struct sock *) sk_data_ready
       288 |       long sk_rcvtimeo
       292 |       int sk_rcvlowat
       296 |       __u8[0] __cacheline_group_end__sock_read_rx
       296 |       __u8[0] __cacheline_group_begin__sock_read_rxtx
       296 |       int sk_err
       300 |       struct socket * sk_socket
       304 |       struct mem_cgroup * sk_memcg
       308 |       struct xfrm_policy *[2] sk_policy
       316 |       __u8[0] __cacheline_group_end__sock_read_rxtx
       316 |       __u8[0] __cacheline_group_begin__sock_write_rxtx
       316 |       socket_lock_t sk_lock
       316 |         struct spinlock slock
       316 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       316 |             struct raw_spinlock rlock
       316 |               arch_spinlock_t raw_lock
       316 |                 volatile unsigned int slock
       320 |               unsigned int magic
       324 |               unsigned int owner_cpu
       328 |               void * owner
       332 |               struct lockdep_map dep_map
       332 |                 struct lock_class_key * key
       336 |                 struct lock_class *[2] class_cache
       344 |                 const char * name
       348 |                 u8 wait_type_outer
       349 |                 u8 wait_type_inner
       350 |                 u8 lock_type
       352 |                 int cpu
       356 |                 unsigned long ip
       316 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       316 |               u8[16] __padding
       332 |               struct lockdep_map dep_map
       332 |                 struct lock_class_key * key
       336 |                 struct lock_class *[2] class_cache
       344 |                 const char * name
       348 |                 u8 wait_type_outer
       349 |                 u8 wait_type_inner
       350 |                 u8 lock_type
       352 |                 int cpu
       356 |                 unsigned long ip
       360 |         int owned
       364 |         struct wait_queue_head wq
       364 |           struct spinlock lock
       364 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       364 |               struct raw_spinlock rlock
       364 |                 arch_spinlock_t raw_lock
       364 |                   volatile unsigned int slock
       368 |                 unsigned int magic
       372 |                 unsigned int owner_cpu
       376 |                 void * owner
       380 |                 struct lockdep_map dep_map
       380 |                   struct lock_class_key * key
       384 |                   struct lock_class *[2] class_cache
       392 |                   const char * name
       396 |                   u8 wait_type_outer
       397 |                   u8 wait_type_inner
       398 |                   u8 lock_type
       400 |                   int cpu
       404 |                   unsigned long ip
       364 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       364 |                 u8[16] __padding
       380 |                 struct lockdep_map dep_map
       380 |                   struct lock_class_key * key
       384 |                   struct lock_class *[2] class_cache
       392 |                   const char * name
       396 |                   u8 wait_type_outer
       397 |                   u8 wait_type_inner
       398 |                   u8 lock_type
       400 |                   int cpu
       404 |                   unsigned long ip
       408 |           struct list_head head
       408 |             struct list_head * next
       412 |             struct list_head * prev
       416 |         struct lockdep_map dep_map
       416 |           struct lock_class_key * key
       420 |           struct lock_class *[2] class_cache
       428 |           const char * name
       432 |           u8 wait_type_outer
       433 |           u8 wait_type_inner
       434 |           u8 lock_type
       436 |           int cpu
       440 |           unsigned long ip
       444 |       u32 sk_reserved_mem
       448 |       int sk_forward_alloc
       452 |       u32 sk_tsflags
       456 |       __u8[0] __cacheline_group_end__sock_write_rxtx
       456 |       __u8[0] __cacheline_group_begin__sock_write_tx
       456 |       int sk_write_pending
       460 |       atomic_t sk_omem_alloc
       460 |         int counter
       464 |       int sk_sndbuf
       468 |       int sk_wmem_queued
       472 |       struct refcount_struct sk_wmem_alloc
       472 |         atomic_t refs
       472 |           int counter
       476 |       unsigned long sk_tsq_flags
       480 |       union sock::(anonymous at ../include/net/sock.h:457:2) 
       480 |         struct sk_buff * sk_send_head
       480 |         struct rb_root tcp_rtx_queue
       480 |           struct rb_node * rb_node
       484 |       struct sk_buff_head sk_write_queue
       484 |         union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |           struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       484 |             struct sk_buff * next
       488 |             struct sk_buff * prev
       484 |           struct sk_buff_list list
       484 |             struct sk_buff * next
       488 |             struct sk_buff * prev
       492 |         __u32 qlen
       496 |         struct spinlock lock
       496 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       496 |             struct raw_spinlock rlock
       496 |               arch_spinlock_t raw_lock
       496 |                 volatile unsigned int slock
       500 |               unsigned int magic
       504 |               unsigned int owner_cpu
       508 |               void * owner
       512 |               struct lockdep_map dep_map
       512 |                 struct lock_class_key * key
       516 |                 struct lock_class *[2] class_cache
       524 |                 const char * name
       528 |                 u8 wait_type_outer
       529 |                 u8 wait_type_inner
       530 |                 u8 lock_type
       532 |                 int cpu
       536 |                 unsigned long ip
       496 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       496 |               u8[16] __padding
       512 |               struct lockdep_map dep_map
       512 |                 struct lock_class_key * key
       516 |                 struct lock_class *[2] class_cache
       524 |                 const char * name
       528 |                 u8 wait_type_outer
       529 |                 u8 wait_type_inner
       530 |                 u8 lock_type
       532 |                 int cpu
       536 |                 unsigned long ip
       540 |       u32 sk_dst_pending_confirm
       544 |       u32 sk_pacing_status
       548 |       struct page_frag sk_frag
       548 |         struct page * page
       552 |         __u16 offset
       554 |         __u16 size
       556 |       struct timer_list sk_timer
       556 |         struct hlist_node entry
       556 |           struct hlist_node * next
       560 |           struct hlist_node ** pprev
       564 |         unsigned long expires
       568 |         void (*)(struct timer_list *) function
       572 |         u32 flags
       576 |         struct lockdep_map lockdep_map
       576 |           struct lock_class_key * key
       580 |           struct lock_class *[2] class_cache
       588 |           const char * name
       592 |           u8 wait_type_outer
       593 |           u8 wait_type_inner
       594 |           u8 lock_type
       596 |           int cpu
       600 |           unsigned long ip
       604 |       unsigned long sk_pacing_rate
       608 |       atomic_t sk_zckey
       608 |         int counter
       612 |       atomic_t sk_tskey
       612 |         int counter
       616 |       __u8[0] __cacheline_group_end__sock_write_tx
       616 |       __u8[0] __cacheline_group_begin__sock_read_tx
       616 |       unsigned long sk_max_pacing_rate
       620 |       long sk_sndtimeo
       624 |       u32 sk_priority
       628 |       u32 sk_mark
       632 |       struct dst_entry * sk_dst_cache
       640 |       netdev_features_t sk_route_caps
       648 |       struct sk_buff *(*)(struct sock *, struct net_device *, struct sk_buff *) sk_validate_xmit_skb
       652 |       u16 sk_gso_type
       654 |       u16 sk_gso_max_segs
       656 |       unsigned int sk_gso_max_size
       660 |       gfp_t sk_allocation
       664 |       u32 sk_txhash
       668 |       u8 sk_pacing_shift
       669 |       bool sk_use_task_frag
       670 |       __u8[0] __cacheline_group_end__sock_read_tx
   670:0-0 |       u8 sk_gso_disabled
   670:1-1 |       u8 sk_kern_sock
   670:2-2 |       u8 sk_no_check_tx
   670:3-3 |       u8 sk_no_check_rx
       671 |       u8 sk_shutdown
       672 |       u16 sk_type
       674 |       u16 sk_protocol
       676 |       unsigned long sk_lingertime
       680 |       struct proto * sk_prot_creator
       684 |       rwlock_t sk_callback_lock
       684 |         arch_rwlock_t raw_lock
       684 |         unsigned int magic
       688 |         unsigned int owner_cpu
       692 |         void * owner
       696 |         struct lockdep_map dep_map
       696 |           struct lock_class_key * key
       700 |           struct lock_class *[2] class_cache
       708 |           const char * name
       712 |           u8 wait_type_outer
       713 |           u8 wait_type_inner
       714 |           u8 lock_type
       716 |           int cpu
       720 |           unsigned long ip
       724 |       int sk_err_soft
       728 |       u32 sk_ack_backlog
       732 |       u32 sk_max_ack_backlog
       736 |       kuid_t sk_uid
       736 |         uid_t val
       740 |       struct spinlock sk_peer_lock
       740 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       740 |           struct raw_spinlock rlock
       740 |             arch_spinlock_t raw_lock
       740 |               volatile unsigned int slock
       744 |             unsigned int magic
       748 |             unsigned int owner_cpu
       752 |             void * owner
       756 |             struct lockdep_map dep_map
       756 |               struct lock_class_key * key
       760 |               struct lock_class *[2] class_cache
       768 |               const char * name
       772 |               u8 wait_type_outer
       773 |               u8 wait_type_inner
       774 |               u8 lock_type
       776 |               int cpu
       780 |               unsigned long ip
       740 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       740 |             u8[16] __padding
       756 |             struct lockdep_map dep_map
       756 |               struct lock_class_key * key
       760 |               struct lock_class *[2] class_cache
       768 |               const char * name
       772 |               u8 wait_type_outer
       773 |               u8 wait_type_inner
       774 |               u8 lock_type
       776 |               int cpu
       780 |               unsigned long ip
       784 |       int sk_bind_phc
       788 |       struct pid * sk_peer_pid
       792 |       const struct cred * sk_peer_cred
       800 |       ktime_t sk_stamp
       808 |       seqlock_t sk_stamp_seq
       808 |         struct seqcount_spinlock seqcount
       808 |           struct seqcount seqcount
       808 |             unsigned int sequence
       812 |             struct lockdep_map dep_map
       812 |               struct lock_class_key * key
       816 |               struct lock_class *[2] class_cache
       824 |               const char * name
       828 |               u8 wait_type_outer
       829 |               u8 wait_type_inner
       830 |               u8 lock_type
       832 |               int cpu
       836 |               unsigned long ip
       840 |           spinlock_t * lock
       844 |         struct spinlock lock
       844 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |             struct raw_spinlock rlock
       844 |               arch_spinlock_t raw_lock
       844 |                 volatile unsigned int slock
       848 |               unsigned int magic
       852 |               unsigned int owner_cpu
       856 |               void * owner
       860 |               struct lockdep_map dep_map
       860 |                 struct lock_class_key * key
       864 |                 struct lock_class *[2] class_cache
       872 |                 const char * name
       876 |                 u8 wait_type_outer
       877 |                 u8 wait_type_inner
       878 |                 u8 lock_type
       880 |                 int cpu
       884 |                 unsigned long ip
       844 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |               u8[16] __padding
       860 |               struct lockdep_map dep_map
       860 |                 struct lock_class_key * key
       864 |                 struct lock_class *[2] class_cache
       872 |                 const char * name
       876 |                 u8 wait_type_outer
       877 |                 u8 wait_type_inner
       878 |                 u8 lock_type
       880 |                 int cpu
       884 |                 unsigned long ip
       888 |       int sk_disconnects
       892 |       u8 sk_txrehash
       893 |       u8 sk_clockid
   894:0-0 |       u8 sk_txtime_deadline_mode
   894:1-1 |       u8 sk_txtime_report_errors
   894:2-7 |       u8 sk_txtime_unused
       896 |       void * sk_user_data
       900 |       struct sock_cgroup_data sk_cgrp_data
       900 |       void (*)(struct sock *) sk_state_change
       904 |       void (*)(struct sock *) sk_write_space
       908 |       void (*)(struct sock *) sk_error_report
       912 |       int (*)(struct sock *, struct sk_buff *) sk_backlog_rcv
       916 |       void (*)(struct sock *) sk_destruct
       920 |       struct sock_reuseport * sk_reuseport_cb
       924 |       struct bpf_local_storage * sk_bpf_storage
       928 |       struct callback_head sk_rcu
       928 |         struct callback_head * next
       932 |         void (*)(struct callback_head *) func
       936 |       netns_tracker ns_tracker
       944 |     struct ipv6_pinfo * pinet6
       948 |     unsigned long inet_flags
       952 |     __be32 inet_saddr
       956 |     __s16 uc_ttl
       958 |     __be16 inet_sport
       960 |     struct ip_options_rcu * inet_opt
       964 |     atomic_t inet_id
       964 |       int counter
       968 |     __u8 tos
       969 |     __u8 min_ttl
       970 |     __u8 mc_ttl
       971 |     __u8 pmtudisc
       972 |     __u8 rcv_tos
       973 |     __u8 convert_csum
       976 |     int uc_index
       980 |     int mc_index
       984 |     __be32 mc_addr
       988 |     u32 local_port_range
       992 |     struct ip_mc_socklist * mc_list
      1000 |     struct inet_cork_full cork
      1000 |       struct inet_cork base
      1000 |         unsigned int flags
      1004 |         __be32 addr
      1008 |         struct ip_options * opt
      1012 |         unsigned int fragsize
      1016 |         int length
      1020 |         struct dst_entry * dst
      1024 |         u8 tx_flags
      1025 |         __u8 ttl
      1026 |         __s16 tos
      1028 |         char priority
      1030 |         __u16 gso_size
      1032 |         u64 transmit_time
      1040 |         u32 mark
      1048 |       struct flowi fl
      1048 |         union flowi::(unnamed at ../include/net/flow.h:155:2) u
      1048 |           struct flowi_common __fl_common
      1048 |             int flowic_oif
      1052 |             int flowic_iif
      1056 |             int flowic_l3mdev
      1060 |             __u32 flowic_mark
      1064 |             __u8 flowic_tos
      1065 |             __u8 flowic_scope
      1066 |             __u8 flowic_proto
      1067 |             __u8 flowic_flags
      1068 |             __u32 flowic_secid
      1072 |             kuid_t flowic_uid
      1072 |               uid_t val
      1076 |             __u32 flowic_multipath_hash
      1080 |             struct flowi_tunnel flowic_tun_key
      1080 |               __be64 tun_id
      1048 |           struct flowi4 ip4
      1048 |             struct flowi_common __fl_common
      1048 |               int flowic_oif
      1052 |               int flowic_iif
      1056 |               int flowic_l3mdev
      1060 |               __u32 flowic_mark
      1064 |               __u8 flowic_tos
      1065 |               __u8 flowic_scope
      1066 |               __u8 flowic_proto
      1067 |               __u8 flowic_flags
      1068 |               __u32 flowic_secid
      1072 |               kuid_t flowic_uid
      1072 |                 uid_t val
      1076 |               __u32 flowic_multipath_hash
      1080 |               struct flowi_tunnel flowic_tun_key
      1080 |                 __be64 tun_id
      1088 |             __be32 saddr
      1092 |             __be32 daddr
      1096 |             union flowi_uli uli
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1096 |                 __be16 dport
      1098 |                 __be16 sport
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1096 |                 __u8 type
      1097 |                 __u8 code
      1096 |               __be32 gre_key
      1096 |               struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1096 |                 __u8 type
      1048 |           struct flowi6 ip6
      1048 |             struct flowi_common __fl_common
      1048 |               int flowic_oif
      1052 |               int flowic_iif
      1056 |               int flowic_l3mdev
      1060 |               __u32 flowic_mark
      1064 |               __u8 flowic_tos
      1065 |               __u8 flowic_scope
      1066 |               __u8 flowic_proto
      1067 |               __u8 flowic_flags
      1068 |               __u32 flowic_secid
      1072 |               kuid_t flowic_uid
      1072 |                 uid_t val
      1076 |               __u32 flowic_multipath_hash
      1080 |               struct flowi_tunnel flowic_tun_key
      1080 |                 __be64 tun_id
      1088 |             struct in6_addr daddr
      1088 |               union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1088 |                 __u8[16] u6_addr8
      1088 |                 __be16[8] u6_addr16
      1088 |                 __be32[4] u6_addr32
      1104 |             struct in6_addr saddr
      1104 |               union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1104 |                 __u8[16] u6_addr8
      1104 |                 __be16[8] u6_addr16
      1104 |                 __be32[4] u6_addr32
      1120 |             __be32 flowlabel
      1124 |             union flowi_uli uli
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:48:2) ports
      1124 |                 __be16 dport
      1126 |                 __be16 sport
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:53:2) icmpt
      1124 |                 __u8 type
      1125 |                 __u8 code
      1124 |               __be32 gre_key
      1124 |               struct flowi_uli::(unnamed at ../include/net/flow.h:60:2) mht
      1124 |                 __u8 type
      1128 |             __u32 mp_hash
      1136 |   unsigned long udp_flags
      1140 |   int pending
      1144 |   __u8 encap_type
      1146 |   __u16 len
      1148 |   __u16 gso_size
      1150 |   __u16 pcslen
      1152 |   __u16 pcrlen
      1156 |   int (*)(struct sock *, struct sk_buff *) encap_rcv
      1160 |   void (*)(struct sock *, struct sk_buff *, int, __be16, u32, u8 *) encap_err_rcv
      1164 |   int (*)(struct sock *, struct sk_buff *) encap_err_lookup
      1168 |   void (*)(struct sock *) encap_destroy
      1172 |   struct sk_buff *(*)(struct sock *, struct list_head *, struct sk_buff *) gro_receive
      1176 |   int (*)(struct sock *, struct sk_buff *, int) gro_complete
      1180 |   struct sk_buff_head reader_queue
      1180 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
      1180 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
      1180 |         struct sk_buff * next
      1184 |         struct sk_buff * prev
      1180 |       struct sk_buff_list list
      1180 |         struct sk_buff * next
      1184 |         struct sk_buff * prev
      1188 |     __u32 qlen
      1192 |     struct spinlock lock
      1192 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1192 |         struct raw_spinlock rlock
      1192 |           arch_spinlock_t raw_lock
      1192 |             volatile unsigned int slock
      1196 |           unsigned int magic
      1200 |           unsigned int owner_cpu
      1204 |           void * owner
      1208 |           struct lockdep_map dep_map
      1208 |             struct lock_class_key * key
      1212 |             struct lock_class *[2] class_cache
      1220 |             const char * name
      1224 |             u8 wait_type_outer
      1225 |             u8 wait_type_inner
      1226 |             u8 lock_type
      1228 |             int cpu
      1232 |             unsigned long ip
      1192 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1192 |           u8[16] __padding
      1208 |           struct lockdep_map dep_map
      1208 |             struct lock_class_key * key
      1212 |             struct lock_class *[2] class_cache
      1220 |             const char * name
      1224 |             u8 wait_type_outer
      1225 |             u8 wait_type_inner
      1226 |             u8 lock_type
      1228 |             int cpu
      1232 |             unsigned long ip
      1236 |   int forward_deficit
      1240 |   int forward_threshold
      1244 |   bool peeking_with_offset
           | [sizeof=1248, align=8]

*** Dumping AST Record Layout
         0 | struct ipv6hdr::(unnamed at ../include/uapi/linux/ipv6.h:134:2)
         0 |   struct in6_addr saddr
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
        16 |   struct in6_addr daddr
        16 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |       __u8[16] u6_addr8
        16 |       __be16[8] u6_addr16
        16 |       __be32[4] u6_addr32
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | union ipv6hdr::(anonymous at ../include/uapi/linux/ipv6.h:134:2)
         0 |   struct ipv6hdr::(anonymous at ../include/uapi/linux/ipv6.h:134:2) 
         0 |     struct in6_addr saddr
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |     struct in6_addr daddr
        16 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |         __u8[16] u6_addr8
        16 |         __be16[8] u6_addr16
        16 |         __be32[4] u6_addr32
         0 |   struct ipv6hdr::(unnamed at ../include/uapi/linux/ipv6.h:134:2) addrs
         0 |     struct in6_addr saddr
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
        16 |     struct in6_addr daddr
        16 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        16 |         __u8[16] u6_addr8
        16 |         __be16[8] u6_addr16
        16 |         __be32[4] u6_addr32
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ipv6hdr
     0:0-3 |   __u8 priority
     0:4-7 |   __u8 version
         1 |   __u8[3] flow_lbl
         4 |   __be16 payload_len
         6 |   __u8 nexthdr
         7 |   __u8 hop_limit
         8 |   union ipv6hdr::(anonymous at ../include/uapi/linux/ipv6.h:134:2) 
         8 |     struct ipv6hdr::(anonymous at ../include/uapi/linux/ipv6.h:134:2) 
         8 |       struct in6_addr saddr
         8 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         8 |           __u8[16] u6_addr8
         8 |           __be16[8] u6_addr16
         8 |           __be32[4] u6_addr32
        24 |       struct in6_addr daddr
        24 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        24 |           __u8[16] u6_addr8
        24 |           __be16[8] u6_addr16
        24 |           __be32[4] u6_addr32
         8 |     struct ipv6hdr::(unnamed at ../include/uapi/linux/ipv6.h:134:2) addrs
         8 |       struct in6_addr saddr
         8 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         8 |           __u8[16] u6_addr8
         8 |           __be16[8] u6_addr16
         8 |           __be32[4] u6_addr32
        24 |       struct in6_addr daddr
        24 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        24 |           __u8[16] u6_addr8
        24 |           __be16[8] u6_addr16
        24 |           __be32[4] u6_addr32
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct tcp_request_sock
         0 |   struct inet_request_sock req
         0 |     struct request_sock req
         0 |       struct sock_common __req_common
         0 |         union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |           __addrpair skc_addrpair
         0 |           struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |             __be32 skc_daddr
         4 |             __be32 skc_rcv_saddr
         8 |         union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |           unsigned int skc_hash
         8 |           __u16[2] skc_u16hashes
        12 |         union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |           __portpair skc_portpair
        12 |           struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |             __be16 skc_dport
        14 |             __u16 skc_num
        16 |         unsigned short skc_family
        18 |         volatile unsigned char skc_state
    19:0-3 |         unsigned char skc_reuse
    19:4-4 |         unsigned char skc_reuseport
    19:5-5 |         unsigned char skc_ipv6only
    19:6-6 |         unsigned char skc_net_refcnt
        20 |         int skc_bound_dev_if
        24 |         union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |           struct hlist_node skc_bind_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        24 |           struct hlist_node skc_portaddr_node
        24 |             struct hlist_node * next
        28 |             struct hlist_node ** pprev
        32 |         struct proto * skc_prot
        36 |         possible_net_t skc_net
        36 |           struct net * net
        40 |         struct in6_addr skc_v6_daddr
        40 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |             __u8[16] u6_addr8
        40 |             __be16[8] u6_addr16
        40 |             __be32[4] u6_addr32
        56 |         struct in6_addr skc_v6_rcv_saddr
        56 |           union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |             __u8[16] u6_addr8
        56 |             __be16[8] u6_addr16
        56 |             __be32[4] u6_addr32
        72 |         atomic64_t skc_cookie
        72 |           s64 counter
        80 |         union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |           unsigned long skc_flags
        80 |           struct sock * skc_listener
        80 |           struct inet_timewait_death_row * skc_tw_dr
        84 |         int[0] skc_dontcopy_begin
        84 |         union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |           struct hlist_node skc_node
        84 |             struct hlist_node * next
        88 |             struct hlist_node ** pprev
        84 |           struct hlist_nulls_node skc_nulls_node
        84 |             struct hlist_nulls_node * next
        88 |             struct hlist_nulls_node ** pprev
        92 |         unsigned short skc_tx_queue_mapping
        94 |         unsigned short skc_rx_queue_mapping
        96 |         union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |           int skc_incoming_cpu
        96 |           u32 skc_rcv_wnd
        96 |           u32 skc_tw_rcv_nxt
       100 |         struct refcount_struct skc_refcnt
       100 |           atomic_t refs
       100 |             int counter
       104 |         int[0] skc_dontcopy_end
       104 |         union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |           u32 skc_rxhash
       104 |           u32 skc_window_clamp
       104 |           u32 skc_tw_snd_nxt
       112 |       struct request_sock * dl_next
       116 |       u16 mss
       118 |       u8 num_retrans
   119:0-0 |       u8 syncookie
   119:1-7 |       u8 num_timeout
       120 |       u32 ts_recent
       124 |       struct timer_list rsk_timer
       124 |         struct hlist_node entry
       124 |           struct hlist_node * next
       128 |           struct hlist_node ** pprev
       132 |         unsigned long expires
       136 |         void (*)(struct timer_list *) function
       140 |         u32 flags
       144 |         struct lockdep_map lockdep_map
       144 |           struct lock_class_key * key
       148 |           struct lock_class *[2] class_cache
       156 |           const char * name
       160 |           u8 wait_type_outer
       161 |           u8 wait_type_inner
       162 |           u8 lock_type
       164 |           int cpu
       168 |           unsigned long ip
       172 |       const struct request_sock_ops * rsk_ops
       176 |       struct sock * sk
       180 |       struct saved_syn * saved_syn
       184 |       u32 secid
       188 |       u32 peer_secid
       192 |       u32 timeout
   200:0-3 |     u16 snd_wscale
   200:4-7 |     u16 rcv_wscale
   201:0-0 |     u16 tstamp_ok
   201:1-1 |     u16 sack_ok
   201:2-2 |     u16 wscale_ok
   201:3-3 |     u16 ecn_ok
   201:4-4 |     u16 acked
   201:5-5 |     u16 no_srccheck
   201:6-6 |     u16 smc_ok
       204 |     u32 ir_mark
       208 |     union inet_request_sock::(anonymous at ../include/net/inet_sock.h:92:2) 
       208 |       struct ip_options_rcu * ireq_opt
       208 |       struct inet_request_sock::(anonymous at ../include/net/inet_sock.h:95:3) 
       208 |         struct ipv6_txoptions * ipv6_opt
       212 |         struct sk_buff * pktopts
       216 |   const struct tcp_request_sock_ops * af_specific
       224 |   u64 snt_synack
       232 |   bool tfo_listener
       233 |   bool is_mptcp
       234 |   bool req_usec_ts
       236 |   u32 txhash
       240 |   u32 rcv_isn
       244 |   u32 snt_isn
       248 |   u32 ts_off
       252 |   u32 last_oow_ack_time
       256 |   u32 rcv_nxt
       260 |   u8 syn_tos
           | [sizeof=264, align=8]

*** Dumping AST Record Layout
         0 | struct ipv6_pinfo::(unnamed at ../include/linux/ipv6.h:231:3)
     0:0-0 |   __u16 srcrt
     0:1-1 |   __u16 osrcrt
     0:2-2 |   __u16 rxinfo
     0:3-3 |   __u16 rxoinfo
     0:4-4 |   __u16 rxhlim
     0:5-5 |   __u16 rxohlim
     0:6-6 |   __u16 hopopts
     0:7-7 |   __u16 ohopopts
     1:0-0 |   __u16 dstopts
     1:1-1 |   __u16 odstopts
     1:2-2 |   __u16 rxflow
     1:3-3 |   __u16 rxtclass
     1:4-4 |   __u16 rxpmtu
     1:5-5 |   __u16 rxorigdstaddr
     1:6-6 |   __u16 recvfragsize
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | struct tcp_timewait_sock
         0 |   struct inet_timewait_sock tw_sk
         0 |     struct sock_common __tw_common
         0 |       union sock_common::(anonymous at ../include/net/sock.h:151:2) 
         0 |         __addrpair skc_addrpair
         0 |         struct sock_common::(anonymous at ../include/net/sock.h:153:3) 
         0 |           __be32 skc_daddr
         4 |           __be32 skc_rcv_saddr
         8 |       union sock_common::(anonymous at ../include/net/sock.h:158:2) 
         8 |         unsigned int skc_hash
         8 |         __u16[2] skc_u16hashes
        12 |       union sock_common::(anonymous at ../include/net/sock.h:163:2) 
        12 |         __portpair skc_portpair
        12 |         struct sock_common::(anonymous at ../include/net/sock.h:165:3) 
        12 |           __be16 skc_dport
        14 |           __u16 skc_num
        16 |       unsigned short skc_family
        18 |       volatile unsigned char skc_state
    19:0-3 |       unsigned char skc_reuse
    19:4-4 |       unsigned char skc_reuseport
    19:5-5 |       unsigned char skc_ipv6only
    19:6-6 |       unsigned char skc_net_refcnt
        20 |       int skc_bound_dev_if
        24 |       union sock_common::(anonymous at ../include/net/sock.h:178:2) 
        24 |         struct hlist_node skc_bind_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        24 |         struct hlist_node skc_portaddr_node
        24 |           struct hlist_node * next
        28 |           struct hlist_node ** pprev
        32 |       struct proto * skc_prot
        36 |       possible_net_t skc_net
        36 |         struct net * net
        40 |       struct in6_addr skc_v6_daddr
        40 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        40 |           __u8[16] u6_addr8
        40 |           __be16[8] u6_addr16
        40 |           __be32[4] u6_addr32
        56 |       struct in6_addr skc_v6_rcv_saddr
        56 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        56 |           __u8[16] u6_addr8
        56 |           __be16[8] u6_addr16
        56 |           __be32[4] u6_addr32
        72 |       atomic64_t skc_cookie
        72 |         s64 counter
        80 |       union sock_common::(anonymous at ../include/net/sock.h:197:2) 
        80 |         unsigned long skc_flags
        80 |         struct sock * skc_listener
        80 |         struct inet_timewait_death_row * skc_tw_dr
        84 |       int[0] skc_dontcopy_begin
        84 |       union sock_common::(anonymous at ../include/net/sock.h:209:2) 
        84 |         struct hlist_node skc_node
        84 |           struct hlist_node * next
        88 |           struct hlist_node ** pprev
        84 |         struct hlist_nulls_node skc_nulls_node
        84 |           struct hlist_nulls_node * next
        88 |           struct hlist_nulls_node ** pprev
        92 |       unsigned short skc_tx_queue_mapping
        94 |       unsigned short skc_rx_queue_mapping
        96 |       union sock_common::(anonymous at ../include/net/sock.h:217:2) 
        96 |         int skc_incoming_cpu
        96 |         u32 skc_rcv_wnd
        96 |         u32 skc_tw_rcv_nxt
       100 |       struct refcount_struct skc_refcnt
       100 |         atomic_t refs
       100 |           int counter
       104 |       int[0] skc_dontcopy_end
       104 |       union sock_common::(anonymous at ../include/net/sock.h:226:2) 
       104 |         u32 skc_rxhash
       104 |         u32 skc_window_clamp
       104 |         u32 skc_tw_snd_nxt
       112 |     __u32 tw_mark
       116 |     volatile unsigned char tw_substate
       117 |     unsigned char tw_rcv_wscale
       118 |     __be16 tw_sport
   120:0-0 |     unsigned int tw_transparent
  120:1-20 |     unsigned int tw_flowlabel
   122:5-5 |     unsigned int tw_usec_ts
   122:6-7 |     unsigned int tw_pad
   123:0-7 |     unsigned int tw_tos
       124 |     u32 tw_txhash
       128 |     u32 tw_priority
       132 |     struct timer_list tw_timer
       132 |       struct hlist_node entry
       132 |         struct hlist_node * next
       136 |         struct hlist_node ** pprev
       140 |       unsigned long expires
       144 |       void (*)(struct timer_list *) function
       148 |       u32 flags
       152 |       struct lockdep_map lockdep_map
       152 |         struct lock_class_key * key
       156 |         struct lock_class *[2] class_cache
       164 |         const char * name
       168 |         u8 wait_type_outer
       169 |         u8 wait_type_inner
       170 |         u8 lock_type
       172 |         int cpu
       176 |         unsigned long ip
       180 |     struct inet_bind_bucket * tw_tb
       184 |     struct inet_bind2_bucket * tw_tb2
       192 |   u32 tw_rcv_wnd
       196 |   u32 tw_ts_offset
       200 |   u32 tw_ts_recent
       204 |   u32 tw_last_oow_ack_time
       208 |   int tw_ts_recent_stamp
       212 |   u32 tw_tx_delay
           | [sizeof=216, align=8]

*** Dumping AST Record Layout
         0 | struct sockcm_cookie
         0 |   u64 transmit_time
         8 |   u32 mark
        12 |   u32 tsflags
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ipv6_txoptions
         0 |   struct refcount_struct refcnt
         0 |     atomic_t refs
         0 |       int counter
         4 |   int tot_len
         8 |   __u16 opt_flen
        10 |   __u16 opt_nflen
        12 |   struct ipv6_opt_hdr * hopopt
        16 |   struct ipv6_opt_hdr * dst0opt
        20 |   struct ipv6_rt_hdr * srcrt
        24 |   struct ipv6_opt_hdr * dst1opt
        28 |   struct callback_head rcu
        28 |     struct callback_head * next
        32 |     void (*)(struct callback_head *) func
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct static_key_false_deferred
         0 |   struct static_key_false key
         0 |     struct static_key key
         0 |       atomic_t enabled
         0 |         int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct hop_jumbo_hdr
         0 |   u8 nexthdr
         1 |   u8 hdrlen
         2 |   u8 tlv_type
         3 |   u8 tlv_len
         4 |   __be32 jumbo_payload_len
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct (unnamed at ../include/net/ipv6.h:758:9)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct inet6_cork
         0 |   struct ipv6_txoptions * opt
         4 |   u8 hop_limit
         5 |   u8 tclass
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct iphdr::(anonymous at ../include/uapi/linux/ip.h:104:2)
         0 |   __be32 saddr
         4 |   __be32 daddr
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct iphdr::(unnamed at ../include/uapi/linux/ip.h:104:2)
         0 |   __be32 saddr
         4 |   __be32 daddr
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union iphdr::(anonymous at ../include/uapi/linux/ip.h:104:2)
         0 |   struct iphdr::(anonymous at ../include/uapi/linux/ip.h:104:2) 
         0 |     __be32 saddr
         4 |     __be32 daddr
         0 |   struct iphdr::(unnamed at ../include/uapi/linux/ip.h:104:2) addrs
         0 |     __be32 saddr
         4 |     __be32 daddr
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct iphdr
     0:0-3 |   __u8 ihl
     0:4-7 |   __u8 version
         1 |   __u8 tos
         2 |   __be16 tot_len
         4 |   __be16 id
         6 |   __be16 frag_off
         8 |   __u8 ttl
         9 |   __u8 protocol
        10 |   __sum16 check
        12 |   union iphdr::(anonymous at ../include/uapi/linux/ip.h:104:2) 
        12 |     struct iphdr::(anonymous at ../include/uapi/linux/ip.h:104:2) 
        12 |       __be32 saddr
        16 |       __be32 daddr
        12 |     struct iphdr::(unnamed at ../include/uapi/linux/ip.h:104:2) addrs
        12 |       __be32 saddr
        16 |       __be32 daddr
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ipv4_addr_key
         0 |   __be32 addr
         4 |   int vif
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union inetpeer_addr::(anonymous at ../include/net/inetpeer.h:28:2)
         0 |   struct ipv4_addr_key a4
         0 |     __be32 addr
         4 |     int vif
         0 |   struct in6_addr a6
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
         0 |   u32[4] key
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct inet_peer::(anonymous at ../include/net/inetpeer.h:50:3)
         0 |   atomic_t rid
         0 |     int counter
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct inetpeer_addr
         0 |   union inetpeer_addr::(anonymous at ../include/net/inetpeer.h:28:2) 
         0 |     struct ipv4_addr_key a4
         0 |       __be32 addr
         4 |       int vif
         0 |     struct in6_addr a6
         0 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |         __u8[16] u6_addr8
         0 |         __be16[8] u6_addr16
         0 |         __be32[4] u6_addr32
         0 |     u32[4] key
        16 |   __u16 family
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct inet_peer_base
         0 |   struct rb_root rb_root
         0 |     struct rb_node * rb_node
         4 |   seqlock_t lock
         4 |     struct seqcount_spinlock seqcount
         4 |       struct seqcount seqcount
         4 |         unsigned int sequence
         8 |         struct lockdep_map dep_map
         8 |           struct lock_class_key * key
        12 |           struct lock_class *[2] class_cache
        20 |           const char * name
        24 |           u8 wait_type_outer
        25 |           u8 wait_type_inner
        26 |           u8 lock_type
        28 |           int cpu
        32 |           unsigned long ip
        36 |       spinlock_t * lock
        40 |     struct spinlock lock
        40 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        40 |         struct raw_spinlock rlock
        40 |           arch_spinlock_t raw_lock
        40 |             volatile unsigned int slock
        44 |           unsigned int magic
        48 |           unsigned int owner_cpu
        52 |           void * owner
        56 |           struct lockdep_map dep_map
        56 |             struct lock_class_key * key
        60 |             struct lock_class *[2] class_cache
        68 |             const char * name
        72 |             u8 wait_type_outer
        73 |             u8 wait_type_inner
        74 |             u8 lock_type
        76 |             int cpu
        80 |             unsigned long ip
        40 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        40 |           u8[16] __padding
        56 |           struct lockdep_map dep_map
        56 |             struct lock_class_key * key
        60 |             struct lock_class *[2] class_cache
        68 |             const char * name
        72 |             u8 wait_type_outer
        73 |             u8 wait_type_inner
        74 |             u8 lock_type
        76 |             int cpu
        80 |             unsigned long ip
        84 |   int total
           | [sizeof=88, align=4]

*** Dumping AST Record Layout
         0 | union fib_nh_common::(unnamed at ../include/net/ip_fib.h:91:2)
         0 |   __be32 ipv4
         0 |   struct in6_addr ipv6
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct fib_nh_common
         0 |   struct net_device * nhc_dev
         4 |   netdevice_tracker nhc_dev_tracker
         8 |   int nhc_oif
        12 |   unsigned char nhc_scope
        13 |   u8 nhc_family
        14 |   u8 nhc_gw_family
        15 |   unsigned char nhc_flags
        16 |   struct lwtunnel_state * nhc_lwtstate
        20 |   union fib_nh_common::(unnamed at ../include/net/ip_fib.h:91:2) nhc_gw
        20 |     __be32 ipv4
        20 |     struct in6_addr ipv6
        20 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        20 |         __u8[16] u6_addr8
        20 |         __be16[8] u6_addr16
        20 |         __be32[4] u6_addr32
        36 |   int nhc_weight
        40 |   atomic_t nhc_upper_bound
        40 |     int counter
        44 |   struct rtable ** nhc_pcpu_rth_output
        48 |   struct rtable * nhc_rth_input
        52 |   struct fnhe_hash_bucket * nhc_exceptions
           | [sizeof=56, align=4]

*** Dumping AST Record Layout
         0 | struct fib_nh
         0 |   struct fib_nh_common nh_common
         0 |     struct net_device * nhc_dev
         4 |     netdevice_tracker nhc_dev_tracker
         8 |     int nhc_oif
        12 |     unsigned char nhc_scope
        13 |     u8 nhc_family
        14 |     u8 nhc_gw_family
        15 |     unsigned char nhc_flags
        16 |     struct lwtunnel_state * nhc_lwtstate
        20 |     union fib_nh_common::(unnamed at ../include/net/ip_fib.h:91:2) nhc_gw
        20 |       __be32 ipv4
        20 |       struct in6_addr ipv6
        20 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
        20 |           __u8[16] u6_addr8
        20 |           __be16[8] u6_addr16
        20 |           __be32[4] u6_addr32
        36 |     int nhc_weight
        40 |     atomic_t nhc_upper_bound
        40 |       int counter
        44 |     struct rtable ** nhc_pcpu_rth_output
        48 |     struct rtable * nhc_rth_input
        52 |     struct fnhe_hash_bucket * nhc_exceptions
        56 |   struct hlist_node nh_hash
        56 |     struct hlist_node * next
        60 |     struct hlist_node ** pprev
        64 |   struct fib_info * nh_parent
        68 |   __u32 nh_tclassid
        72 |   __be32 nh_saddr
        76 |   int nh_saddr_genid
           | [sizeof=80, align=4]

*** Dumping AST Record Layout
         0 | struct fib_table
         0 |   struct hlist_node tb_hlist
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         8 |   u32 tb_id
        12 |   int tb_num_default
        16 |   struct callback_head rcu
        16 |     struct callback_head * next
        20 |     void (*)(struct callback_head *) func
        24 |   unsigned long * tb_data
        28 |   unsigned long[] __data
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct fib_result
         0 |   __be32 prefix
         4 |   unsigned char prefixlen
         5 |   unsigned char nh_sel
         6 |   unsigned char type
         7 |   unsigned char scope
         8 |   u32 tclassid
        12 |   dscp_t dscp
        16 |   struct fib_nh_common * nhc
        20 |   struct fib_info * fi
        24 |   struct fib_table * table
        28 |   struct hlist_head * fa_head
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct fib_info
         0 |   struct hlist_node fib_hash
         0 |     struct hlist_node * next
         4 |     struct hlist_node ** pprev
         8 |   struct hlist_node fib_lhash
         8 |     struct hlist_node * next
        12 |     struct hlist_node ** pprev
        16 |   struct list_head nh_list
        16 |     struct list_head * next
        20 |     struct list_head * prev
        24 |   struct net * fib_net
        28 |   struct refcount_struct fib_treeref
        28 |     atomic_t refs
        28 |       int counter
        32 |   struct refcount_struct fib_clntref
        32 |     atomic_t refs
        32 |       int counter
        36 |   unsigned int fib_flags
        40 |   unsigned char fib_dead
        41 |   unsigned char fib_protocol
        42 |   unsigned char fib_scope
        43 |   unsigned char fib_type
        44 |   __be32 fib_prefsrc
        48 |   u32 fib_tb_id
        52 |   u32 fib_priority
        56 |   struct dst_metrics * fib_metrics
        60 |   int fib_nhs
        64 |   bool fib_nh_is_v6
        65 |   bool nh_updated
        66 |   bool pfsrc_removed
        68 |   struct nexthop * nh
        72 |   struct callback_head rcu
        72 |     struct callback_head * next
        76 |     void (*)(struct callback_head *) func
        80 |   struct fib_nh[] fib_nh
           | [sizeof=80, align=4]

*** Dumping AST Record Layout
         0 | struct arphdr
         0 |   __be16 ar_hrd
         2 |   __be16 ar_pro
         4 |   unsigned char ar_hln
         5 |   unsigned char ar_pln
         6 |   __be16 ar_op
           | [sizeof=8, align=2]

*** Dumping AST Record Layout
         0 | struct inet6_skb_parm
         0 |   int iif
         4 |   __be16 ra
         6 |   __u16 dst0
         8 |   __u16 srcrt
        10 |   __u16 dst1
        12 |   __u16 lastopt
        14 |   __u16 nhoff
        16 |   __u16 flags
        18 |   __u16 dsthao
        20 |   __u16 frag_max_size
        22 |   __u16 srhoff
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct icmpv6_echo
         0 |   __be16 identifier
         2 |   __be16 sequence
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct icmpv6_nd_advt
     0:0-4 |   __u32 reserved
     0:5-5 |   __u32 override
     0:6-6 |   __u32 solicited
     0:7-7 |   __u32 router
    1:0-23 |   __u32 reserved2
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct icmpv6_nd_ra
         0 |   __u8 hop_limit
     1:0-2 |   __u8 reserved
     1:3-4 |   __u8 router_pref
     1:5-5 |   __u8 home_agent
     1:6-6 |   __u8 other
     1:7-7 |   __u8 managed
         2 |   __be16 rt_lifetime
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | union icmp6hdr::(unnamed at ../include/uapi/linux/icmpv6.h:15:2)
         0 |   __be32[1] un_data32
         0 |   __be16[2] un_data16
         0 |   __u8[4] un_data8
         0 |   struct icmpv6_echo u_echo
         0 |     __be16 identifier
         2 |     __be16 sequence
         0 |   struct icmpv6_nd_advt u_nd_advt
     0:0-4 |     __u32 reserved
     0:5-5 |     __u32 override
     0:6-6 |     __u32 solicited
     0:7-7 |     __u32 router
    1:0-23 |     __u32 reserved2
         0 |   struct icmpv6_nd_ra u_nd_ra
         0 |     __u8 hop_limit
     1:0-2 |     __u8 reserved
     1:3-4 |     __u8 router_pref
     1:5-5 |     __u8 home_agent
     1:6-6 |     __u8 other
     1:7-7 |     __u8 managed
         2 |     __be16 rt_lifetime
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct icmp6hdr
         0 |   __u8 icmp6_type
         1 |   __u8 icmp6_code
         2 |   __sum16 icmp6_cksum
         4 |   union icmp6hdr::(unnamed at ../include/uapi/linux/icmpv6.h:15:2) icmp6_dataun
         4 |     __be32[1] un_data32
         4 |     __be16[2] un_data16
         4 |     __u8[4] un_data8
         4 |     struct icmpv6_echo u_echo
         4 |       __be16 identifier
         6 |       __be16 sequence
         4 |     struct icmpv6_nd_advt u_nd_advt
     4:0-4 |       __u32 reserved
     4:5-5 |       __u32 override
     4:6-6 |       __u32 solicited
     4:7-7 |       __u32 router
    5:0-23 |       __u32 reserved2
         4 |     struct icmpv6_nd_ra u_nd_ra
         4 |       __u8 hop_limit
     5:0-2 |       __u8 reserved
     5:3-4 |       __u8 router_pref
     5:5-5 |       __u8 home_agent
     5:6-6 |       __u8 other
     5:7-7 |       __u8 managed
         6 |       __be16 rt_lifetime
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct nd_opt_hdr
         0 |   __u8 nd_opt_type
         1 |   __u8 nd_opt_len
           | [sizeof=2, align=1]

*** Dumping AST Record Layout
         0 | struct ndisc_options
         0 |   struct nd_opt_hdr *[15] nd_opt_array
        60 |   struct nd_opt_hdr * nd_useropts
        64 |   struct nd_opt_hdr * nd_useropts_end
           | [sizeof=68, align=4]

*** Dumping AST Record Layout
         0 | struct ipv6_stable_secret
         0 |   bool initialized
         4 |   struct in6_addr secret
         4 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         4 |       __u8[16] u6_addr8
         4 |       __be16[8] u6_addr16
         4 |       __be32[4] u6_addr32
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ipv6_devconf
         0 |   __u8[0] __cacheline_group_begin__ipv6_devconf_read_txrx
         0 |   __s32 disable_ipv6
         4 |   __s32 hop_limit
         8 |   __s32 mtu6
        12 |   __s32 forwarding
        16 |   __s32 disable_policy
        20 |   __s32 proxy_ndp
        24 |   __u8[0] __cacheline_group_end__ipv6_devconf_read_txrx
        24 |   __s32 accept_ra
        28 |   __s32 accept_redirects
        32 |   __s32 autoconf
        36 |   __s32 dad_transmits
        40 |   __s32 rtr_solicits
        44 |   __s32 rtr_solicit_interval
        48 |   __s32 rtr_solicit_max_interval
        52 |   __s32 rtr_solicit_delay
        56 |   __s32 force_mld_version
        60 |   __s32 mldv1_unsolicited_report_interval
        64 |   __s32 mldv2_unsolicited_report_interval
        68 |   __s32 use_tempaddr
        72 |   __s32 temp_valid_lft
        76 |   __s32 temp_prefered_lft
        80 |   __s32 regen_min_advance
        84 |   __s32 regen_max_retry
        88 |   __s32 max_desync_factor
        92 |   __s32 max_addresses
        96 |   __s32 accept_ra_defrtr
       100 |   __u32 ra_defrtr_metric
       104 |   __s32 accept_ra_min_hop_limit
       108 |   __s32 accept_ra_min_lft
       112 |   __s32 accept_ra_pinfo
       116 |   __s32 ignore_routes_with_linkdown
       120 |   __s32 accept_ra_rtr_pref
       124 |   __s32 rtr_probe_interval
       128 |   __s32 accept_source_route
       132 |   __s32 accept_ra_from_local
       136 |   __s32 optimistic_dad
       140 |   __s32 use_optimistic
       144 |   __s32 drop_unicast_in_l2_multicast
       148 |   __s32 accept_dad
       152 |   __s32 force_tllao
       156 |   __s32 ndisc_notify
       160 |   __s32 suppress_frag_ndisc
       164 |   __s32 accept_ra_mtu
       168 |   __s32 drop_unsolicited_na
       172 |   __s32 accept_untracked_na
       176 |   struct ipv6_stable_secret stable_secret
       176 |     bool initialized
       180 |     struct in6_addr secret
       180 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
       180 |         __u8[16] u6_addr8
       180 |         __be16[8] u6_addr16
       180 |         __be32[4] u6_addr32
       196 |   __s32 use_oif_addrs_only
       200 |   __s32 keep_addr_on_down
       204 |   __s32 seg6_enabled
       208 |   __s32 seg6_require_hmac
       212 |   __u32 enhanced_dad
       216 |   __u32 addr_gen_mode
       220 |   __s32 ndisc_tclass
       224 |   __s32 rpl_seg_enabled
       228 |   __u32 ioam6_id
       232 |   __u32 ioam6_id_wide
       236 |   __u8 ioam6_enabled
       237 |   __u8 ndisc_evict_nocarrier
       238 |   __u8 ra_honor_pio_life
       240 |   struct ctl_table_header * sysctl_header
           | [sizeof=244, align=4]

*** Dumping AST Record Layout
         0 | struct ipv6_devstat
         0 |   struct proc_dir_entry * proc_dir_entry
         4 |   typeof(struct ipstats_mib) * ipv6
         8 |   typeof(struct icmpv6_mib_device) * icmpv6dev
        12 |   typeof(struct icmpv6msg_mib_device) * icmpv6msgdev
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct inet6_dev
         0 |   struct net_device * dev
         4 |   netdevice_tracker dev_tracker
         8 |   struct list_head addr_list
         8 |     struct list_head * next
        12 |     struct list_head * prev
        16 |   struct ifmcaddr6 * mc_list
        20 |   struct ifmcaddr6 * mc_tomb
        24 |   unsigned char mc_qrv
        25 |   unsigned char mc_gq_running
        26 |   unsigned char mc_ifc_count
        27 |   unsigned char mc_dad_count
        28 |   unsigned long mc_v1_seen
        32 |   unsigned long mc_qi
        36 |   unsigned long mc_qri
        40 |   unsigned long mc_maxdelay
        44 |   struct delayed_work mc_gq_work
        44 |     struct work_struct work
        44 |       atomic_t data
        44 |         int counter
        48 |       struct list_head entry
        48 |         struct list_head * next
        52 |         struct list_head * prev
        56 |       work_func_t func
        60 |       struct lockdep_map lockdep_map
        60 |         struct lock_class_key * key
        64 |         struct lock_class *[2] class_cache
        72 |         const char * name
        76 |         u8 wait_type_outer
        77 |         u8 wait_type_inner
        78 |         u8 lock_type
        80 |         int cpu
        84 |         unsigned long ip
        88 |     struct timer_list timer
        88 |       struct hlist_node entry
        88 |         struct hlist_node * next
        92 |         struct hlist_node ** pprev
        96 |       unsigned long expires
       100 |       void (*)(struct timer_list *) function
       104 |       u32 flags
       108 |       struct lockdep_map lockdep_map
       108 |         struct lock_class_key * key
       112 |         struct lock_class *[2] class_cache
       120 |         const char * name
       124 |         u8 wait_type_outer
       125 |         u8 wait_type_inner
       126 |         u8 lock_type
       128 |         int cpu
       132 |         unsigned long ip
       136 |     struct workqueue_struct * wq
       140 |     int cpu
       144 |   struct delayed_work mc_ifc_work
       144 |     struct work_struct work
       144 |       atomic_t data
       144 |         int counter
       148 |       struct list_head entry
       148 |         struct list_head * next
       152 |         struct list_head * prev
       156 |       work_func_t func
       160 |       struct lockdep_map lockdep_map
       160 |         struct lock_class_key * key
       164 |         struct lock_class *[2] class_cache
       172 |         const char * name
       176 |         u8 wait_type_outer
       177 |         u8 wait_type_inner
       178 |         u8 lock_type
       180 |         int cpu
       184 |         unsigned long ip
       188 |     struct timer_list timer
       188 |       struct hlist_node entry
       188 |         struct hlist_node * next
       192 |         struct hlist_node ** pprev
       196 |       unsigned long expires
       200 |       void (*)(struct timer_list *) function
       204 |       u32 flags
       208 |       struct lockdep_map lockdep_map
       208 |         struct lock_class_key * key
       212 |         struct lock_class *[2] class_cache
       220 |         const char * name
       224 |         u8 wait_type_outer
       225 |         u8 wait_type_inner
       226 |         u8 lock_type
       228 |         int cpu
       232 |         unsigned long ip
       236 |     struct workqueue_struct * wq
       240 |     int cpu
       244 |   struct delayed_work mc_dad_work
       244 |     struct work_struct work
       244 |       atomic_t data
       244 |         int counter
       248 |       struct list_head entry
       248 |         struct list_head * next
       252 |         struct list_head * prev
       256 |       work_func_t func
       260 |       struct lockdep_map lockdep_map
       260 |         struct lock_class_key * key
       264 |         struct lock_class *[2] class_cache
       272 |         const char * name
       276 |         u8 wait_type_outer
       277 |         u8 wait_type_inner
       278 |         u8 lock_type
       280 |         int cpu
       284 |         unsigned long ip
       288 |     struct timer_list timer
       288 |       struct hlist_node entry
       288 |         struct hlist_node * next
       292 |         struct hlist_node ** pprev
       296 |       unsigned long expires
       300 |       void (*)(struct timer_list *) function
       304 |       u32 flags
       308 |       struct lockdep_map lockdep_map
       308 |         struct lock_class_key * key
       312 |         struct lock_class *[2] class_cache
       320 |         const char * name
       324 |         u8 wait_type_outer
       325 |         u8 wait_type_inner
       326 |         u8 lock_type
       328 |         int cpu
       332 |         unsigned long ip
       336 |     struct workqueue_struct * wq
       340 |     int cpu
       344 |   struct delayed_work mc_query_work
       344 |     struct work_struct work
       344 |       atomic_t data
       344 |         int counter
       348 |       struct list_head entry
       348 |         struct list_head * next
       352 |         struct list_head * prev
       356 |       work_func_t func
       360 |       struct lockdep_map lockdep_map
       360 |         struct lock_class_key * key
       364 |         struct lock_class *[2] class_cache
       372 |         const char * name
       376 |         u8 wait_type_outer
       377 |         u8 wait_type_inner
       378 |         u8 lock_type
       380 |         int cpu
       384 |         unsigned long ip
       388 |     struct timer_list timer
       388 |       struct hlist_node entry
       388 |         struct hlist_node * next
       392 |         struct hlist_node ** pprev
       396 |       unsigned long expires
       400 |       void (*)(struct timer_list *) function
       404 |       u32 flags
       408 |       struct lockdep_map lockdep_map
       408 |         struct lock_class_key * key
       412 |         struct lock_class *[2] class_cache
       420 |         const char * name
       424 |         u8 wait_type_outer
       425 |         u8 wait_type_inner
       426 |         u8 lock_type
       428 |         int cpu
       432 |         unsigned long ip
       436 |     struct workqueue_struct * wq
       440 |     int cpu
       444 |   struct delayed_work mc_report_work
       444 |     struct work_struct work
       444 |       atomic_t data
       444 |         int counter
       448 |       struct list_head entry
       448 |         struct list_head * next
       452 |         struct list_head * prev
       456 |       work_func_t func
       460 |       struct lockdep_map lockdep_map
       460 |         struct lock_class_key * key
       464 |         struct lock_class *[2] class_cache
       472 |         const char * name
       476 |         u8 wait_type_outer
       477 |         u8 wait_type_inner
       478 |         u8 lock_type
       480 |         int cpu
       484 |         unsigned long ip
       488 |     struct timer_list timer
       488 |       struct hlist_node entry
       488 |         struct hlist_node * next
       492 |         struct hlist_node ** pprev
       496 |       unsigned long expires
       500 |       void (*)(struct timer_list *) function
       504 |       u32 flags
       508 |       struct lockdep_map lockdep_map
       508 |         struct lock_class_key * key
       512 |         struct lock_class *[2] class_cache
       520 |         const char * name
       524 |         u8 wait_type_outer
       525 |         u8 wait_type_inner
       526 |         u8 lock_type
       528 |         int cpu
       532 |         unsigned long ip
       536 |     struct workqueue_struct * wq
       540 |     int cpu
       544 |   struct sk_buff_head mc_query_queue
       544 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       544 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       544 |         struct sk_buff * next
       548 |         struct sk_buff * prev
       544 |       struct sk_buff_list list
       544 |         struct sk_buff * next
       548 |         struct sk_buff * prev
       552 |     __u32 qlen
       556 |     struct spinlock lock
       556 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       556 |         struct raw_spinlock rlock
       556 |           arch_spinlock_t raw_lock
       556 |             volatile unsigned int slock
       560 |           unsigned int magic
       564 |           unsigned int owner_cpu
       568 |           void * owner
       572 |           struct lockdep_map dep_map
       572 |             struct lock_class_key * key
       576 |             struct lock_class *[2] class_cache
       584 |             const char * name
       588 |             u8 wait_type_outer
       589 |             u8 wait_type_inner
       590 |             u8 lock_type
       592 |             int cpu
       596 |             unsigned long ip
       556 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       556 |           u8[16] __padding
       572 |           struct lockdep_map dep_map
       572 |             struct lock_class_key * key
       576 |             struct lock_class *[2] class_cache
       584 |             const char * name
       588 |             u8 wait_type_outer
       589 |             u8 wait_type_inner
       590 |             u8 lock_type
       592 |             int cpu
       596 |             unsigned long ip
       600 |   struct sk_buff_head mc_report_queue
       600 |     union sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       600 |       struct sk_buff_head::(anonymous at ../include/linux/skbuff.h:339:2) 
       600 |         struct sk_buff * next
       604 |         struct sk_buff * prev
       600 |       struct sk_buff_list list
       600 |         struct sk_buff * next
       604 |         struct sk_buff * prev
       608 |     __u32 qlen
       612 |     struct spinlock lock
       612 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       612 |         struct raw_spinlock rlock
       612 |           arch_spinlock_t raw_lock
       612 |             volatile unsigned int slock
       616 |           unsigned int magic
       620 |           unsigned int owner_cpu
       624 |           void * owner
       628 |           struct lockdep_map dep_map
       628 |             struct lock_class_key * key
       632 |             struct lock_class *[2] class_cache
       640 |             const char * name
       644 |             u8 wait_type_outer
       645 |             u8 wait_type_inner
       646 |             u8 lock_type
       648 |             int cpu
       652 |             unsigned long ip
       612 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       612 |           u8[16] __padding
       628 |           struct lockdep_map dep_map
       628 |             struct lock_class_key * key
       632 |             struct lock_class *[2] class_cache
       640 |             const char * name
       644 |             u8 wait_type_outer
       645 |             u8 wait_type_inner
       646 |             u8 lock_type
       648 |             int cpu
       652 |             unsigned long ip
       656 |   struct spinlock mc_query_lock
       656 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       656 |       struct raw_spinlock rlock
       656 |         arch_spinlock_t raw_lock
       656 |           volatile unsigned int slock
       660 |         unsigned int magic
       664 |         unsigned int owner_cpu
       668 |         void * owner
       672 |         struct lockdep_map dep_map
       672 |           struct lock_class_key * key
       676 |           struct lock_class *[2] class_cache
       684 |           const char * name
       688 |           u8 wait_type_outer
       689 |           u8 wait_type_inner
       690 |           u8 lock_type
       692 |           int cpu
       696 |           unsigned long ip
       656 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       656 |         u8[16] __padding
       672 |         struct lockdep_map dep_map
       672 |           struct lock_class_key * key
       676 |           struct lock_class *[2] class_cache
       684 |           const char * name
       688 |           u8 wait_type_outer
       689 |           u8 wait_type_inner
       690 |           u8 lock_type
       692 |           int cpu
       696 |           unsigned long ip
       700 |   struct spinlock mc_report_lock
       700 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       700 |       struct raw_spinlock rlock
       700 |         arch_spinlock_t raw_lock
       700 |           volatile unsigned int slock
       704 |         unsigned int magic
       708 |         unsigned int owner_cpu
       712 |         void * owner
       716 |         struct lockdep_map dep_map
       716 |           struct lock_class_key * key
       720 |           struct lock_class *[2] class_cache
       728 |           const char * name
       732 |           u8 wait_type_outer
       733 |           u8 wait_type_inner
       734 |           u8 lock_type
       736 |           int cpu
       740 |           unsigned long ip
       700 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       700 |         u8[16] __padding
       716 |         struct lockdep_map dep_map
       716 |           struct lock_class_key * key
       720 |           struct lock_class *[2] class_cache
       728 |           const char * name
       732 |           u8 wait_type_outer
       733 |           u8 wait_type_inner
       734 |           u8 lock_type
       736 |           int cpu
       740 |           unsigned long ip
       744 |   struct mutex mc_lock
       744 |     atomic_t owner
       744 |       int counter
       748 |     struct raw_spinlock wait_lock
       748 |       arch_spinlock_t raw_lock
       748 |         volatile unsigned int slock
       752 |       unsigned int magic
       756 |       unsigned int owner_cpu
       760 |       void * owner
       764 |       struct lockdep_map dep_map
       764 |         struct lock_class_key * key
       768 |         struct lock_class *[2] class_cache
       776 |         const char * name
       780 |         u8 wait_type_outer
       781 |         u8 wait_type_inner
       782 |         u8 lock_type
       784 |         int cpu
       788 |         unsigned long ip
       792 |     struct list_head wait_list
       792 |       struct list_head * next
       796 |       struct list_head * prev
       800 |     void * magic
       804 |     struct lockdep_map dep_map
       804 |       struct lock_class_key * key
       808 |       struct lock_class *[2] class_cache
       816 |       const char * name
       820 |       u8 wait_type_outer
       821 |       u8 wait_type_inner
       822 |       u8 lock_type
       824 |       int cpu
       828 |       unsigned long ip
       832 |   struct ifacaddr6 * ac_list
       836 |   rwlock_t lock
       836 |     arch_rwlock_t raw_lock
       836 |     unsigned int magic
       840 |     unsigned int owner_cpu
       844 |     void * owner
       848 |     struct lockdep_map dep_map
       848 |       struct lock_class_key * key
       852 |       struct lock_class *[2] class_cache
       860 |       const char * name
       864 |       u8 wait_type_outer
       865 |       u8 wait_type_inner
       866 |       u8 lock_type
       868 |       int cpu
       872 |       unsigned long ip
       876 |   struct refcount_struct refcnt
       876 |     atomic_t refs
       876 |       int counter
       880 |   __u32 if_flags
       884 |   int dead
       888 |   u32 desync_factor
       892 |   struct list_head tempaddr_list
       892 |     struct list_head * next
       896 |     struct list_head * prev
       900 |   struct in6_addr token
       900 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
       900 |       __u8[16] u6_addr8
       900 |       __be16[8] u6_addr16
       900 |       __be32[4] u6_addr32
       916 |   struct neigh_parms * nd_parms
       920 |   struct ipv6_devconf cnf
       920 |     __u8[0] __cacheline_group_begin__ipv6_devconf_read_txrx
       920 |     __s32 disable_ipv6
       924 |     __s32 hop_limit
       928 |     __s32 mtu6
       932 |     __s32 forwarding
       936 |     __s32 disable_policy
       940 |     __s32 proxy_ndp
       944 |     __u8[0] __cacheline_group_end__ipv6_devconf_read_txrx
       944 |     __s32 accept_ra
       948 |     __s32 accept_redirects
       952 |     __s32 autoconf
       956 |     __s32 dad_transmits
       960 |     __s32 rtr_solicits
       964 |     __s32 rtr_solicit_interval
       968 |     __s32 rtr_solicit_max_interval
       972 |     __s32 rtr_solicit_delay
       976 |     __s32 force_mld_version
       980 |     __s32 mldv1_unsolicited_report_interval
       984 |     __s32 mldv2_unsolicited_report_interval
       988 |     __s32 use_tempaddr
       992 |     __s32 temp_valid_lft
       996 |     __s32 temp_prefered_lft
      1000 |     __s32 regen_min_advance
      1004 |     __s32 regen_max_retry
      1008 |     __s32 max_desync_factor
      1012 |     __s32 max_addresses
      1016 |     __s32 accept_ra_defrtr
      1020 |     __u32 ra_defrtr_metric
      1024 |     __s32 accept_ra_min_hop_limit
      1028 |     __s32 accept_ra_min_lft
      1032 |     __s32 accept_ra_pinfo
      1036 |     __s32 ignore_routes_with_linkdown
      1040 |     __s32 accept_ra_rtr_pref
      1044 |     __s32 rtr_probe_interval
      1048 |     __s32 accept_source_route
      1052 |     __s32 accept_ra_from_local
      1056 |     __s32 optimistic_dad
      1060 |     __s32 use_optimistic
      1064 |     __s32 drop_unicast_in_l2_multicast
      1068 |     __s32 accept_dad
      1072 |     __s32 force_tllao
      1076 |     __s32 ndisc_notify
      1080 |     __s32 suppress_frag_ndisc
      1084 |     __s32 accept_ra_mtu
      1088 |     __s32 drop_unsolicited_na
      1092 |     __s32 accept_untracked_na
      1096 |     struct ipv6_stable_secret stable_secret
      1096 |       bool initialized
      1100 |       struct in6_addr secret
      1100 |         union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
      1100 |           __u8[16] u6_addr8
      1100 |           __be16[8] u6_addr16
      1100 |           __be32[4] u6_addr32
      1116 |     __s32 use_oif_addrs_only
      1120 |     __s32 keep_addr_on_down
      1124 |     __s32 seg6_enabled
      1128 |     __s32 seg6_require_hmac
      1132 |     __u32 enhanced_dad
      1136 |     __u32 addr_gen_mode
      1140 |     __s32 ndisc_tclass
      1144 |     __s32 rpl_seg_enabled
      1148 |     __u32 ioam6_id
      1152 |     __u32 ioam6_id_wide
      1156 |     __u8 ioam6_enabled
      1157 |     __u8 ndisc_evict_nocarrier
      1158 |     __u8 ra_honor_pio_life
      1160 |     struct ctl_table_header * sysctl_header
      1164 |   struct ipv6_devstat stats
      1164 |     struct proc_dir_entry * proc_dir_entry
      1168 |     typeof(struct ipstats_mib) * ipv6
      1172 |     typeof(struct icmpv6_mib_device) * icmpv6dev
      1176 |     typeof(struct icmpv6msg_mib_device) * icmpv6msgdev
      1180 |   struct timer_list rs_timer
      1180 |     struct hlist_node entry
      1180 |       struct hlist_node * next
      1184 |       struct hlist_node ** pprev
      1188 |     unsigned long expires
      1192 |     void (*)(struct timer_list *) function
      1196 |     u32 flags
      1200 |     struct lockdep_map lockdep_map
      1200 |       struct lock_class_key * key
      1204 |       struct lock_class *[2] class_cache
      1212 |       const char * name
      1216 |       u8 wait_type_outer
      1217 |       u8 wait_type_inner
      1218 |       u8 lock_type
      1220 |       int cpu
      1224 |       unsigned long ip
      1228 |   __s32 rs_interval
      1232 |   __u8 rs_probes
      1236 |   unsigned long tstamp
      1240 |   struct callback_head rcu
      1240 |     struct callback_head * next
      1244 |     void (*)(struct callback_head *) func
      1248 |   unsigned int ra_mtu
           | [sizeof=1252, align=4]

*** Dumping AST Record Layout
         0 | union rtable::(anonymous at ../include/net/route.h:68:2)
         0 |   __be32 rt_gw4
         0 |   struct in6_addr rt_gw6
         0 |     union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
         0 |       __u8[16] u6_addr8
         0 |       __be16[8] u6_addr16
         0 |       __be32[4] u6_addr32
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct rtable
         0 |   struct dst_entry dst
         0 |     struct net_device * dev
         4 |     struct dst_ops * ops
         8 |     unsigned long _metrics
        12 |     unsigned long expires
        16 |     struct xfrm_state * xfrm
        20 |     int (*)(struct sk_buff *) input
        24 |     int (*)(struct net *, struct sock *, struct sk_buff *) output
        28 |     unsigned short flags
        30 |     short obsolete
        32 |     unsigned short header_len
        34 |     unsigned short trailer_len
        36 |     int __use
        40 |     unsigned long lastuse
        44 |     struct callback_head callback_head
        44 |       struct callback_head * next
        48 |       void (*)(struct callback_head *) func
        52 |     short error
        54 |     short __pad
        56 |     __u32 tclassid
        60 |     struct lwtunnel_state * lwtstate
        64 |     rcuref_t __rcuref
        64 |       atomic_t refcnt
        64 |         int counter
        68 |     netdevice_tracker dev_tracker
        72 |     struct list_head rt_uncached
        72 |       struct list_head * next
        76 |       struct list_head * prev
        80 |     struct uncached_list * rt_uncached_list
        84 |   int rt_genid
        88 |   unsigned int rt_flags
        92 |   __u16 rt_type
        94 |   __u8 rt_is_input
        95 |   __u8 rt_uses_gateway
        96 |   int rt_iif
       100 |   u8 rt_gw_family
       104 |   union rtable::(anonymous at ../include/net/route.h:68:2) 
       104 |     __be32 rt_gw4
       104 |     struct in6_addr rt_gw6
       104 |       union in6_addr::(unnamed at ../include/uapi/linux/in6.h:34:2) in6_u
       104 |         __u8[16] u6_addr8
       104 |         __be16[8] u6_addr16
       104 |         __be32[4] u6_addr32
   120:0-0 |   u32 rt_mtu_locked
  120:1-31 |   u32 rt_pmtu
           | [sizeof=124, align=4]

*** Dumping AST Record Layout
         0 | struct lwtunnel_state
         0 |   __u16 type
         2 |   __u16 flags
         4 |   __u16 headroom
         8 |   atomic_t refcnt
         8 |     int counter
        12 |   int (*)(struct net *, struct sock *, struct sk_buff *) orig_output
        16 |   int (*)(struct sk_buff *) orig_input
        20 |   struct callback_head rcu
        20 |     struct callback_head * next
        24 |     void (*)(struct callback_head *) func
        28 |   __u8[] data
           | [sizeof=28, align=4]

*** Dumping AST Record Layout
         0 | struct ipcm_cookie
         0 |   struct sockcm_cookie sockc
         0 |     u64 transmit_time
         8 |     u32 mark
        12 |     u32 tsflags
        16 |   __be32 addr
        20 |   int oif
        24 |   struct ip_options_rcu * opt
        28 |   __u8 protocol
        29 |   __u8 ttl
        30 |   __s16 tos
        32 |   char priority
        34 |   __u16 gso_size
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct kvec
         0 |   void * iov_base
         4 |   size_t iov_len
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct interval_tree_node
         0 |   struct rb_node rb
         0 |     unsigned long __rb_parent_color
         4 |     struct rb_node * rb_right
         8 |     struct rb_node * rb_left
        12 |   unsigned long start
        16 |   unsigned long last
        20 |   unsigned long __subtree_last
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct dim_sample
         0 |   ktime_t time
         8 |   u32 pkt_ctr
        12 |   u32 byte_ctr
        16 |   u16 event_ctr
        20 |   u32 comp_ctr
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_query_device_resp
         0 |   __u64 fw_ver
         8 |   __be64 node_guid
        16 |   __be64 sys_image_guid
        24 |   __u64 max_mr_size
        32 |   __u64 page_size_cap
        40 |   __u32 vendor_id
        44 |   __u32 vendor_part_id
        48 |   __u32 hw_ver
        52 |   __u32 max_qp
        56 |   __u32 max_qp_wr
        60 |   __u32 device_cap_flags
        64 |   __u32 max_sge
        68 |   __u32 max_sge_rd
        72 |   __u32 max_cq
        76 |   __u32 max_cqe
        80 |   __u32 max_mr
        84 |   __u32 max_pd
        88 |   __u32 max_qp_rd_atom
        92 |   __u32 max_ee_rd_atom
        96 |   __u32 max_res_rd_atom
       100 |   __u32 max_qp_init_rd_atom
       104 |   __u32 max_ee_init_rd_atom
       108 |   __u32 atomic_cap
       112 |   __u32 max_ee
       116 |   __u32 max_rdd
       120 |   __u32 max_mw
       124 |   __u32 max_raw_ipv6_qp
       128 |   __u32 max_raw_ethy_qp
       132 |   __u32 max_mcast_grp
       136 |   __u32 max_mcast_qp_attach
       140 |   __u32 max_total_mcast_qp_attach
       144 |   __u32 max_ah
       148 |   __u32 max_fmr
       152 |   __u32 max_map_per_fmr
       156 |   __u32 max_srq
       160 |   __u32 max_srq_wr
       164 |   __u32 max_srq_sge
       168 |   __u16 max_pkeys
       170 |   __u8 local_ca_ack_delay
       171 |   __u8 phys_port_cnt
       172 |   __u8[4] reserved
           | [sizeof=176, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_create_cq_resp
         0 |   __u32 cq_handle
         4 |   __u32 cqe
         8 |   __u64[0] driver_data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union ib_uverbs_wc::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:491:2)
         0 |   __be32 imm_data
         0 |   __u32 invalidate_rkey
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_wc
         0 |   __u64 wr_id
         8 |   __u32 status
        12 |   __u32 opcode
        16 |   __u32 vendor_err
        20 |   __u32 byte_len
        24 |   union ib_uverbs_wc::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:491:2) ex
        24 |     __be32 imm_data
        24 |     __u32 invalidate_rkey
        28 |   __u32 qp_num
        32 |   __u32 src_qp
        36 |   __u32 wc_flags
        40 |   __u16 pkey_index
        42 |   __u16 slid
        44 |   __u8 sl
        45 |   __u8 dlid_path_bits
        46 |   __u8 port_num
        47 |   __u8 reserved
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_global_route
         0 |   __u8[16] dgid
        16 |   __u32 flow_label
        20 |   __u8 sgid_index
        21 |   __u8 hop_limit
        22 |   __u8 traffic_class
        23 |   __u8 reserved
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_create_qp_resp
         0 |   __u32 qp_handle
         4 |   __u32 qpn
         8 |   __u32 max_send_wr
        12 |   __u32 max_recv_wr
        16 |   __u32 max_send_sge
        20 |   __u32 max_recv_sge
        24 |   __u32 max_inline_data
        28 |   __u32 reserved
        32 |   __u32[0] driver_data
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_qp_dest
         0 |   __u8[16] dgid
        16 |   __u32 flow_label
        20 |   __u16 dlid
        22 |   __u16 reserved
        24 |   __u8 sgid_index
        25 |   __u8 hop_limit
        26 |   __u8 traffic_class
        27 |   __u8 sl
        28 |   __u8 src_path_bits
        29 |   __u8 static_rate
        30 |   __u8 is_global
        31 |   __u8 port_num
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_modify_qp
         0 |   struct ib_uverbs_qp_dest dest
         0 |     __u8[16] dgid
        16 |     __u32 flow_label
        20 |     __u16 dlid
        22 |     __u16 reserved
        24 |     __u8 sgid_index
        25 |     __u8 hop_limit
        26 |     __u8 traffic_class
        27 |     __u8 sl
        28 |     __u8 src_path_bits
        29 |     __u8 static_rate
        30 |     __u8 is_global
        31 |     __u8 port_num
        32 |   struct ib_uverbs_qp_dest alt_dest
        32 |     __u8[16] dgid
        48 |     __u32 flow_label
        52 |     __u16 dlid
        54 |     __u16 reserved
        56 |     __u8 sgid_index
        57 |     __u8 hop_limit
        58 |     __u8 traffic_class
        59 |     __u8 sl
        60 |     __u8 src_path_bits
        61 |     __u8 static_rate
        62 |     __u8 is_global
        63 |     __u8 port_num
        64 |   __u32 qp_handle
        68 |   __u32 attr_mask
        72 |   __u32 qkey
        76 |   __u32 rq_psn
        80 |   __u32 sq_psn
        84 |   __u32 dest_qp_num
        88 |   __u32 qp_access_flags
        92 |   __u16 pkey_index
        94 |   __u16 alt_pkey_index
        96 |   __u8 qp_state
        97 |   __u8 cur_qp_state
        98 |   __u8 path_mtu
        99 |   __u8 path_mig_state
       100 |   __u8 en_sqd_async_notify
       101 |   __u8 max_rd_atomic
       102 |   __u8 max_dest_rd_atomic
       103 |   __u8 min_rnr_timer
       104 |   __u8 port_num
       105 |   __u8 timeout
       106 |   __u8 retry_cnt
       107 |   __u8 rnr_retry
       108 |   __u8 alt_port_num
       109 |   __u8 alt_timeout
       110 |   __u8[2] reserved
       112 |   __u64[0] driver_data
           | [sizeof=112, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:816:3)
         0 |   __u64 remote_addr
         8 |   __u32 rkey
        12 |   __u32 reserved
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | union ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:811:2)
         0 |   __be32 imm_data
         0 |   __u32 invalidate_rkey
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:821:3)
         0 |   __u64 remote_addr
         8 |   __u64 compare_add
        16 |   __u64 swap
        24 |   __u32 rkey
        28 |   __u32 reserved
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:828:3)
         0 |   __u32 ah
         4 |   __u32 remote_qpn
         8 |   __u32 remote_qkey
        12 |   __u32 reserved
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:815:2)
         0 |   struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:816:3) rdma
         0 |     __u64 remote_addr
         8 |     __u32 rkey
        12 |     __u32 reserved
         0 |   struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:821:3) atomic
         0 |     __u64 remote_addr
         8 |     __u64 compare_add
        16 |     __u64 swap
        24 |     __u32 rkey
        28 |     __u32 reserved
         0 |   struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:828:3) ud
         0 |     __u32 ah
         4 |     __u32 remote_qpn
         8 |     __u32 remote_qkey
        12 |     __u32 reserved
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_send_wr
         0 |   __u64 wr_id
         8 |   __u32 num_sge
        12 |   __u32 opcode
        16 |   __u32 send_flags
        20 |   union ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:811:2) ex
        20 |     __be32 imm_data
        20 |     __u32 invalidate_rkey
        24 |   union ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:815:2) wr
        24 |     struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:816:3) rdma
        24 |       __u64 remote_addr
        32 |       __u32 rkey
        36 |       __u32 reserved
        24 |     struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:821:3) atomic
        24 |       __u64 remote_addr
        32 |       __u64 compare_add
        40 |       __u64 swap
        48 |       __u32 rkey
        52 |       __u32 reserved
        24 |     struct ib_uverbs_send_wr::(unnamed at ../include/uapi/rdma/ib_user_verbs.h:828:3) ud
        24 |       __u32 ah
        28 |       __u32 remote_qpn
        32 |       __u32 remote_qkey
        36 |       __u32 reserved
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_recv_wr
         0 |   __u64 wr_id
         8 |   __u32 num_sge
        12 |   __u32 reserved
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_hdr
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
         8 |   __u64[0] flow_spec_data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:934:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:932:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:934:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:956:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:954:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:956:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:974:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:972:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:974:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:997:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:995:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:997:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1010:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1008:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1010:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1023:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1021:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1023:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1034:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1032:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1034:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1047:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1045:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1047:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_tunnel::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1064:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_tunnel::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1062:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_tunnel::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1064:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1082:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1080:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1082:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_gre::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1109:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_gre::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1107:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_gre::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1109:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_mpls::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1132:3)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec_mpls::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1130:2)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec_mpls::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1132:3) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct rdma_nl_cbs
         0 |   int (*)(struct sk_buff *, struct nlmsghdr *, struct netlink_ext_ack *) doit
         4 |   int (*)(struct sk_buff *, struct netlink_callback *) dump
         8 |   u8 flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct auto_mode_param
         0 |   int qp_type
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct rdma_counter_mode
         0 |   enum rdma_nl_counter_mode mode
         4 |   enum rdma_nl_counter_mask mask
         8 |   struct auto_mode_param param
         8 |     int qp_type
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct rdma_restrack_entry
         0 |   bool valid
     1:0-0 |   u8 no_track
         4 |   struct kref kref
         4 |     struct refcount_struct refcount
         4 |       atomic_t refs
         4 |         int counter
         8 |   struct completion comp
         8 |     unsigned int done
        12 |     struct swait_queue_head wait
        12 |       struct raw_spinlock lock
        12 |         arch_spinlock_t raw_lock
        12 |           volatile unsigned int slock
        16 |         unsigned int magic
        20 |         unsigned int owner_cpu
        24 |         void * owner
        28 |         struct lockdep_map dep_map
        28 |           struct lock_class_key * key
        32 |           struct lock_class *[2] class_cache
        40 |           const char * name
        44 |           u8 wait_type_outer
        45 |           u8 wait_type_inner
        46 |           u8 lock_type
        48 |           int cpu
        52 |           unsigned long ip
        56 |       struct list_head task_list
        56 |         struct list_head * next
        60 |         struct list_head * prev
        64 |   struct task_struct * task
        68 |   const char * kern_name
        72 |   enum rdma_restrack_type type
        76 |   bool user
        80 |   u32 id
           | [sizeof=84, align=4]

*** Dumping AST Record Layout
         0 | struct ib_t10_dif_domain
         0 |   enum ib_t10_dif_bg_type bg_type
         4 |   u16 pi_interval
         6 |   u16 bg
         8 |   u16 app_tag
        12 |   u32 ref_tag
        16 |   bool ref_remap
        17 |   bool app_escape
        18 |   bool ref_escape
        20 |   u16 apptag_check_mask
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct ib_user_mad_hdr
         0 |   __u32 id
         4 |   __u32 status
         8 |   __u32 timeout_ms
        12 |   __u32 retries
        16 |   __u32 length
        20 |   __be32 qpn
        24 |   __be32 qkey
        28 |   __be16 lid
        30 |   __u8 sl
        31 |   __u8 path_bits
        32 |   __u8 grh_present
        33 |   __u8 gid_index
        34 |   __u8 hop_limit
        35 |   __u8 traffic_class
        36 |   __u8[16] gid
        52 |   __be32 flow_label
        56 |   __u16 pkey_index
        58 |   __u8[6] reserved
           | [sizeof=64, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_attr::(unnamed at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:59:3)
         0 |   __u8 elem_id
         1 |   __u8 reserved
           | [sizeof=2, align=1]

*** Dumping AST Record Layout
         0 | union ib_uverbs_attr::(unnamed at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:58:2)
         0 |   struct ib_uverbs_attr::(unnamed at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:59:3) enum_data
         0 |     __u8 elem_id
         1 |     __u8 reserved
         0 |   __u16 reserved
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | union ib_uverbs_attr::(anonymous at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:65:2)
         0 |   __u64 data
         0 |   __s64 data_s64
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_attr
         0 |   __u16 attr_id
         2 |   __u16 len
         4 |   __u16 flags
         6 |   union ib_uverbs_attr::(unnamed at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:58:2) attr_data
         6 |     struct ib_uverbs_attr::(unnamed at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:59:3) enum_data
         6 |       __u8 elem_id
         7 |       __u8 reserved
         6 |     __u16 reserved
         8 |   union ib_uverbs_attr::(anonymous at ../include/uapi/rdma/rdma_user_ioctl_cmds.h:65:2) 
         8 |     __u64 data
         8 |     __s64 data_s64
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_query_port_resp
         0 |   __u32 port_cap_flags
         4 |   __u32 max_msg_sz
         8 |   __u32 bad_pkey_cntr
        12 |   __u32 qkey_viol_cntr
        16 |   __u32 gid_tbl_len
        20 |   __u16 pkey_tbl_len
        22 |   __u16 lid
        24 |   __u16 sm_lid
        26 |   __u8 state
        27 |   __u8 max_mtu
        28 |   __u8 active_mtu
        29 |   __u8 lmc
        30 |   __u8 max_vl_num
        31 |   __u8 sm_sl
        32 |   __u8 subnet_timeout
        33 |   __u8 init_type_reply
        34 |   __u8 active_width
        35 |   __u8 active_speed
        36 |   __u8 phys_state
        37 |   __u8 link_layer
        38 |   __u8 flags
        39 |   __u8 reserved
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2)
         0 |   __be64 subnet_prefix
         8 |   __be64 interface_id
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | union ib_gid
         0 |   u8[16] raw
         0 |   struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
         0 |     __be64 subnet_prefix
         8 |     __be64 interface_id
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_grh
         0 |   __be32 version_tclass_flow
         4 |   __be16 paylen
         6 |   u8 next_hdr
         7 |   u8 hop_limit
         8 |   union ib_gid sgid
         8 |     u8[16] raw
         8 |     struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
         8 |       __be64 subnet_prefix
        16 |       __be64 interface_id
        24 |   union ib_gid dgid
        24 |     u8[16] raw
        24 |     struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
        24 |       __be64 subnet_prefix
        32 |       __be64 interface_id
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct ib_ah_attr
         0 |   u16 dlid
         2 |   u8 src_path_bits
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct ib_global_route
         0 |   const struct ib_gid_attr * sgid_attr
         8 |   union ib_gid dgid
         8 |     u8[16] raw
         8 |     struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
         8 |       __be64 subnet_prefix
        16 |       __be64 interface_id
        24 |   u32 flow_label
        28 |   u8 sgid_index
        29 |   u8 hop_limit
        30 |   u8 traffic_class
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | union ib_wc::(anonymous at ../include/rdma/ib_verbs.h:1015:2)
         0 |   u64 wr_id
         0 |   struct ib_cqe * wr_cqe
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1079:4)
         0 |   struct ib_xrcd * xrcd
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_qp_attr_mask::(unnamed at ../include/rdma/ib_verbs.h:1264:29)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2)
         0 |   u64 wr_id
         0 |   struct ib_cqe * wr_cqe
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2)
         0 |   __be32 imm_data
         0 |   u32 invalidate_rkey
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_send_wr
         0 |   struct ib_send_wr * next
         8 |   union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2) 
         8 |     u64 wr_id
         8 |     struct ib_cqe * wr_cqe
        16 |   struct ib_sge * sg_list
        20 |   int num_sge
        24 |   enum ib_wr_opcode opcode
        28 |   int send_flags
        32 |   union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2) ex
        32 |     __be32 imm_data
        32 |     u32 invalidate_rkey
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct ib_rdma_wr
         0 |   struct ib_send_wr wr
         0 |     struct ib_send_wr * next
         8 |     union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2) 
         8 |       u64 wr_id
         8 |       struct ib_cqe * wr_cqe
        16 |     struct ib_sge * sg_list
        20 |     int num_sge
        24 |     enum ib_wr_opcode opcode
        28 |     int send_flags
        32 |     union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2) ex
        32 |       __be32 imm_data
        32 |       u32 invalidate_rkey
        40 |   u64 remote_addr
        48 |   u32 rkey
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct ib_atomic_wr
         0 |   struct ib_send_wr wr
         0 |     struct ib_send_wr * next
         8 |     union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2) 
         8 |       u64 wr_id
         8 |       struct ib_cqe * wr_cqe
        16 |     struct ib_sge * sg_list
        20 |     int num_sge
        24 |     enum ib_wr_opcode opcode
        28 |     int send_flags
        32 |     union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2) ex
        32 |       __be32 imm_data
        32 |       u32 invalidate_rkey
        40 |   u64 remote_addr
        48 |   u64 compare_add
        56 |   u64 swap
        64 |   u64 compare_add_mask
        72 |   u64 swap_mask
        80 |   u32 rkey
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct ib_ud_wr
         0 |   struct ib_send_wr wr
         0 |     struct ib_send_wr * next
         8 |     union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2) 
         8 |       u64 wr_id
         8 |       struct ib_cqe * wr_cqe
        16 |     struct ib_sge * sg_list
        20 |     int num_sge
        24 |     enum ib_wr_opcode opcode
        28 |     int send_flags
        32 |     union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2) ex
        32 |       __be32 imm_data
        32 |       u32 invalidate_rkey
        40 |   struct ib_ah * ah
        44 |   void * header
        48 |   int hlen
        52 |   int mss
        56 |   u32 remote_qpn
        60 |   u32 remote_qkey
        64 |   u16 pkey_index
        68 |   u32 port_num
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct ib_reg_wr
         0 |   struct ib_send_wr wr
         0 |     struct ib_send_wr * next
         8 |     union ib_send_wr::(anonymous at ../include/rdma/ib_verbs.h:1382:2) 
         8 |       u64 wr_id
         8 |       struct ib_cqe * wr_cqe
        16 |     struct ib_sge * sg_list
        20 |     int num_sge
        24 |     enum ib_wr_opcode opcode
        28 |     int send_flags
        32 |     union ib_send_wr::(unnamed at ../include/rdma/ib_verbs.h:1390:2) ex
        32 |       __be32 imm_data
        32 |       u32 invalidate_rkey
        40 |   struct ib_mr * mr
        44 |   u32 key
        48 |   int access
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct irq_poll
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   unsigned long state
        12 |   int weight
        16 |   irq_poll_fn * poll
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1636:4)
         0 |   struct ib_xrcd * xrcd
         4 |   u32 srq_num
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_port_pkey
         0 |   enum port_pkey_state state
         4 |   u16 pkey_index
         8 |   u32 port_num
        12 |   struct list_head qp_list
        12 |     struct list_head * next
        16 |     struct list_head * prev
        20 |   struct list_head to_error_list
        20 |     struct list_head * next
        24 |     struct list_head * prev
        28 |   struct ib_qp_security * sec
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec::(anonymous at ../include/rdma/ib_verbs.h:2066:2)
         0 |   u32 type
         4 |   u16 size
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_eth_filter
         0 |   u8[6] dst_mac
         6 |   u8[6] src_mac
        12 |   __be16 ether_type
        14 |   __be16 vlan_tag
           | [sizeof=16, align=2]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_eth
         0 |   u32 type
         4 |   u16 size
         6 |   struct ib_flow_eth_filter val
         6 |     u8[6] dst_mac
        12 |     u8[6] src_mac
        18 |     __be16 ether_type
        20 |     __be16 vlan_tag
        22 |   struct ib_flow_eth_filter mask
        22 |     u8[6] dst_mac
        28 |     u8[6] src_mac
        34 |     __be16 ether_type
        36 |     __be16 vlan_tag
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_ib_filter
         0 |   __be16 dlid
         2 |   __u8 sl
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_ib
         0 |   u32 type
         4 |   u16 size
         6 |   struct ib_flow_ib_filter val
         6 |     __be16 dlid
         8 |     __u8 sl
        10 |   struct ib_flow_ib_filter mask
        10 |     __be16 dlid
        12 |     __u8 sl
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_ipv4_filter
         0 |   __be32 src_ip
         4 |   __be32 dst_ip
         8 |   u8 proto
         9 |   u8 tos
        10 |   u8 ttl
        11 |   u8 flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_ipv4
         0 |   u32 type
         4 |   u16 size
         8 |   struct ib_flow_ipv4_filter val
         8 |     __be32 src_ip
        12 |     __be32 dst_ip
        16 |     u8 proto
        17 |     u8 tos
        18 |     u8 ttl
        19 |     u8 flags
        20 |   struct ib_flow_ipv4_filter mask
        20 |     __be32 src_ip
        24 |     __be32 dst_ip
        28 |     u8 proto
        29 |     u8 tos
        30 |     u8 ttl
        31 |     u8 flags
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_tcp_udp_filter
         0 |   __be16 dst_port
         2 |   __be16 src_port
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_tcp_udp
         0 |   u32 type
         4 |   u16 size
         6 |   struct ib_flow_tcp_udp_filter val
         6 |     __be16 dst_port
         8 |     __be16 src_port
        10 |   struct ib_flow_tcp_udp_filter mask
        10 |     __be16 dst_port
        12 |     __be16 src_port
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_ipv6_filter
         0 |   u8[16] src_ip
        16 |   u8[16] dst_ip
        32 |   __be32 flow_label
        36 |   u8 next_hdr
        37 |   u8 traffic_class
        38 |   u8 hop_limit
           | [sizeof=39, align=1]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_ipv6
         0 |   u32 type
         4 |   u16 size
         6 |   struct ib_flow_ipv6_filter val
         6 |     u8[16] src_ip
        22 |     u8[16] dst_ip
        38 |     __be32 flow_label
        42 |     u8 next_hdr
        43 |     u8 traffic_class
        44 |     u8 hop_limit
        45 |   struct ib_flow_ipv6_filter mask
        45 |     u8[16] src_ip
        61 |     u8[16] dst_ip
        77 |     __be32 flow_label
        81 |     u8 next_hdr
        82 |     u8 traffic_class
        83 |     u8 hop_limit
           | [sizeof=84, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_tunnel_filter
         0 |   __be32 tunnel_id
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_tunnel
         0 |   u32 type
         4 |   u16 size
         8 |   struct ib_flow_tunnel_filter val
         8 |     __be32 tunnel_id
        12 |   struct ib_flow_tunnel_filter mask
        12 |     __be32 tunnel_id
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_esp_filter
         0 |   __be32 spi
         4 |   __be32 seq
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_esp
         0 |   u32 type
         4 |   u16 size
         8 |   struct ib_flow_esp_filter val
         8 |     __be32 spi
        12 |     __be32 seq
        16 |   struct ib_flow_esp_filter mask
        16 |     __be32 spi
        20 |     __be32 seq
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_gre_filter
         0 |   __be16 c_ks_res0_ver
         2 |   __be16 protocol
         4 |   __be32 key
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_gre
         0 |   u32 type
         4 |   u16 size
         8 |   struct ib_flow_gre_filter val
         8 |     __be16 c_ks_res0_ver
        10 |     __be16 protocol
        12 |     __be32 key
        16 |   struct ib_flow_gre_filter mask
        16 |     __be16 c_ks_res0_ver
        18 |     __be16 protocol
        20 |     __be32 key
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_mpls_filter
         0 |   __be32 tag
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_mpls
         0 |   u32 type
         4 |   u16 size
         8 |   struct ib_flow_mpls_filter val
         8 |     __be32 tag
        12 |   struct ib_flow_mpls_filter mask
        12 |     __be32 tag
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_action_tag
         0 |   enum ib_flow_spec_type type
         4 |   u16 size
         8 |   u32 tag_id
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_action_drop
         0 |   enum ib_flow_spec_type type
         4 |   u16 size
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_action_handle
         0 |   enum ib_flow_spec_type type
         4 |   u16 size
         8 |   struct ib_flow_action * act
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_flow_spec_action_count
         0 |   enum ib_flow_spec_type type
         4 |   u16 size
         8 |   struct ib_counters * counters
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | union ib_flow_spec
         0 |   struct ib_flow_spec::(anonymous at ../include/rdma/ib_verbs.h:2066:2) 
         0 |     u32 type
         4 |     u16 size
         0 |   struct ib_flow_spec_eth eth
         0 |     u32 type
         4 |     u16 size
         6 |     struct ib_flow_eth_filter val
         6 |       u8[6] dst_mac
        12 |       u8[6] src_mac
        18 |       __be16 ether_type
        20 |       __be16 vlan_tag
        22 |     struct ib_flow_eth_filter mask
        22 |       u8[6] dst_mac
        28 |       u8[6] src_mac
        34 |       __be16 ether_type
        36 |       __be16 vlan_tag
         0 |   struct ib_flow_spec_ib ib
         0 |     u32 type
         4 |     u16 size
         6 |     struct ib_flow_ib_filter val
         6 |       __be16 dlid
         8 |       __u8 sl
        10 |     struct ib_flow_ib_filter mask
        10 |       __be16 dlid
        12 |       __u8 sl
         0 |   struct ib_flow_spec_ipv4 ipv4
         0 |     u32 type
         4 |     u16 size
         8 |     struct ib_flow_ipv4_filter val
         8 |       __be32 src_ip
        12 |       __be32 dst_ip
        16 |       u8 proto
        17 |       u8 tos
        18 |       u8 ttl
        19 |       u8 flags
        20 |     struct ib_flow_ipv4_filter mask
        20 |       __be32 src_ip
        24 |       __be32 dst_ip
        28 |       u8 proto
        29 |       u8 tos
        30 |       u8 ttl
        31 |       u8 flags
         0 |   struct ib_flow_spec_tcp_udp tcp_udp
         0 |     u32 type
         4 |     u16 size
         6 |     struct ib_flow_tcp_udp_filter val
         6 |       __be16 dst_port
         8 |       __be16 src_port
        10 |     struct ib_flow_tcp_udp_filter mask
        10 |       __be16 dst_port
        12 |       __be16 src_port
         0 |   struct ib_flow_spec_ipv6 ipv6
         0 |     u32 type
         4 |     u16 size
         6 |     struct ib_flow_ipv6_filter val
         6 |       u8[16] src_ip
        22 |       u8[16] dst_ip
        38 |       __be32 flow_label
        42 |       u8 next_hdr
        43 |       u8 traffic_class
        44 |       u8 hop_limit
        45 |     struct ib_flow_ipv6_filter mask
        45 |       u8[16] src_ip
        61 |       u8[16] dst_ip
        77 |       __be32 flow_label
        81 |       u8 next_hdr
        82 |       u8 traffic_class
        83 |       u8 hop_limit
         0 |   struct ib_flow_spec_tunnel tunnel
         0 |     u32 type
         4 |     u16 size
         8 |     struct ib_flow_tunnel_filter val
         8 |       __be32 tunnel_id
        12 |     struct ib_flow_tunnel_filter mask
        12 |       __be32 tunnel_id
         0 |   struct ib_flow_spec_esp esp
         0 |     u32 type
         4 |     u16 size
         8 |     struct ib_flow_esp_filter val
         8 |       __be32 spi
        12 |       __be32 seq
        16 |     struct ib_flow_esp_filter mask
        16 |       __be32 spi
        20 |       __be32 seq
         0 |   struct ib_flow_spec_gre gre
         0 |     u32 type
         4 |     u16 size
         8 |     struct ib_flow_gre_filter val
         8 |       __be16 c_ks_res0_ver
        10 |       __be16 protocol
        12 |       __be32 key
        16 |     struct ib_flow_gre_filter mask
        16 |       __be16 c_ks_res0_ver
        18 |       __be16 protocol
        20 |       __be32 key
         0 |   struct ib_flow_spec_mpls mpls
         0 |     u32 type
         4 |     u16 size
         8 |     struct ib_flow_mpls_filter val
         8 |       __be32 tag
        12 |     struct ib_flow_mpls_filter mask
        12 |       __be32 tag
         0 |   struct ib_flow_spec_action_tag flow_tag
         0 |     enum ib_flow_spec_type type
         4 |     u16 size
         8 |     u32 tag_id
         0 |   struct ib_flow_spec_action_drop drop
         0 |     enum ib_flow_spec_type type
         4 |     u16 size
         0 |   struct ib_flow_spec_action_handle action
         0 |     enum ib_flow_spec_type type
         4 |     u16 size
         8 |     struct ib_flow_action * act
         0 |   struct ib_flow_spec_action_count flow_count
         0 |     enum ib_flow_spec_type type
         4 |     u16 size
         8 |     struct ib_counters * counters
           | [sizeof=84, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_action_esp_keymat_aes_gcm
         0 |   __u64 iv
         8 |   __u32 iv_algo
        12 |   __u32 salt
        16 |   __u32 icv_len
        20 |   __u32 key_len
        24 |   __u32[8] aes_key
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_action_esp_replay_bmp
         0 |   __u32 size
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_device_ops
         0 |   struct module * owner
         4 |   enum rdma_driver_id driver_id
         8 |   u32 uverbs_abi_ver
    12:0-0 |   unsigned int uverbs_no_driver_id_binding
        16 |   const struct attribute_group * device_group
        20 |   const struct attribute_group ** port_groups
        24 |   int (*)(struct ib_qp *, const struct ib_send_wr *, const struct ib_send_wr **) post_send
        28 |   int (*)(struct ib_qp *, const struct ib_recv_wr *, const struct ib_recv_wr **) post_recv
        32 |   void (*)(struct ib_qp *) drain_rq
        36 |   void (*)(struct ib_qp *) drain_sq
        40 |   int (*)(struct ib_cq *, int, struct ib_wc *) poll_cq
        44 |   int (*)(struct ib_cq *, int) peek_cq
        48 |   int (*)(struct ib_cq *, enum ib_cq_notify_flags) req_notify_cq
        52 |   int (*)(struct ib_srq *, const struct ib_recv_wr *, const struct ib_recv_wr **) post_srq_recv
        56 |   int (*)(struct ib_device *, int, u32, const struct ib_wc *, const struct ib_grh *, const struct ib_mad *, struct ib_mad *, size_t *, u16 *) process_mad
        60 |   int (*)(struct ib_device *, struct ib_device_attr *, struct ib_udata *) query_device
        64 |   int (*)(struct ib_device *, int, struct ib_device_modify *) modify_device
        68 |   void (*)(struct ib_device *, char *) get_dev_fw_str
        72 |   const struct cpumask *(*)(struct ib_device *, int) get_vector_affinity
        76 |   int (*)(struct ib_device *, u32, struct ib_port_attr *) query_port
        80 |   int (*)(struct ib_device *, u32, int, struct ib_port_modify *) modify_port
        84 |   int (*)(struct ib_device *, u32, struct ib_port_immutable *) get_port_immutable
        88 |   enum rdma_link_layer (*)(struct ib_device *, u32) get_link_layer
        92 |   struct net_device *(*)(struct ib_device *, u32) get_netdev
        96 |   struct net_device *(*)(struct ib_device *, u32, enum rdma_netdev_t, const char *, unsigned char, void (*)(struct net_device *)) alloc_rdma_netdev
       100 |   int (*)(struct ib_device *, u32, enum rdma_netdev_t, struct rdma_netdev_alloc_params *) rdma_netdev_get_params
       104 |   int (*)(struct ib_device *, u32, int, union ib_gid *) query_gid
       108 |   int (*)(const struct ib_gid_attr *, void **) add_gid
       112 |   int (*)(const struct ib_gid_attr *, void **) del_gid
       116 |   int (*)(struct ib_device *, u32, u16, u16 *) query_pkey
       120 |   int (*)(struct ib_ucontext *, struct ib_udata *) alloc_ucontext
       124 |   void (*)(struct ib_ucontext *) dealloc_ucontext
       128 |   int (*)(struct ib_ucontext *, struct vm_area_struct *) mmap
       132 |   void (*)(struct rdma_user_mmap_entry *) mmap_free
       136 |   void (*)(struct ib_ucontext *) disassociate_ucontext
       140 |   int (*)(struct ib_pd *, struct ib_udata *) alloc_pd
       144 |   int (*)(struct ib_pd *, struct ib_udata *) dealloc_pd
       148 |   int (*)(struct ib_ah *, struct rdma_ah_init_attr *, struct ib_udata *) create_ah
       152 |   int (*)(struct ib_ah *, struct rdma_ah_init_attr *, struct ib_udata *) create_user_ah
       156 |   int (*)(struct ib_ah *, struct rdma_ah_attr *) modify_ah
       160 |   int (*)(struct ib_ah *, struct rdma_ah_attr *) query_ah
       164 |   int (*)(struct ib_ah *, u32) destroy_ah
       168 |   int (*)(struct ib_srq *, struct ib_srq_init_attr *, struct ib_udata *) create_srq
       172 |   int (*)(struct ib_srq *, struct ib_srq_attr *, enum ib_srq_attr_mask, struct ib_udata *) modify_srq
       176 |   int (*)(struct ib_srq *, struct ib_srq_attr *) query_srq
       180 |   int (*)(struct ib_srq *, struct ib_udata *) destroy_srq
       184 |   int (*)(struct ib_qp *, struct ib_qp_init_attr *, struct ib_udata *) create_qp
       188 |   int (*)(struct ib_qp *, struct ib_qp_attr *, int, struct ib_udata *) modify_qp
       192 |   int (*)(struct ib_qp *, struct ib_qp_attr *, int, struct ib_qp_init_attr *) query_qp
       196 |   int (*)(struct ib_qp *, struct ib_udata *) destroy_qp
       200 |   int (*)(struct ib_cq *, const struct ib_cq_init_attr *, struct uverbs_attr_bundle *) create_cq
       204 |   int (*)(struct ib_cq *, u16, u16) modify_cq
       208 |   int (*)(struct ib_cq *, struct ib_udata *) destroy_cq
       212 |   int (*)(struct ib_cq *, int, struct ib_udata *) resize_cq
       216 |   struct ib_mr *(*)(struct ib_pd *, int) get_dma_mr
       220 |   struct ib_mr *(*)(struct ib_pd *, u64, u64, u64, int, struct ib_udata *) reg_user_mr
       224 |   struct ib_mr *(*)(struct ib_pd *, u64, u64, u64, int, int, struct ib_udata *) reg_user_mr_dmabuf
       228 |   struct ib_mr *(*)(struct ib_mr *, int, u64, u64, u64, int, struct ib_pd *, struct ib_udata *) rereg_user_mr
       232 |   int (*)(struct ib_mr *, struct ib_udata *) dereg_mr
       236 |   struct ib_mr *(*)(struct ib_pd *, enum ib_mr_type, u32) alloc_mr
       240 |   struct ib_mr *(*)(struct ib_pd *, u32, u32) alloc_mr_integrity
       244 |   int (*)(struct ib_pd *, enum ib_uverbs_advise_mr_advice, u32, struct ib_sge *, u32, struct uverbs_attr_bundle *) advise_mr
       248 |   int (*)(struct ib_mr *, struct scatterlist *, int, unsigned int *) map_mr_sg
       252 |   int (*)(struct ib_mr *, u32, struct ib_mr_status *) check_mr_status
       256 |   int (*)(struct ib_mw *, struct ib_udata *) alloc_mw
       260 |   int (*)(struct ib_mw *) dealloc_mw
       264 |   int (*)(struct ib_qp *, union ib_gid *, u16) attach_mcast
       268 |   int (*)(struct ib_qp *, union ib_gid *, u16) detach_mcast
       272 |   int (*)(struct ib_xrcd *, struct ib_udata *) alloc_xrcd
       276 |   int (*)(struct ib_xrcd *, struct ib_udata *) dealloc_xrcd
       280 |   struct ib_flow *(*)(struct ib_qp *, struct ib_flow_attr *, struct ib_udata *) create_flow
       284 |   int (*)(struct ib_flow *) destroy_flow
       288 |   int (*)(struct ib_flow_action *) destroy_flow_action
       292 |   int (*)(struct ib_device *, int, u32, int) set_vf_link_state
       296 |   int (*)(struct ib_device *, int, u32, struct ifla_vf_info *) get_vf_config
       300 |   int (*)(struct ib_device *, int, u32, struct ifla_vf_stats *) get_vf_stats
       304 |   int (*)(struct ib_device *, int, u32, struct ifla_vf_guid *, struct ifla_vf_guid *) get_vf_guid
       308 |   int (*)(struct ib_device *, int, u32, u64, int) set_vf_guid
       312 |   struct ib_wq *(*)(struct ib_pd *, struct ib_wq_init_attr *, struct ib_udata *) create_wq
       316 |   int (*)(struct ib_wq *, struct ib_udata *) destroy_wq
       320 |   int (*)(struct ib_wq *, struct ib_wq_attr *, u32, struct ib_udata *) modify_wq
       324 |   int (*)(struct ib_rwq_ind_table *, struct ib_rwq_ind_table_init_attr *, struct ib_udata *) create_rwq_ind_table
       328 |   int (*)(struct ib_rwq_ind_table *) destroy_rwq_ind_table
       332 |   struct ib_dm *(*)(struct ib_device *, struct ib_ucontext *, struct ib_dm_alloc_attr *, struct uverbs_attr_bundle *) alloc_dm
       336 |   int (*)(struct ib_dm *, struct uverbs_attr_bundle *) dealloc_dm
       340 |   struct ib_mr *(*)(struct ib_pd *, struct ib_dm *, struct ib_dm_mr_attr *, struct uverbs_attr_bundle *) reg_dm_mr
       344 |   int (*)(struct ib_counters *, struct uverbs_attr_bundle *) create_counters
       348 |   int (*)(struct ib_counters *) destroy_counters
       352 |   int (*)(struct ib_counters *, struct ib_counters_read_attr *, struct uverbs_attr_bundle *) read_counters
       356 |   int (*)(struct ib_mr *, struct scatterlist *, int, unsigned int *, struct scatterlist *, int, unsigned int *) map_mr_sg_pi
       360 |   struct rdma_hw_stats *(*)(struct ib_device *) alloc_hw_device_stats
       364 |   struct rdma_hw_stats *(*)(struct ib_device *, u32) alloc_hw_port_stats
       368 |   int (*)(struct ib_device *, struct rdma_hw_stats *, u32, int) get_hw_stats
       372 |   int (*)(struct ib_device *, u32, unsigned int, bool) modify_hw_stat
       376 |   int (*)(struct sk_buff *, struct ib_mr *) fill_res_mr_entry
       380 |   int (*)(struct sk_buff *, struct ib_mr *) fill_res_mr_entry_raw
       384 |   int (*)(struct sk_buff *, struct ib_cq *) fill_res_cq_entry
       388 |   int (*)(struct sk_buff *, struct ib_cq *) fill_res_cq_entry_raw
       392 |   int (*)(struct sk_buff *, struct ib_qp *) fill_res_qp_entry
       396 |   int (*)(struct sk_buff *, struct ib_qp *) fill_res_qp_entry_raw
       400 |   int (*)(struct sk_buff *, struct rdma_cm_id *) fill_res_cm_id_entry
       404 |   int (*)(struct sk_buff *, struct ib_srq *) fill_res_srq_entry
       408 |   int (*)(struct sk_buff *, struct ib_srq *) fill_res_srq_entry_raw
       412 |   int (*)(struct ib_device *) enable_driver
       416 |   void (*)(struct ib_device *) dealloc_driver
       420 |   void (*)(struct ib_qp *) iw_add_ref
       424 |   void (*)(struct ib_qp *) iw_rem_ref
       428 |   struct ib_qp *(*)(struct ib_device *, int) iw_get_qp
       432 |   int (*)(struct iw_cm_id *, struct iw_cm_conn_param *) iw_connect
       436 |   int (*)(struct iw_cm_id *, struct iw_cm_conn_param *) iw_accept
       440 |   int (*)(struct iw_cm_id *, const void *, u8) iw_reject
       444 |   int (*)(struct iw_cm_id *, int) iw_create_listen
       448 |   int (*)(struct iw_cm_id *) iw_destroy_listen
       452 |   int (*)(struct rdma_counter *, struct ib_qp *) counter_bind_qp
       456 |   int (*)(struct ib_qp *) counter_unbind_qp
       460 |   int (*)(struct rdma_counter *) counter_dealloc
       464 |   struct rdma_hw_stats *(*)(struct rdma_counter *) counter_alloc_stats
       468 |   int (*)(struct rdma_counter *) counter_update_stats
       472 |   int (*)(struct sk_buff *, struct ib_mr *) fill_stat_mr_entry
       476 |   int (*)(struct ib_ucontext *, struct uverbs_attr_bundle *) query_ucontext
       480 |   int (*)(struct ib_device *) get_numa_node
       484 |   struct ib_device *(*)(struct ib_device *, enum rdma_nl_dev_type, const char *) add_sub_dev
       488 |   void (*)(struct ib_device *) del_sub_dev
       492 |   size_t size_ib_ah
       496 |   size_t size_ib_counters
       500 |   size_t size_ib_cq
       504 |   size_t size_ib_mw
       508 |   size_t size_ib_pd
       512 |   size_t size_ib_qp
       516 |   size_t size_ib_rwq_ind_table
       520 |   size_t size_ib_srq
       524 |   size_t size_ib_ucontext
       528 |   size_t size_ib_xrcd
           | [sizeof=532, align=4]

*** Dumping AST Record Layout
         0 | struct ib_core_device
         0 |   struct device dev
         0 |     struct kobject kobj
         0 |       const char * name
         4 |       struct list_head entry
         4 |         struct list_head * next
         8 |         struct list_head * prev
        12 |       struct kobject * parent
        16 |       struct kset * kset
        20 |       const struct kobj_type * ktype
        24 |       struct kernfs_node * sd
        28 |       struct kref kref
        28 |         struct refcount_struct refcount
        28 |           atomic_t refs
        28 |             int counter
    32:0-0 |       unsigned int state_initialized
    32:1-1 |       unsigned int state_in_sysfs
    32:2-2 |       unsigned int state_add_uevent_sent
    32:3-3 |       unsigned int state_remove_uevent_sent
    32:4-4 |       unsigned int uevent_suppress
        36 |       struct delayed_work release
        36 |         struct work_struct work
        36 |           atomic_t data
        36 |             int counter
        40 |           struct list_head entry
        40 |             struct list_head * next
        44 |             struct list_head * prev
        48 |           work_func_t func
        52 |           struct lockdep_map lockdep_map
        52 |             struct lock_class_key * key
        56 |             struct lock_class *[2] class_cache
        64 |             const char * name
        68 |             u8 wait_type_outer
        69 |             u8 wait_type_inner
        70 |             u8 lock_type
        72 |             int cpu
        76 |             unsigned long ip
        80 |         struct timer_list timer
        80 |           struct hlist_node entry
        80 |             struct hlist_node * next
        84 |             struct hlist_node ** pprev
        88 |           unsigned long expires
        92 |           void (*)(struct timer_list *) function
        96 |           u32 flags
       100 |           struct lockdep_map lockdep_map
       100 |             struct lock_class_key * key
       104 |             struct lock_class *[2] class_cache
       112 |             const char * name
       116 |             u8 wait_type_outer
       117 |             u8 wait_type_inner
       118 |             u8 lock_type
       120 |             int cpu
       124 |             unsigned long ip
       128 |         struct workqueue_struct * wq
       132 |         int cpu
       136 |     struct device * parent
       140 |     struct device_private * p
       144 |     const char * init_name
       148 |     const struct device_type * type
       152 |     const struct bus_type * bus
       156 |     struct device_driver * driver
       160 |     void * platform_data
       164 |     void * driver_data
       168 |     struct mutex mutex
       168 |       atomic_t owner
       168 |         int counter
       172 |       struct raw_spinlock wait_lock
       172 |         arch_spinlock_t raw_lock
       172 |           volatile unsigned int slock
       176 |         unsigned int magic
       180 |         unsigned int owner_cpu
       184 |         void * owner
       188 |         struct lockdep_map dep_map
       188 |           struct lock_class_key * key
       192 |           struct lock_class *[2] class_cache
       200 |           const char * name
       204 |           u8 wait_type_outer
       205 |           u8 wait_type_inner
       206 |           u8 lock_type
       208 |           int cpu
       212 |           unsigned long ip
       216 |       struct list_head wait_list
       216 |         struct list_head * next
       220 |         struct list_head * prev
       224 |       void * magic
       228 |       struct lockdep_map dep_map
       228 |         struct lock_class_key * key
       232 |         struct lock_class *[2] class_cache
       240 |         const char * name
       244 |         u8 wait_type_outer
       245 |         u8 wait_type_inner
       246 |         u8 lock_type
       248 |         int cpu
       252 |         unsigned long ip
       256 |     struct dev_links_info links
       256 |       struct list_head suppliers
       256 |         struct list_head * next
       260 |         struct list_head * prev
       264 |       struct list_head consumers
       264 |         struct list_head * next
       268 |         struct list_head * prev
       272 |       struct list_head defer_sync
       272 |         struct list_head * next
       276 |         struct list_head * prev
       280 |       enum dl_dev_state status
       284 |     struct dev_pm_info power
       284 |       struct pm_message power_state
       284 |         int event
   288:0-0 |       bool can_wakeup
   288:1-1 |       bool async_suspend
   288:2-2 |       bool in_dpm_list
   288:3-3 |       bool is_prepared
   288:4-4 |       bool is_suspended
   288:5-5 |       bool is_noirq_suspended
   288:6-6 |       bool is_late_suspended
   288:7-7 |       bool no_pm
   289:0-0 |       bool early_init
   289:1-1 |       bool direct_complete
       292 |       u32 driver_flags
       296 |       struct spinlock lock
       296 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       296 |           struct raw_spinlock rlock
       296 |             arch_spinlock_t raw_lock
       296 |               volatile unsigned int slock
       300 |             unsigned int magic
       304 |             unsigned int owner_cpu
       308 |             void * owner
       312 |             struct lockdep_map dep_map
       312 |               struct lock_class_key * key
       316 |               struct lock_class *[2] class_cache
       324 |               const char * name
       328 |               u8 wait_type_outer
       329 |               u8 wait_type_inner
       330 |               u8 lock_type
       332 |               int cpu
       336 |               unsigned long ip
       296 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       296 |             u8[16] __padding
       312 |             struct lockdep_map dep_map
       312 |               struct lock_class_key * key
       316 |               struct lock_class *[2] class_cache
       324 |               const char * name
       328 |               u8 wait_type_outer
       329 |               u8 wait_type_inner
       330 |               u8 lock_type
       332 |               int cpu
       336 |               unsigned long ip
   340:0-0 |       bool should_wakeup
       344 |       struct pm_subsys_data * subsys_data
       348 |       void (*)(struct device *, s32) set_latency_tolerance
       352 |       struct dev_pm_qos * qos
       356 |     struct dev_pm_domain * pm_domain
       360 |     struct dev_msi_info msi
       360 |     u64 * dma_mask
       368 |     u64 coherent_dma_mask
       376 |     u64 bus_dma_limit
       384 |     const struct bus_dma_region * dma_range_map
       388 |     struct device_dma_parameters * dma_parms
       392 |     struct list_head dma_pools
       392 |       struct list_head * next
       396 |       struct list_head * prev
       400 |     struct dma_coherent_mem * dma_mem
       404 |     struct dev_archdata archdata
       404 |     struct device_node * of_node
       408 |     struct fwnode_handle * fwnode
       412 |     dev_t devt
       416 |     u32 id
       420 |     struct spinlock devres_lock
       420 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       420 |         struct raw_spinlock rlock
       420 |           arch_spinlock_t raw_lock
       420 |             volatile unsigned int slock
       424 |           unsigned int magic
       428 |           unsigned int owner_cpu
       432 |           void * owner
       436 |           struct lockdep_map dep_map
       436 |             struct lock_class_key * key
       440 |             struct lock_class *[2] class_cache
       448 |             const char * name
       452 |             u8 wait_type_outer
       453 |             u8 wait_type_inner
       454 |             u8 lock_type
       456 |             int cpu
       460 |             unsigned long ip
       420 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       420 |           u8[16] __padding
       436 |           struct lockdep_map dep_map
       436 |             struct lock_class_key * key
       440 |             struct lock_class *[2] class_cache
       448 |             const char * name
       452 |             u8 wait_type_outer
       453 |             u8 wait_type_inner
       454 |             u8 lock_type
       456 |             int cpu
       460 |             unsigned long ip
       464 |     struct list_head devres_head
       464 |       struct list_head * next
       468 |       struct list_head * prev
       472 |     const struct class * class
       476 |     const struct attribute_group ** groups
       480 |     void (*)(struct device *) release
       484 |     struct iommu_group * iommu_group
       488 |     struct dev_iommu * iommu
       492 |     struct device_physical_location * physical_location
       496 |     enum device_removable removable
   500:0-0 |     bool offline_disabled
   500:1-1 |     bool offline
   500:2-2 |     bool of_node_reused
   500:3-3 |     bool state_synced
   500:4-4 |     bool can_match
   500:5-5 |     bool dma_coherent
   500:6-6 |     bool dma_skip_sync
       504 |   possible_net_t rdma_net
       504 |     struct net * net
       508 |   struct kobject * ports_kobj
       512 |   struct list_head port_list
       512 |     struct list_head * next
       516 |     struct list_head * prev
       520 |   struct ib_device * owner
           | [sizeof=528, align=8]

*** Dumping AST Record Layout
         0 | union ib_device::(anonymous at ../include/rdma/ib_verbs.h:2729:2)
         0 |   struct device dev
         0 |     struct kobject kobj
         0 |       const char * name
         4 |       struct list_head entry
         4 |         struct list_head * next
         8 |         struct list_head * prev
        12 |       struct kobject * parent
        16 |       struct kset * kset
        20 |       const struct kobj_type * ktype
        24 |       struct kernfs_node * sd
        28 |       struct kref kref
        28 |         struct refcount_struct refcount
        28 |           atomic_t refs
        28 |             int counter
    32:0-0 |       unsigned int state_initialized
    32:1-1 |       unsigned int state_in_sysfs
    32:2-2 |       unsigned int state_add_uevent_sent
    32:3-3 |       unsigned int state_remove_uevent_sent
    32:4-4 |       unsigned int uevent_suppress
        36 |       struct delayed_work release
        36 |         struct work_struct work
        36 |           atomic_t data
        36 |             int counter
        40 |           struct list_head entry
        40 |             struct list_head * next
        44 |             struct list_head * prev
        48 |           work_func_t func
        52 |           struct lockdep_map lockdep_map
        52 |             struct lock_class_key * key
        56 |             struct lock_class *[2] class_cache
        64 |             const char * name
        68 |             u8 wait_type_outer
        69 |             u8 wait_type_inner
        70 |             u8 lock_type
        72 |             int cpu
        76 |             unsigned long ip
        80 |         struct timer_list timer
        80 |           struct hlist_node entry
        80 |             struct hlist_node * next
        84 |             struct hlist_node ** pprev
        88 |           unsigned long expires
        92 |           void (*)(struct timer_list *) function
        96 |           u32 flags
       100 |           struct lockdep_map lockdep_map
       100 |             struct lock_class_key * key
       104 |             struct lock_class *[2] class_cache
       112 |             const char * name
       116 |             u8 wait_type_outer
       117 |             u8 wait_type_inner
       118 |             u8 lock_type
       120 |             int cpu
       124 |             unsigned long ip
       128 |         struct workqueue_struct * wq
       132 |         int cpu
       136 |     struct device * parent
       140 |     struct device_private * p
       144 |     const char * init_name
       148 |     const struct device_type * type
       152 |     const struct bus_type * bus
       156 |     struct device_driver * driver
       160 |     void * platform_data
       164 |     void * driver_data
       168 |     struct mutex mutex
       168 |       atomic_t owner
       168 |         int counter
       172 |       struct raw_spinlock wait_lock
       172 |         arch_spinlock_t raw_lock
       172 |           volatile unsigned int slock
       176 |         unsigned int magic
       180 |         unsigned int owner_cpu
       184 |         void * owner
       188 |         struct lockdep_map dep_map
       188 |           struct lock_class_key * key
       192 |           struct lock_class *[2] class_cache
       200 |           const char * name
       204 |           u8 wait_type_outer
       205 |           u8 wait_type_inner
       206 |           u8 lock_type
       208 |           int cpu
       212 |           unsigned long ip
       216 |       struct list_head wait_list
       216 |         struct list_head * next
       220 |         struct list_head * prev
       224 |       void * magic
       228 |       struct lockdep_map dep_map
       228 |         struct lock_class_key * key
       232 |         struct lock_class *[2] class_cache
       240 |         const char * name
       244 |         u8 wait_type_outer
       245 |         u8 wait_type_inner
       246 |         u8 lock_type
       248 |         int cpu
       252 |         unsigned long ip
       256 |     struct dev_links_info links
       256 |       struct list_head suppliers
       256 |         struct list_head * next
       260 |         struct list_head * prev
       264 |       struct list_head consumers
       264 |         struct list_head * next
       268 |         struct list_head * prev
       272 |       struct list_head defer_sync
       272 |         struct list_head * next
       276 |         struct list_head * prev
       280 |       enum dl_dev_state status
       284 |     struct dev_pm_info power
       284 |       struct pm_message power_state
       284 |         int event
   288:0-0 |       bool can_wakeup
   288:1-1 |       bool async_suspend
   288:2-2 |       bool in_dpm_list
   288:3-3 |       bool is_prepared
   288:4-4 |       bool is_suspended
   288:5-5 |       bool is_noirq_suspended
   288:6-6 |       bool is_late_suspended
   288:7-7 |       bool no_pm
   289:0-0 |       bool early_init
   289:1-1 |       bool direct_complete
       292 |       u32 driver_flags
       296 |       struct spinlock lock
       296 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       296 |           struct raw_spinlock rlock
       296 |             arch_spinlock_t raw_lock
       296 |               volatile unsigned int slock
       300 |             unsigned int magic
       304 |             unsigned int owner_cpu
       308 |             void * owner
       312 |             struct lockdep_map dep_map
       312 |               struct lock_class_key * key
       316 |               struct lock_class *[2] class_cache
       324 |               const char * name
       328 |               u8 wait_type_outer
       329 |               u8 wait_type_inner
       330 |               u8 lock_type
       332 |               int cpu
       336 |               unsigned long ip
       296 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       296 |             u8[16] __padding
       312 |             struct lockdep_map dep_map
       312 |               struct lock_class_key * key
       316 |               struct lock_class *[2] class_cache
       324 |               const char * name
       328 |               u8 wait_type_outer
       329 |               u8 wait_type_inner
       330 |               u8 lock_type
       332 |               int cpu
       336 |               unsigned long ip
   340:0-0 |       bool should_wakeup
       344 |       struct pm_subsys_data * subsys_data
       348 |       void (*)(struct device *, s32) set_latency_tolerance
       352 |       struct dev_pm_qos * qos
       356 |     struct dev_pm_domain * pm_domain
       360 |     struct dev_msi_info msi
       360 |     u64 * dma_mask
       368 |     u64 coherent_dma_mask
       376 |     u64 bus_dma_limit
       384 |     const struct bus_dma_region * dma_range_map
       388 |     struct device_dma_parameters * dma_parms
       392 |     struct list_head dma_pools
       392 |       struct list_head * next
       396 |       struct list_head * prev
       400 |     struct dma_coherent_mem * dma_mem
       404 |     struct dev_archdata archdata
       404 |     struct device_node * of_node
       408 |     struct fwnode_handle * fwnode
       412 |     dev_t devt
       416 |     u32 id
       420 |     struct spinlock devres_lock
       420 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       420 |         struct raw_spinlock rlock
       420 |           arch_spinlock_t raw_lock
       420 |             volatile unsigned int slock
       424 |           unsigned int magic
       428 |           unsigned int owner_cpu
       432 |           void * owner
       436 |           struct lockdep_map dep_map
       436 |             struct lock_class_key * key
       440 |             struct lock_class *[2] class_cache
       448 |             const char * name
       452 |             u8 wait_type_outer
       453 |             u8 wait_type_inner
       454 |             u8 lock_type
       456 |             int cpu
       460 |             unsigned long ip
       420 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       420 |           u8[16] __padding
       436 |           struct lockdep_map dep_map
       436 |             struct lock_class_key * key
       440 |             struct lock_class *[2] class_cache
       448 |             const char * name
       452 |             u8 wait_type_outer
       453 |             u8 wait_type_inner
       454 |             u8 lock_type
       456 |             int cpu
       460 |             unsigned long ip
       464 |     struct list_head devres_head
       464 |       struct list_head * next
       468 |       struct list_head * prev
       472 |     const struct class * class
       476 |     const struct attribute_group ** groups
       480 |     void (*)(struct device *) release
       484 |     struct iommu_group * iommu_group
       488 |     struct dev_iommu * iommu
       492 |     struct device_physical_location * physical_location
       496 |     enum device_removable removable
   500:0-0 |     bool offline_disabled
   500:1-1 |     bool offline
   500:2-2 |     bool of_node_reused
   500:3-3 |     bool state_synced
   500:4-4 |     bool can_match
   500:5-5 |     bool dma_coherent
   500:6-6 |     bool dma_skip_sync
         0 |   struct ib_core_device coredev
         0 |     struct device dev
         0 |       struct kobject kobj
         0 |         const char * name
         4 |         struct list_head entry
         4 |           struct list_head * next
         8 |           struct list_head * prev
        12 |         struct kobject * parent
        16 |         struct kset * kset
        20 |         const struct kobj_type * ktype
        24 |         struct kernfs_node * sd
        28 |         struct kref kref
        28 |           struct refcount_struct refcount
        28 |             atomic_t refs
        28 |               int counter
    32:0-0 |         unsigned int state_initialized
    32:1-1 |         unsigned int state_in_sysfs
    32:2-2 |         unsigned int state_add_uevent_sent
    32:3-3 |         unsigned int state_remove_uevent_sent
    32:4-4 |         unsigned int uevent_suppress
        36 |         struct delayed_work release
        36 |           struct work_struct work
        36 |             atomic_t data
        36 |               int counter
        40 |             struct list_head entry
        40 |               struct list_head * next
        44 |               struct list_head * prev
        48 |             work_func_t func
        52 |             struct lockdep_map lockdep_map
        52 |               struct lock_class_key * key
        56 |               struct lock_class *[2] class_cache
        64 |               const char * name
        68 |               u8 wait_type_outer
        69 |               u8 wait_type_inner
        70 |               u8 lock_type
        72 |               int cpu
        76 |               unsigned long ip
        80 |           struct timer_list timer
        80 |             struct hlist_node entry
        80 |               struct hlist_node * next
        84 |               struct hlist_node ** pprev
        88 |             unsigned long expires
        92 |             void (*)(struct timer_list *) function
        96 |             u32 flags
       100 |             struct lockdep_map lockdep_map
       100 |               struct lock_class_key * key
       104 |               struct lock_class *[2] class_cache
       112 |               const char * name
       116 |               u8 wait_type_outer
       117 |               u8 wait_type_inner
       118 |               u8 lock_type
       120 |               int cpu
       124 |               unsigned long ip
       128 |           struct workqueue_struct * wq
       132 |           int cpu
       136 |       struct device * parent
       140 |       struct device_private * p
       144 |       const char * init_name
       148 |       const struct device_type * type
       152 |       const struct bus_type * bus
       156 |       struct device_driver * driver
       160 |       void * platform_data
       164 |       void * driver_data
       168 |       struct mutex mutex
       168 |         atomic_t owner
       168 |           int counter
       172 |         struct raw_spinlock wait_lock
       172 |           arch_spinlock_t raw_lock
       172 |             volatile unsigned int slock
       176 |           unsigned int magic
       180 |           unsigned int owner_cpu
       184 |           void * owner
       188 |           struct lockdep_map dep_map
       188 |             struct lock_class_key * key
       192 |             struct lock_class *[2] class_cache
       200 |             const char * name
       204 |             u8 wait_type_outer
       205 |             u8 wait_type_inner
       206 |             u8 lock_type
       208 |             int cpu
       212 |             unsigned long ip
       216 |         struct list_head wait_list
       216 |           struct list_head * next
       220 |           struct list_head * prev
       224 |         void * magic
       228 |         struct lockdep_map dep_map
       228 |           struct lock_class_key * key
       232 |           struct lock_class *[2] class_cache
       240 |           const char * name
       244 |           u8 wait_type_outer
       245 |           u8 wait_type_inner
       246 |           u8 lock_type
       248 |           int cpu
       252 |           unsigned long ip
       256 |       struct dev_links_info links
       256 |         struct list_head suppliers
       256 |           struct list_head * next
       260 |           struct list_head * prev
       264 |         struct list_head consumers
       264 |           struct list_head * next
       268 |           struct list_head * prev
       272 |         struct list_head defer_sync
       272 |           struct list_head * next
       276 |           struct list_head * prev
       280 |         enum dl_dev_state status
       284 |       struct dev_pm_info power
       284 |         struct pm_message power_state
       284 |           int event
   288:0-0 |         bool can_wakeup
   288:1-1 |         bool async_suspend
   288:2-2 |         bool in_dpm_list
   288:3-3 |         bool is_prepared
   288:4-4 |         bool is_suspended
   288:5-5 |         bool is_noirq_suspended
   288:6-6 |         bool is_late_suspended
   288:7-7 |         bool no_pm
   289:0-0 |         bool early_init
   289:1-1 |         bool direct_complete
       292 |         u32 driver_flags
       296 |         struct spinlock lock
       296 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       296 |             struct raw_spinlock rlock
       296 |               arch_spinlock_t raw_lock
       296 |                 volatile unsigned int slock
       300 |               unsigned int magic
       304 |               unsigned int owner_cpu
       308 |               void * owner
       312 |               struct lockdep_map dep_map
       312 |                 struct lock_class_key * key
       316 |                 struct lock_class *[2] class_cache
       324 |                 const char * name
       328 |                 u8 wait_type_outer
       329 |                 u8 wait_type_inner
       330 |                 u8 lock_type
       332 |                 int cpu
       336 |                 unsigned long ip
       296 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       296 |               u8[16] __padding
       312 |               struct lockdep_map dep_map
       312 |                 struct lock_class_key * key
       316 |                 struct lock_class *[2] class_cache
       324 |                 const char * name
       328 |                 u8 wait_type_outer
       329 |                 u8 wait_type_inner
       330 |                 u8 lock_type
       332 |                 int cpu
       336 |                 unsigned long ip
   340:0-0 |         bool should_wakeup
       344 |         struct pm_subsys_data * subsys_data
       348 |         void (*)(struct device *, s32) set_latency_tolerance
       352 |         struct dev_pm_qos * qos
       356 |       struct dev_pm_domain * pm_domain
       360 |       struct dev_msi_info msi
       360 |       u64 * dma_mask
       368 |       u64 coherent_dma_mask
       376 |       u64 bus_dma_limit
       384 |       const struct bus_dma_region * dma_range_map
       388 |       struct device_dma_parameters * dma_parms
       392 |       struct list_head dma_pools
       392 |         struct list_head * next
       396 |         struct list_head * prev
       400 |       struct dma_coherent_mem * dma_mem
       404 |       struct dev_archdata archdata
       404 |       struct device_node * of_node
       408 |       struct fwnode_handle * fwnode
       412 |       dev_t devt
       416 |       u32 id
       420 |       struct spinlock devres_lock
       420 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       420 |           struct raw_spinlock rlock
       420 |             arch_spinlock_t raw_lock
       420 |               volatile unsigned int slock
       424 |             unsigned int magic
       428 |             unsigned int owner_cpu
       432 |             void * owner
       436 |             struct lockdep_map dep_map
       436 |               struct lock_class_key * key
       440 |               struct lock_class *[2] class_cache
       448 |               const char * name
       452 |               u8 wait_type_outer
       453 |               u8 wait_type_inner
       454 |               u8 lock_type
       456 |               int cpu
       460 |               unsigned long ip
       420 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       420 |             u8[16] __padding
       436 |             struct lockdep_map dep_map
       436 |               struct lock_class_key * key
       440 |               struct lock_class *[2] class_cache
       448 |               const char * name
       452 |               u8 wait_type_outer
       453 |               u8 wait_type_inner
       454 |               u8 lock_type
       456 |               int cpu
       460 |               unsigned long ip
       464 |       struct list_head devres_head
       464 |         struct list_head * next
       468 |         struct list_head * prev
       472 |       const struct class * class
       476 |       const struct attribute_group ** groups
       480 |       void (*)(struct device *) release
       484 |       struct iommu_group * iommu_group
       488 |       struct dev_iommu * iommu
       492 |       struct device_physical_location * physical_location
       496 |       enum device_removable removable
   500:0-0 |       bool offline_disabled
   500:1-1 |       bool offline
   500:2-2 |       bool of_node_reused
   500:3-3 |       bool state_synced
   500:4-4 |       bool can_match
   500:5-5 |       bool dma_coherent
   500:6-6 |       bool dma_skip_sync
       504 |     possible_net_t rdma_net
       504 |       struct net * net
       508 |     struct kobject * ports_kobj
       512 |     struct list_head port_list
       512 |       struct list_head * next
       516 |       struct list_head * prev
       520 |     struct ib_device * owner
           | [sizeof=528, align=8]

*** Dumping AST Record Layout
         0 | struct ib_odp_caps::(unnamed at ../include/rdma/ib_verbs.h:335:2)
         0 |   uint32_t rc_odp_caps
         4 |   uint32_t uc_odp_caps
         8 |   uint32_t ud_odp_caps
        12 |   uint32_t xrc_odp_caps
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_odp_caps
         0 |   uint64_t general_caps
         8 |   struct ib_odp_caps::(unnamed at ../include/rdma/ib_verbs.h:335:2) per_transport_caps
         8 |     uint32_t rc_odp_caps
        12 |     uint32_t uc_odp_caps
        16 |     uint32_t ud_odp_caps
        20 |     uint32_t xrc_odp_caps
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ib_rss_caps
         0 |   u32 supported_qpts
         4 |   u32 max_rwq_indirection_tables
         8 |   u32 max_rwq_indirection_table_size
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_tm_caps
         0 |   u32 max_rndv_hdr_size
         4 |   u32 max_num_tags
         8 |   u32 flags
        12 |   u32 max_ops
        16 |   u32 max_sge
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ib_cq_caps
         0 |   u16 max_cq_moderation_count
         2 |   u16 max_cq_moderation_period
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct ib_device_attr
         0 |   u64 fw_ver
         8 |   __be64 sys_image_guid
        16 |   u64 max_mr_size
        24 |   u64 page_size_cap
        32 |   u32 vendor_id
        36 |   u32 vendor_part_id
        40 |   u32 hw_ver
        44 |   int max_qp
        48 |   int max_qp_wr
        56 |   u64 device_cap_flags
        64 |   u64 kernel_cap_flags
        72 |   int max_send_sge
        76 |   int max_recv_sge
        80 |   int max_sge_rd
        84 |   int max_cq
        88 |   int max_cqe
        92 |   int max_mr
        96 |   int max_pd
       100 |   int max_qp_rd_atom
       104 |   int max_ee_rd_atom
       108 |   int max_res_rd_atom
       112 |   int max_qp_init_rd_atom
       116 |   int max_ee_init_rd_atom
       120 |   enum ib_atomic_cap atomic_cap
       124 |   enum ib_atomic_cap masked_atomic_cap
       128 |   int max_ee
       132 |   int max_rdd
       136 |   int max_mw
       140 |   int max_raw_ipv6_qp
       144 |   int max_raw_ethy_qp
       148 |   int max_mcast_grp
       152 |   int max_mcast_qp_attach
       156 |   int max_total_mcast_qp_attach
       160 |   int max_ah
       164 |   int max_srq
       168 |   int max_srq_wr
       172 |   int max_srq_sge
       176 |   unsigned int max_fast_reg_page_list_len
       180 |   unsigned int max_pi_fast_reg_page_list_len
       184 |   u16 max_pkeys
       186 |   u8 local_ca_ack_delay
       188 |   int sig_prot_cap
       192 |   int sig_guard_cap
       200 |   struct ib_odp_caps odp_caps
       200 |     uint64_t general_caps
       208 |     struct ib_odp_caps::(unnamed at ../include/rdma/ib_verbs.h:335:2) per_transport_caps
       208 |       uint32_t rc_odp_caps
       212 |       uint32_t uc_odp_caps
       216 |       uint32_t ud_odp_caps
       220 |       uint32_t xrc_odp_caps
       224 |   uint64_t timestamp_mask
       232 |   uint64_t hca_core_clock
       240 |   struct ib_rss_caps rss_caps
       240 |     u32 supported_qpts
       244 |     u32 max_rwq_indirection_tables
       248 |     u32 max_rwq_indirection_table_size
       252 |   u32 max_wq_type_rq
       256 |   u32 raw_packet_caps
       260 |   struct ib_tm_caps tm_caps
       260 |     u32 max_rndv_hdr_size
       264 |     u32 max_num_tags
       268 |     u32 flags
       272 |     u32 max_ops
       276 |     u32 max_sge
       280 |   struct ib_cq_caps cq_caps
       280 |     u16 max_cq_moderation_count
       282 |     u16 max_cq_moderation_period
       288 |   u64 max_dm_size
       296 |   u32 max_sgl_rd
           | [sizeof=304, align=8]

*** Dumping AST Record Layout
         0 | struct ib_device
         0 |   struct device * dma_device
         4 |   struct ib_device_ops ops
         4 |     struct module * owner
         8 |     enum rdma_driver_id driver_id
        12 |     u32 uverbs_abi_ver
    16:0-0 |     unsigned int uverbs_no_driver_id_binding
        20 |     const struct attribute_group * device_group
        24 |     const struct attribute_group ** port_groups
        28 |     int (*)(struct ib_qp *, const struct ib_send_wr *, const struct ib_send_wr **) post_send
        32 |     int (*)(struct ib_qp *, const struct ib_recv_wr *, const struct ib_recv_wr **) post_recv
        36 |     void (*)(struct ib_qp *) drain_rq
        40 |     void (*)(struct ib_qp *) drain_sq
        44 |     int (*)(struct ib_cq *, int, struct ib_wc *) poll_cq
        48 |     int (*)(struct ib_cq *, int) peek_cq
        52 |     int (*)(struct ib_cq *, enum ib_cq_notify_flags) req_notify_cq
        56 |     int (*)(struct ib_srq *, const struct ib_recv_wr *, const struct ib_recv_wr **) post_srq_recv
        60 |     int (*)(struct ib_device *, int, u32, const struct ib_wc *, const struct ib_grh *, const struct ib_mad *, struct ib_mad *, size_t *, u16 *) process_mad
        64 |     int (*)(struct ib_device *, struct ib_device_attr *, struct ib_udata *) query_device
        68 |     int (*)(struct ib_device *, int, struct ib_device_modify *) modify_device
        72 |     void (*)(struct ib_device *, char *) get_dev_fw_str
        76 |     const struct cpumask *(*)(struct ib_device *, int) get_vector_affinity
        80 |     int (*)(struct ib_device *, u32, struct ib_port_attr *) query_port
        84 |     int (*)(struct ib_device *, u32, int, struct ib_port_modify *) modify_port
        88 |     int (*)(struct ib_device *, u32, struct ib_port_immutable *) get_port_immutable
        92 |     enum rdma_link_layer (*)(struct ib_device *, u32) get_link_layer
        96 |     struct net_device *(*)(struct ib_device *, u32) get_netdev
       100 |     struct net_device *(*)(struct ib_device *, u32, enum rdma_netdev_t, const char *, unsigned char, void (*)(struct net_device *)) alloc_rdma_netdev
       104 |     int (*)(struct ib_device *, u32, enum rdma_netdev_t, struct rdma_netdev_alloc_params *) rdma_netdev_get_params
       108 |     int (*)(struct ib_device *, u32, int, union ib_gid *) query_gid
       112 |     int (*)(const struct ib_gid_attr *, void **) add_gid
       116 |     int (*)(const struct ib_gid_attr *, void **) del_gid
       120 |     int (*)(struct ib_device *, u32, u16, u16 *) query_pkey
       124 |     int (*)(struct ib_ucontext *, struct ib_udata *) alloc_ucontext
       128 |     void (*)(struct ib_ucontext *) dealloc_ucontext
       132 |     int (*)(struct ib_ucontext *, struct vm_area_struct *) mmap
       136 |     void (*)(struct rdma_user_mmap_entry *) mmap_free
       140 |     void (*)(struct ib_ucontext *) disassociate_ucontext
       144 |     int (*)(struct ib_pd *, struct ib_udata *) alloc_pd
       148 |     int (*)(struct ib_pd *, struct ib_udata *) dealloc_pd
       152 |     int (*)(struct ib_ah *, struct rdma_ah_init_attr *, struct ib_udata *) create_ah
       156 |     int (*)(struct ib_ah *, struct rdma_ah_init_attr *, struct ib_udata *) create_user_ah
       160 |     int (*)(struct ib_ah *, struct rdma_ah_attr *) modify_ah
       164 |     int (*)(struct ib_ah *, struct rdma_ah_attr *) query_ah
       168 |     int (*)(struct ib_ah *, u32) destroy_ah
       172 |     int (*)(struct ib_srq *, struct ib_srq_init_attr *, struct ib_udata *) create_srq
       176 |     int (*)(struct ib_srq *, struct ib_srq_attr *, enum ib_srq_attr_mask, struct ib_udata *) modify_srq
       180 |     int (*)(struct ib_srq *, struct ib_srq_attr *) query_srq
       184 |     int (*)(struct ib_srq *, struct ib_udata *) destroy_srq
       188 |     int (*)(struct ib_qp *, struct ib_qp_init_attr *, struct ib_udata *) create_qp
       192 |     int (*)(struct ib_qp *, struct ib_qp_attr *, int, struct ib_udata *) modify_qp
       196 |     int (*)(struct ib_qp *, struct ib_qp_attr *, int, struct ib_qp_init_attr *) query_qp
       200 |     int (*)(struct ib_qp *, struct ib_udata *) destroy_qp
       204 |     int (*)(struct ib_cq *, const struct ib_cq_init_attr *, struct uverbs_attr_bundle *) create_cq
       208 |     int (*)(struct ib_cq *, u16, u16) modify_cq
       212 |     int (*)(struct ib_cq *, struct ib_udata *) destroy_cq
       216 |     int (*)(struct ib_cq *, int, struct ib_udata *) resize_cq
       220 |     struct ib_mr *(*)(struct ib_pd *, int) get_dma_mr
       224 |     struct ib_mr *(*)(struct ib_pd *, u64, u64, u64, int, struct ib_udata *) reg_user_mr
       228 |     struct ib_mr *(*)(struct ib_pd *, u64, u64, u64, int, int, struct ib_udata *) reg_user_mr_dmabuf
       232 |     struct ib_mr *(*)(struct ib_mr *, int, u64, u64, u64, int, struct ib_pd *, struct ib_udata *) rereg_user_mr
       236 |     int (*)(struct ib_mr *, struct ib_udata *) dereg_mr
       240 |     struct ib_mr *(*)(struct ib_pd *, enum ib_mr_type, u32) alloc_mr
       244 |     struct ib_mr *(*)(struct ib_pd *, u32, u32) alloc_mr_integrity
       248 |     int (*)(struct ib_pd *, enum ib_uverbs_advise_mr_advice, u32, struct ib_sge *, u32, struct uverbs_attr_bundle *) advise_mr
       252 |     int (*)(struct ib_mr *, struct scatterlist *, int, unsigned int *) map_mr_sg
       256 |     int (*)(struct ib_mr *, u32, struct ib_mr_status *) check_mr_status
       260 |     int (*)(struct ib_mw *, struct ib_udata *) alloc_mw
       264 |     int (*)(struct ib_mw *) dealloc_mw
       268 |     int (*)(struct ib_qp *, union ib_gid *, u16) attach_mcast
       272 |     int (*)(struct ib_qp *, union ib_gid *, u16) detach_mcast
       276 |     int (*)(struct ib_xrcd *, struct ib_udata *) alloc_xrcd
       280 |     int (*)(struct ib_xrcd *, struct ib_udata *) dealloc_xrcd
       284 |     struct ib_flow *(*)(struct ib_qp *, struct ib_flow_attr *, struct ib_udata *) create_flow
       288 |     int (*)(struct ib_flow *) destroy_flow
       292 |     int (*)(struct ib_flow_action *) destroy_flow_action
       296 |     int (*)(struct ib_device *, int, u32, int) set_vf_link_state
       300 |     int (*)(struct ib_device *, int, u32, struct ifla_vf_info *) get_vf_config
       304 |     int (*)(struct ib_device *, int, u32, struct ifla_vf_stats *) get_vf_stats
       308 |     int (*)(struct ib_device *, int, u32, struct ifla_vf_guid *, struct ifla_vf_guid *) get_vf_guid
       312 |     int (*)(struct ib_device *, int, u32, u64, int) set_vf_guid
       316 |     struct ib_wq *(*)(struct ib_pd *, struct ib_wq_init_attr *, struct ib_udata *) create_wq
       320 |     int (*)(struct ib_wq *, struct ib_udata *) destroy_wq
       324 |     int (*)(struct ib_wq *, struct ib_wq_attr *, u32, struct ib_udata *) modify_wq
       328 |     int (*)(struct ib_rwq_ind_table *, struct ib_rwq_ind_table_init_attr *, struct ib_udata *) create_rwq_ind_table
       332 |     int (*)(struct ib_rwq_ind_table *) destroy_rwq_ind_table
       336 |     struct ib_dm *(*)(struct ib_device *, struct ib_ucontext *, struct ib_dm_alloc_attr *, struct uverbs_attr_bundle *) alloc_dm
       340 |     int (*)(struct ib_dm *, struct uverbs_attr_bundle *) dealloc_dm
       344 |     struct ib_mr *(*)(struct ib_pd *, struct ib_dm *, struct ib_dm_mr_attr *, struct uverbs_attr_bundle *) reg_dm_mr
       348 |     int (*)(struct ib_counters *, struct uverbs_attr_bundle *) create_counters
       352 |     int (*)(struct ib_counters *) destroy_counters
       356 |     int (*)(struct ib_counters *, struct ib_counters_read_attr *, struct uverbs_attr_bundle *) read_counters
       360 |     int (*)(struct ib_mr *, struct scatterlist *, int, unsigned int *, struct scatterlist *, int, unsigned int *) map_mr_sg_pi
       364 |     struct rdma_hw_stats *(*)(struct ib_device *) alloc_hw_device_stats
       368 |     struct rdma_hw_stats *(*)(struct ib_device *, u32) alloc_hw_port_stats
       372 |     int (*)(struct ib_device *, struct rdma_hw_stats *, u32, int) get_hw_stats
       376 |     int (*)(struct ib_device *, u32, unsigned int, bool) modify_hw_stat
       380 |     int (*)(struct sk_buff *, struct ib_mr *) fill_res_mr_entry
       384 |     int (*)(struct sk_buff *, struct ib_mr *) fill_res_mr_entry_raw
       388 |     int (*)(struct sk_buff *, struct ib_cq *) fill_res_cq_entry
       392 |     int (*)(struct sk_buff *, struct ib_cq *) fill_res_cq_entry_raw
       396 |     int (*)(struct sk_buff *, struct ib_qp *) fill_res_qp_entry
       400 |     int (*)(struct sk_buff *, struct ib_qp *) fill_res_qp_entry_raw
       404 |     int (*)(struct sk_buff *, struct rdma_cm_id *) fill_res_cm_id_entry
       408 |     int (*)(struct sk_buff *, struct ib_srq *) fill_res_srq_entry
       412 |     int (*)(struct sk_buff *, struct ib_srq *) fill_res_srq_entry_raw
       416 |     int (*)(struct ib_device *) enable_driver
       420 |     void (*)(struct ib_device *) dealloc_driver
       424 |     void (*)(struct ib_qp *) iw_add_ref
       428 |     void (*)(struct ib_qp *) iw_rem_ref
       432 |     struct ib_qp *(*)(struct ib_device *, int) iw_get_qp
       436 |     int (*)(struct iw_cm_id *, struct iw_cm_conn_param *) iw_connect
       440 |     int (*)(struct iw_cm_id *, struct iw_cm_conn_param *) iw_accept
       444 |     int (*)(struct iw_cm_id *, const void *, u8) iw_reject
       448 |     int (*)(struct iw_cm_id *, int) iw_create_listen
       452 |     int (*)(struct iw_cm_id *) iw_destroy_listen
       456 |     int (*)(struct rdma_counter *, struct ib_qp *) counter_bind_qp
       460 |     int (*)(struct ib_qp *) counter_unbind_qp
       464 |     int (*)(struct rdma_counter *) counter_dealloc
       468 |     struct rdma_hw_stats *(*)(struct rdma_counter *) counter_alloc_stats
       472 |     int (*)(struct rdma_counter *) counter_update_stats
       476 |     int (*)(struct sk_buff *, struct ib_mr *) fill_stat_mr_entry
       480 |     int (*)(struct ib_ucontext *, struct uverbs_attr_bundle *) query_ucontext
       484 |     int (*)(struct ib_device *) get_numa_node
       488 |     struct ib_device *(*)(struct ib_device *, enum rdma_nl_dev_type, const char *) add_sub_dev
       492 |     void (*)(struct ib_device *) del_sub_dev
       496 |     size_t size_ib_ah
       500 |     size_t size_ib_counters
       504 |     size_t size_ib_cq
       508 |     size_t size_ib_mw
       512 |     size_t size_ib_pd
       516 |     size_t size_ib_qp
       520 |     size_t size_ib_rwq_ind_table
       524 |     size_t size_ib_srq
       528 |     size_t size_ib_ucontext
       532 |     size_t size_ib_xrcd
       536 |   char[64] name
       600 |   struct callback_head callback_head
       600 |     struct callback_head * next
       604 |     void (*)(struct callback_head *) func
       608 |   struct list_head event_handler_list
       608 |     struct list_head * next
       612 |     struct list_head * prev
       616 |   struct rw_semaphore event_handler_rwsem
       616 |     atomic_t count
       616 |       int counter
       620 |     atomic_t owner
       620 |       int counter
       624 |     struct raw_spinlock wait_lock
       624 |       arch_spinlock_t raw_lock
       624 |         volatile unsigned int slock
       628 |       unsigned int magic
       632 |       unsigned int owner_cpu
       636 |       void * owner
       640 |       struct lockdep_map dep_map
       640 |         struct lock_class_key * key
       644 |         struct lock_class *[2] class_cache
       652 |         const char * name
       656 |         u8 wait_type_outer
       657 |         u8 wait_type_inner
       658 |         u8 lock_type
       660 |         int cpu
       664 |         unsigned long ip
       668 |     struct list_head wait_list
       668 |       struct list_head * next
       672 |       struct list_head * prev
       676 |     void * magic
       680 |     struct lockdep_map dep_map
       680 |       struct lock_class_key * key
       684 |       struct lock_class *[2] class_cache
       692 |       const char * name
       696 |       u8 wait_type_outer
       697 |       u8 wait_type_inner
       698 |       u8 lock_type
       700 |       int cpu
       704 |       unsigned long ip
       708 |   struct spinlock qp_open_list_lock
       708 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       708 |       struct raw_spinlock rlock
       708 |         arch_spinlock_t raw_lock
       708 |           volatile unsigned int slock
       712 |         unsigned int magic
       716 |         unsigned int owner_cpu
       720 |         void * owner
       724 |         struct lockdep_map dep_map
       724 |           struct lock_class_key * key
       728 |           struct lock_class *[2] class_cache
       736 |           const char * name
       740 |           u8 wait_type_outer
       741 |           u8 wait_type_inner
       742 |           u8 lock_type
       744 |           int cpu
       748 |           unsigned long ip
       708 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       708 |         u8[16] __padding
       724 |         struct lockdep_map dep_map
       724 |           struct lock_class_key * key
       728 |           struct lock_class *[2] class_cache
       736 |           const char * name
       740 |           u8 wait_type_outer
       741 |           u8 wait_type_inner
       742 |           u8 lock_type
       744 |           int cpu
       748 |           unsigned long ip
       752 |   struct rw_semaphore client_data_rwsem
       752 |     atomic_t count
       752 |       int counter
       756 |     atomic_t owner
       756 |       int counter
       760 |     struct raw_spinlock wait_lock
       760 |       arch_spinlock_t raw_lock
       760 |         volatile unsigned int slock
       764 |       unsigned int magic
       768 |       unsigned int owner_cpu
       772 |       void * owner
       776 |       struct lockdep_map dep_map
       776 |         struct lock_class_key * key
       780 |         struct lock_class *[2] class_cache
       788 |         const char * name
       792 |         u8 wait_type_outer
       793 |         u8 wait_type_inner
       794 |         u8 lock_type
       796 |         int cpu
       800 |         unsigned long ip
       804 |     struct list_head wait_list
       804 |       struct list_head * next
       808 |       struct list_head * prev
       812 |     void * magic
       816 |     struct lockdep_map dep_map
       816 |       struct lock_class_key * key
       820 |       struct lock_class *[2] class_cache
       828 |       const char * name
       832 |       u8 wait_type_outer
       833 |       u8 wait_type_inner
       834 |       u8 lock_type
       836 |       int cpu
       840 |       unsigned long ip
       844 |   struct xarray client_data
       844 |     struct spinlock xa_lock
       844 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       844 |         struct raw_spinlock rlock
       844 |           arch_spinlock_t raw_lock
       844 |             volatile unsigned int slock
       848 |           unsigned int magic
       852 |           unsigned int owner_cpu
       856 |           void * owner
       860 |           struct lockdep_map dep_map
       860 |             struct lock_class_key * key
       864 |             struct lock_class *[2] class_cache
       872 |             const char * name
       876 |             u8 wait_type_outer
       877 |             u8 wait_type_inner
       878 |             u8 lock_type
       880 |             int cpu
       884 |             unsigned long ip
       844 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       844 |           u8[16] __padding
       860 |           struct lockdep_map dep_map
       860 |             struct lock_class_key * key
       864 |             struct lock_class *[2] class_cache
       872 |             const char * name
       876 |             u8 wait_type_outer
       877 |             u8 wait_type_inner
       878 |             u8 lock_type
       880 |             int cpu
       884 |             unsigned long ip
       888 |     gfp_t xa_flags
       892 |     void * xa_head
       896 |   struct mutex unregistration_lock
       896 |     atomic_t owner
       896 |       int counter
       900 |     struct raw_spinlock wait_lock
       900 |       arch_spinlock_t raw_lock
       900 |         volatile unsigned int slock
       904 |       unsigned int magic
       908 |       unsigned int owner_cpu
       912 |       void * owner
       916 |       struct lockdep_map dep_map
       916 |         struct lock_class_key * key
       920 |         struct lock_class *[2] class_cache
       928 |         const char * name
       932 |         u8 wait_type_outer
       933 |         u8 wait_type_inner
       934 |         u8 lock_type
       936 |         int cpu
       940 |         unsigned long ip
       944 |     struct list_head wait_list
       944 |       struct list_head * next
       948 |       struct list_head * prev
       952 |     void * magic
       956 |     struct lockdep_map dep_map
       956 |       struct lock_class_key * key
       960 |       struct lock_class *[2] class_cache
       968 |       const char * name
       972 |       u8 wait_type_outer
       973 |       u8 wait_type_inner
       974 |       u8 lock_type
       976 |       int cpu
       980 |       unsigned long ip
       984 |   rwlock_t cache_lock
       984 |     arch_rwlock_t raw_lock
       984 |     unsigned int magic
       988 |     unsigned int owner_cpu
       992 |     void * owner
       996 |     struct lockdep_map dep_map
       996 |       struct lock_class_key * key
      1000 |       struct lock_class *[2] class_cache
      1008 |       const char * name
      1012 |       u8 wait_type_outer
      1013 |       u8 wait_type_inner
      1014 |       u8 lock_type
      1016 |       int cpu
      1020 |       unsigned long ip
      1024 |   struct ib_port_data * port_data
      1028 |   int num_comp_vectors
      1032 |   union ib_device::(anonymous at ../include/rdma/ib_verbs.h:2729:2) 
      1032 |     struct device dev
      1032 |       struct kobject kobj
      1032 |         const char * name
      1036 |         struct list_head entry
      1036 |           struct list_head * next
      1040 |           struct list_head * prev
      1044 |         struct kobject * parent
      1048 |         struct kset * kset
      1052 |         const struct kobj_type * ktype
      1056 |         struct kernfs_node * sd
      1060 |         struct kref kref
      1060 |           struct refcount_struct refcount
      1060 |             atomic_t refs
      1060 |               int counter
  1064:0-0 |         unsigned int state_initialized
  1064:1-1 |         unsigned int state_in_sysfs
  1064:2-2 |         unsigned int state_add_uevent_sent
  1064:3-3 |         unsigned int state_remove_uevent_sent
  1064:4-4 |         unsigned int uevent_suppress
      1068 |         struct delayed_work release
      1068 |           struct work_struct work
      1068 |             atomic_t data
      1068 |               int counter
      1072 |             struct list_head entry
      1072 |               struct list_head * next
      1076 |               struct list_head * prev
      1080 |             work_func_t func
      1084 |             struct lockdep_map lockdep_map
      1084 |               struct lock_class_key * key
      1088 |               struct lock_class *[2] class_cache
      1096 |               const char * name
      1100 |               u8 wait_type_outer
      1101 |               u8 wait_type_inner
      1102 |               u8 lock_type
      1104 |               int cpu
      1108 |               unsigned long ip
      1112 |           struct timer_list timer
      1112 |             struct hlist_node entry
      1112 |               struct hlist_node * next
      1116 |               struct hlist_node ** pprev
      1120 |             unsigned long expires
      1124 |             void (*)(struct timer_list *) function
      1128 |             u32 flags
      1132 |             struct lockdep_map lockdep_map
      1132 |               struct lock_class_key * key
      1136 |               struct lock_class *[2] class_cache
      1144 |               const char * name
      1148 |               u8 wait_type_outer
      1149 |               u8 wait_type_inner
      1150 |               u8 lock_type
      1152 |               int cpu
      1156 |               unsigned long ip
      1160 |           struct workqueue_struct * wq
      1164 |           int cpu
      1168 |       struct device * parent
      1172 |       struct device_private * p
      1176 |       const char * init_name
      1180 |       const struct device_type * type
      1184 |       const struct bus_type * bus
      1188 |       struct device_driver * driver
      1192 |       void * platform_data
      1196 |       void * driver_data
      1200 |       struct mutex mutex
      1200 |         atomic_t owner
      1200 |           int counter
      1204 |         struct raw_spinlock wait_lock
      1204 |           arch_spinlock_t raw_lock
      1204 |             volatile unsigned int slock
      1208 |           unsigned int magic
      1212 |           unsigned int owner_cpu
      1216 |           void * owner
      1220 |           struct lockdep_map dep_map
      1220 |             struct lock_class_key * key
      1224 |             struct lock_class *[2] class_cache
      1232 |             const char * name
      1236 |             u8 wait_type_outer
      1237 |             u8 wait_type_inner
      1238 |             u8 lock_type
      1240 |             int cpu
      1244 |             unsigned long ip
      1248 |         struct list_head wait_list
      1248 |           struct list_head * next
      1252 |           struct list_head * prev
      1256 |         void * magic
      1260 |         struct lockdep_map dep_map
      1260 |           struct lock_class_key * key
      1264 |           struct lock_class *[2] class_cache
      1272 |           const char * name
      1276 |           u8 wait_type_outer
      1277 |           u8 wait_type_inner
      1278 |           u8 lock_type
      1280 |           int cpu
      1284 |           unsigned long ip
      1288 |       struct dev_links_info links
      1288 |         struct list_head suppliers
      1288 |           struct list_head * next
      1292 |           struct list_head * prev
      1296 |         struct list_head consumers
      1296 |           struct list_head * next
      1300 |           struct list_head * prev
      1304 |         struct list_head defer_sync
      1304 |           struct list_head * next
      1308 |           struct list_head * prev
      1312 |         enum dl_dev_state status
      1316 |       struct dev_pm_info power
      1316 |         struct pm_message power_state
      1316 |           int event
  1320:0-0 |         bool can_wakeup
  1320:1-1 |         bool async_suspend
  1320:2-2 |         bool in_dpm_list
  1320:3-3 |         bool is_prepared
  1320:4-4 |         bool is_suspended
  1320:5-5 |         bool is_noirq_suspended
  1320:6-6 |         bool is_late_suspended
  1320:7-7 |         bool no_pm
  1321:0-0 |         bool early_init
  1321:1-1 |         bool direct_complete
      1324 |         u32 driver_flags
      1328 |         struct spinlock lock
      1328 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1328 |             struct raw_spinlock rlock
      1328 |               arch_spinlock_t raw_lock
      1328 |                 volatile unsigned int slock
      1332 |               unsigned int magic
      1336 |               unsigned int owner_cpu
      1340 |               void * owner
      1344 |               struct lockdep_map dep_map
      1344 |                 struct lock_class_key * key
      1348 |                 struct lock_class *[2] class_cache
      1356 |                 const char * name
      1360 |                 u8 wait_type_outer
      1361 |                 u8 wait_type_inner
      1362 |                 u8 lock_type
      1364 |                 int cpu
      1368 |                 unsigned long ip
      1328 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1328 |               u8[16] __padding
      1344 |               struct lockdep_map dep_map
      1344 |                 struct lock_class_key * key
      1348 |                 struct lock_class *[2] class_cache
      1356 |                 const char * name
      1360 |                 u8 wait_type_outer
      1361 |                 u8 wait_type_inner
      1362 |                 u8 lock_type
      1364 |                 int cpu
      1368 |                 unsigned long ip
  1372:0-0 |         bool should_wakeup
      1376 |         struct pm_subsys_data * subsys_data
      1380 |         void (*)(struct device *, s32) set_latency_tolerance
      1384 |         struct dev_pm_qos * qos
      1388 |       struct dev_pm_domain * pm_domain
      1392 |       struct dev_msi_info msi
      1392 |       u64 * dma_mask
      1400 |       u64 coherent_dma_mask
      1408 |       u64 bus_dma_limit
      1416 |       const struct bus_dma_region * dma_range_map
      1420 |       struct device_dma_parameters * dma_parms
      1424 |       struct list_head dma_pools
      1424 |         struct list_head * next
      1428 |         struct list_head * prev
      1432 |       struct dma_coherent_mem * dma_mem
      1436 |       struct dev_archdata archdata
      1436 |       struct device_node * of_node
      1440 |       struct fwnode_handle * fwnode
      1444 |       dev_t devt
      1448 |       u32 id
      1452 |       struct spinlock devres_lock
      1452 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1452 |           struct raw_spinlock rlock
      1452 |             arch_spinlock_t raw_lock
      1452 |               volatile unsigned int slock
      1456 |             unsigned int magic
      1460 |             unsigned int owner_cpu
      1464 |             void * owner
      1468 |             struct lockdep_map dep_map
      1468 |               struct lock_class_key * key
      1472 |               struct lock_class *[2] class_cache
      1480 |               const char * name
      1484 |               u8 wait_type_outer
      1485 |               u8 wait_type_inner
      1486 |               u8 lock_type
      1488 |               int cpu
      1492 |               unsigned long ip
      1452 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1452 |             u8[16] __padding
      1468 |             struct lockdep_map dep_map
      1468 |               struct lock_class_key * key
      1472 |               struct lock_class *[2] class_cache
      1480 |               const char * name
      1484 |               u8 wait_type_outer
      1485 |               u8 wait_type_inner
      1486 |               u8 lock_type
      1488 |               int cpu
      1492 |               unsigned long ip
      1496 |       struct list_head devres_head
      1496 |         struct list_head * next
      1500 |         struct list_head * prev
      1504 |       const struct class * class
      1508 |       const struct attribute_group ** groups
      1512 |       void (*)(struct device *) release
      1516 |       struct iommu_group * iommu_group
      1520 |       struct dev_iommu * iommu
      1524 |       struct device_physical_location * physical_location
      1528 |       enum device_removable removable
  1532:0-0 |       bool offline_disabled
  1532:1-1 |       bool offline
  1532:2-2 |       bool of_node_reused
  1532:3-3 |       bool state_synced
  1532:4-4 |       bool can_match
  1532:5-5 |       bool dma_coherent
  1532:6-6 |       bool dma_skip_sync
      1032 |     struct ib_core_device coredev
      1032 |       struct device dev
      1032 |         struct kobject kobj
      1032 |           const char * name
      1036 |           struct list_head entry
      1036 |             struct list_head * next
      1040 |             struct list_head * prev
      1044 |           struct kobject * parent
      1048 |           struct kset * kset
      1052 |           const struct kobj_type * ktype
      1056 |           struct kernfs_node * sd
      1060 |           struct kref kref
      1060 |             struct refcount_struct refcount
      1060 |               atomic_t refs
      1060 |                 int counter
  1064:0-0 |           unsigned int state_initialized
  1064:1-1 |           unsigned int state_in_sysfs
  1064:2-2 |           unsigned int state_add_uevent_sent
  1064:3-3 |           unsigned int state_remove_uevent_sent
  1064:4-4 |           unsigned int uevent_suppress
      1068 |           struct delayed_work release
      1068 |             struct work_struct work
      1068 |               atomic_t data
      1068 |                 int counter
      1072 |               struct list_head entry
      1072 |                 struct list_head * next
      1076 |                 struct list_head * prev
      1080 |               work_func_t func
      1084 |               struct lockdep_map lockdep_map
      1084 |                 struct lock_class_key * key
      1088 |                 struct lock_class *[2] class_cache
      1096 |                 const char * name
      1100 |                 u8 wait_type_outer
      1101 |                 u8 wait_type_inner
      1102 |                 u8 lock_type
      1104 |                 int cpu
      1108 |                 unsigned long ip
      1112 |             struct timer_list timer
      1112 |               struct hlist_node entry
      1112 |                 struct hlist_node * next
      1116 |                 struct hlist_node ** pprev
      1120 |               unsigned long expires
      1124 |               void (*)(struct timer_list *) function
      1128 |               u32 flags
      1132 |               struct lockdep_map lockdep_map
      1132 |                 struct lock_class_key * key
      1136 |                 struct lock_class *[2] class_cache
      1144 |                 const char * name
      1148 |                 u8 wait_type_outer
      1149 |                 u8 wait_type_inner
      1150 |                 u8 lock_type
      1152 |                 int cpu
      1156 |                 unsigned long ip
      1160 |             struct workqueue_struct * wq
      1164 |             int cpu
      1168 |         struct device * parent
      1172 |         struct device_private * p
      1176 |         const char * init_name
      1180 |         const struct device_type * type
      1184 |         const struct bus_type * bus
      1188 |         struct device_driver * driver
      1192 |         void * platform_data
      1196 |         void * driver_data
      1200 |         struct mutex mutex
      1200 |           atomic_t owner
      1200 |             int counter
      1204 |           struct raw_spinlock wait_lock
      1204 |             arch_spinlock_t raw_lock
      1204 |               volatile unsigned int slock
      1208 |             unsigned int magic
      1212 |             unsigned int owner_cpu
      1216 |             void * owner
      1220 |             struct lockdep_map dep_map
      1220 |               struct lock_class_key * key
      1224 |               struct lock_class *[2] class_cache
      1232 |               const char * name
      1236 |               u8 wait_type_outer
      1237 |               u8 wait_type_inner
      1238 |               u8 lock_type
      1240 |               int cpu
      1244 |               unsigned long ip
      1248 |           struct list_head wait_list
      1248 |             struct list_head * next
      1252 |             struct list_head * prev
      1256 |           void * magic
      1260 |           struct lockdep_map dep_map
      1260 |             struct lock_class_key * key
      1264 |             struct lock_class *[2] class_cache
      1272 |             const char * name
      1276 |             u8 wait_type_outer
      1277 |             u8 wait_type_inner
      1278 |             u8 lock_type
      1280 |             int cpu
      1284 |             unsigned long ip
      1288 |         struct dev_links_info links
      1288 |           struct list_head suppliers
      1288 |             struct list_head * next
      1292 |             struct list_head * prev
      1296 |           struct list_head consumers
      1296 |             struct list_head * next
      1300 |             struct list_head * prev
      1304 |           struct list_head defer_sync
      1304 |             struct list_head * next
      1308 |             struct list_head * prev
      1312 |           enum dl_dev_state status
      1316 |         struct dev_pm_info power
      1316 |           struct pm_message power_state
      1316 |             int event
  1320:0-0 |           bool can_wakeup
  1320:1-1 |           bool async_suspend
  1320:2-2 |           bool in_dpm_list
  1320:3-3 |           bool is_prepared
  1320:4-4 |           bool is_suspended
  1320:5-5 |           bool is_noirq_suspended
  1320:6-6 |           bool is_late_suspended
  1320:7-7 |           bool no_pm
  1321:0-0 |           bool early_init
  1321:1-1 |           bool direct_complete
      1324 |           u32 driver_flags
      1328 |           struct spinlock lock
      1328 |             union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1328 |               struct raw_spinlock rlock
      1328 |                 arch_spinlock_t raw_lock
      1328 |                   volatile unsigned int slock
      1332 |                 unsigned int magic
      1336 |                 unsigned int owner_cpu
      1340 |                 void * owner
      1344 |                 struct lockdep_map dep_map
      1344 |                   struct lock_class_key * key
      1348 |                   struct lock_class *[2] class_cache
      1356 |                   const char * name
      1360 |                   u8 wait_type_outer
      1361 |                   u8 wait_type_inner
      1362 |                   u8 lock_type
      1364 |                   int cpu
      1368 |                   unsigned long ip
      1328 |               struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1328 |                 u8[16] __padding
      1344 |                 struct lockdep_map dep_map
      1344 |                   struct lock_class_key * key
      1348 |                   struct lock_class *[2] class_cache
      1356 |                   const char * name
      1360 |                   u8 wait_type_outer
      1361 |                   u8 wait_type_inner
      1362 |                   u8 lock_type
      1364 |                   int cpu
      1368 |                   unsigned long ip
  1372:0-0 |           bool should_wakeup
      1376 |           struct pm_subsys_data * subsys_data
      1380 |           void (*)(struct device *, s32) set_latency_tolerance
      1384 |           struct dev_pm_qos * qos
      1388 |         struct dev_pm_domain * pm_domain
      1392 |         struct dev_msi_info msi
      1392 |         u64 * dma_mask
      1400 |         u64 coherent_dma_mask
      1408 |         u64 bus_dma_limit
      1416 |         const struct bus_dma_region * dma_range_map
      1420 |         struct device_dma_parameters * dma_parms
      1424 |         struct list_head dma_pools
      1424 |           struct list_head * next
      1428 |           struct list_head * prev
      1432 |         struct dma_coherent_mem * dma_mem
      1436 |         struct dev_archdata archdata
      1436 |         struct device_node * of_node
      1440 |         struct fwnode_handle * fwnode
      1444 |         dev_t devt
      1448 |         u32 id
      1452 |         struct spinlock devres_lock
      1452 |           union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1452 |             struct raw_spinlock rlock
      1452 |               arch_spinlock_t raw_lock
      1452 |                 volatile unsigned int slock
      1456 |               unsigned int magic
      1460 |               unsigned int owner_cpu
      1464 |               void * owner
      1468 |               struct lockdep_map dep_map
      1468 |                 struct lock_class_key * key
      1472 |                 struct lock_class *[2] class_cache
      1480 |                 const char * name
      1484 |                 u8 wait_type_outer
      1485 |                 u8 wait_type_inner
      1486 |                 u8 lock_type
      1488 |                 int cpu
      1492 |                 unsigned long ip
      1452 |             struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1452 |               u8[16] __padding
      1468 |               struct lockdep_map dep_map
      1468 |                 struct lock_class_key * key
      1472 |                 struct lock_class *[2] class_cache
      1480 |                 const char * name
      1484 |                 u8 wait_type_outer
      1485 |                 u8 wait_type_inner
      1486 |                 u8 lock_type
      1488 |                 int cpu
      1492 |                 unsigned long ip
      1496 |         struct list_head devres_head
      1496 |           struct list_head * next
      1500 |           struct list_head * prev
      1504 |         const struct class * class
      1508 |         const struct attribute_group ** groups
      1512 |         void (*)(struct device *) release
      1516 |         struct iommu_group * iommu_group
      1520 |         struct dev_iommu * iommu
      1524 |         struct device_physical_location * physical_location
      1528 |         enum device_removable removable
  1532:0-0 |         bool offline_disabled
  1532:1-1 |         bool offline
  1532:2-2 |         bool of_node_reused
  1532:3-3 |         bool state_synced
  1532:4-4 |         bool can_match
  1532:5-5 |         bool dma_coherent
  1532:6-6 |         bool dma_skip_sync
      1536 |       possible_net_t rdma_net
      1536 |         struct net * net
      1540 |       struct kobject * ports_kobj
      1544 |       struct list_head port_list
      1544 |         struct list_head * next
      1548 |         struct list_head * prev
      1552 |       struct ib_device * owner
      1560 |   const struct attribute_group *[4] groups
      1576 |   u64 uverbs_cmd_mask
      1584 |   char[64] node_desc
      1648 |   __be64 node_guid
      1656 |   u32 local_dma_lkey
  1660:0-0 |   u16 is_switch
  1660:1-1 |   u16 kverbs_provider
  1660:2-2 |   u16 use_cq_dim
      1661 |   u8 node_type
      1664 |   u32 phys_port_cnt
      1672 |   struct ib_device_attr attrs
      1672 |     u64 fw_ver
      1680 |     __be64 sys_image_guid
      1688 |     u64 max_mr_size
      1696 |     u64 page_size_cap
      1704 |     u32 vendor_id
      1708 |     u32 vendor_part_id
      1712 |     u32 hw_ver
      1716 |     int max_qp
      1720 |     int max_qp_wr
      1728 |     u64 device_cap_flags
      1736 |     u64 kernel_cap_flags
      1744 |     int max_send_sge
      1748 |     int max_recv_sge
      1752 |     int max_sge_rd
      1756 |     int max_cq
      1760 |     int max_cqe
      1764 |     int max_mr
      1768 |     int max_pd
      1772 |     int max_qp_rd_atom
      1776 |     int max_ee_rd_atom
      1780 |     int max_res_rd_atom
      1784 |     int max_qp_init_rd_atom
      1788 |     int max_ee_init_rd_atom
      1792 |     enum ib_atomic_cap atomic_cap
      1796 |     enum ib_atomic_cap masked_atomic_cap
      1800 |     int max_ee
      1804 |     int max_rdd
      1808 |     int max_mw
      1812 |     int max_raw_ipv6_qp
      1816 |     int max_raw_ethy_qp
      1820 |     int max_mcast_grp
      1824 |     int max_mcast_qp_attach
      1828 |     int max_total_mcast_qp_attach
      1832 |     int max_ah
      1836 |     int max_srq
      1840 |     int max_srq_wr
      1844 |     int max_srq_sge
      1848 |     unsigned int max_fast_reg_page_list_len
      1852 |     unsigned int max_pi_fast_reg_page_list_len
      1856 |     u16 max_pkeys
      1858 |     u8 local_ca_ack_delay
      1860 |     int sig_prot_cap
      1864 |     int sig_guard_cap
      1872 |     struct ib_odp_caps odp_caps
      1872 |       uint64_t general_caps
      1880 |       struct ib_odp_caps::(unnamed at ../include/rdma/ib_verbs.h:335:2) per_transport_caps
      1880 |         uint32_t rc_odp_caps
      1884 |         uint32_t uc_odp_caps
      1888 |         uint32_t ud_odp_caps
      1892 |         uint32_t xrc_odp_caps
      1896 |     uint64_t timestamp_mask
      1904 |     uint64_t hca_core_clock
      1912 |     struct ib_rss_caps rss_caps
      1912 |       u32 supported_qpts
      1916 |       u32 max_rwq_indirection_tables
      1920 |       u32 max_rwq_indirection_table_size
      1924 |     u32 max_wq_type_rq
      1928 |     u32 raw_packet_caps
      1932 |     struct ib_tm_caps tm_caps
      1932 |       u32 max_rndv_hdr_size
      1936 |       u32 max_num_tags
      1940 |       u32 flags
      1944 |       u32 max_ops
      1948 |       u32 max_sge
      1952 |     struct ib_cq_caps cq_caps
      1952 |       u16 max_cq_moderation_count
      1954 |       u16 max_cq_moderation_period
      1960 |     u64 max_dm_size
      1968 |     u32 max_sgl_rd
      1976 |   struct hw_stats_device_data * hw_stats_data
      1980 |   u32 index
      1984 |   struct spinlock cq_pools_lock
      1984 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      1984 |       struct raw_spinlock rlock
      1984 |         arch_spinlock_t raw_lock
      1984 |           volatile unsigned int slock
      1988 |         unsigned int magic
      1992 |         unsigned int owner_cpu
      1996 |         void * owner
      2000 |         struct lockdep_map dep_map
      2000 |           struct lock_class_key * key
      2004 |           struct lock_class *[2] class_cache
      2012 |           const char * name
      2016 |           u8 wait_type_outer
      2017 |           u8 wait_type_inner
      2018 |           u8 lock_type
      2020 |           int cpu
      2024 |           unsigned long ip
      1984 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      1984 |         u8[16] __padding
      2000 |         struct lockdep_map dep_map
      2000 |           struct lock_class_key * key
      2004 |           struct lock_class *[2] class_cache
      2012 |           const char * name
      2016 |           u8 wait_type_outer
      2017 |           u8 wait_type_inner
      2018 |           u8 lock_type
      2020 |           int cpu
      2024 |           unsigned long ip
      2028 |   struct list_head[3] cq_pools
      2052 |   struct rdma_restrack_root * res
      2056 |   const struct uapi_definition * driver_def
      2060 |   struct refcount_struct refcount
      2060 |     atomic_t refs
      2060 |       int counter
      2064 |   struct completion unreg_completion
      2064 |     unsigned int done
      2068 |     struct swait_queue_head wait
      2068 |       struct raw_spinlock lock
      2068 |         arch_spinlock_t raw_lock
      2068 |           volatile unsigned int slock
      2072 |         unsigned int magic
      2076 |         unsigned int owner_cpu
      2080 |         void * owner
      2084 |         struct lockdep_map dep_map
      2084 |           struct lock_class_key * key
      2088 |           struct lock_class *[2] class_cache
      2096 |           const char * name
      2100 |           u8 wait_type_outer
      2101 |           u8 wait_type_inner
      2102 |           u8 lock_type
      2104 |           int cpu
      2108 |           unsigned long ip
      2112 |       struct list_head task_list
      2112 |         struct list_head * next
      2116 |         struct list_head * prev
      2120 |   struct work_struct unregistration_work
      2120 |     atomic_t data
      2120 |       int counter
      2124 |     struct list_head entry
      2124 |       struct list_head * next
      2128 |       struct list_head * prev
      2132 |     work_func_t func
      2136 |     struct lockdep_map lockdep_map
      2136 |       struct lock_class_key * key
      2140 |       struct lock_class *[2] class_cache
      2148 |       const char * name
      2152 |       u8 wait_type_outer
      2153 |       u8 wait_type_inner
      2154 |       u8 lock_type
      2156 |       int cpu
      2160 |       unsigned long ip
      2164 |   const struct rdma_link_ops * link_ops
      2168 |   struct mutex compat_devs_mutex
      2168 |     atomic_t owner
      2168 |       int counter
      2172 |     struct raw_spinlock wait_lock
      2172 |       arch_spinlock_t raw_lock
      2172 |         volatile unsigned int slock
      2176 |       unsigned int magic
      2180 |       unsigned int owner_cpu
      2184 |       void * owner
      2188 |       struct lockdep_map dep_map
      2188 |         struct lock_class_key * key
      2192 |         struct lock_class *[2] class_cache
      2200 |         const char * name
      2204 |         u8 wait_type_outer
      2205 |         u8 wait_type_inner
      2206 |         u8 lock_type
      2208 |         int cpu
      2212 |         unsigned long ip
      2216 |     struct list_head wait_list
      2216 |       struct list_head * next
      2220 |       struct list_head * prev
      2224 |     void * magic
      2228 |     struct lockdep_map dep_map
      2228 |       struct lock_class_key * key
      2232 |       struct lock_class *[2] class_cache
      2240 |       const char * name
      2244 |       u8 wait_type_outer
      2245 |       u8 wait_type_inner
      2246 |       u8 lock_type
      2248 |       int cpu
      2252 |       unsigned long ip
      2256 |   struct xarray compat_devs
      2256 |     struct spinlock xa_lock
      2256 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
      2256 |         struct raw_spinlock rlock
      2256 |           arch_spinlock_t raw_lock
      2256 |             volatile unsigned int slock
      2260 |           unsigned int magic
      2264 |           unsigned int owner_cpu
      2268 |           void * owner
      2272 |           struct lockdep_map dep_map
      2272 |             struct lock_class_key * key
      2276 |             struct lock_class *[2] class_cache
      2284 |             const char * name
      2288 |             u8 wait_type_outer
      2289 |             u8 wait_type_inner
      2290 |             u8 lock_type
      2292 |             int cpu
      2296 |             unsigned long ip
      2256 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
      2256 |           u8[16] __padding
      2272 |           struct lockdep_map dep_map
      2272 |             struct lock_class_key * key
      2276 |             struct lock_class *[2] class_cache
      2284 |             const char * name
      2288 |             u8 wait_type_outer
      2289 |             u8 wait_type_inner
      2290 |             u8 lock_type
      2292 |             int cpu
      2296 |             unsigned long ip
      2300 |     gfp_t xa_flags
      2304 |     void * xa_head
      2308 |   char[16] iw_ifname
      2324 |   u32 iw_driver_flags
      2328 |   u32 lag_flags
      2332 |   struct mutex subdev_lock
      2332 |     atomic_t owner
      2332 |       int counter
      2336 |     struct raw_spinlock wait_lock
      2336 |       arch_spinlock_t raw_lock
      2336 |         volatile unsigned int slock
      2340 |       unsigned int magic
      2344 |       unsigned int owner_cpu
      2348 |       void * owner
      2352 |       struct lockdep_map dep_map
      2352 |         struct lock_class_key * key
      2356 |         struct lock_class *[2] class_cache
      2364 |         const char * name
      2368 |         u8 wait_type_outer
      2369 |         u8 wait_type_inner
      2370 |         u8 lock_type
      2372 |         int cpu
      2376 |         unsigned long ip
      2380 |     struct list_head wait_list
      2380 |       struct list_head * next
      2384 |       struct list_head * prev
      2388 |     void * magic
      2392 |     struct lockdep_map dep_map
      2392 |       struct lock_class_key * key
      2396 |       struct lock_class *[2] class_cache
      2404 |       const char * name
      2408 |       u8 wait_type_outer
      2409 |       u8 wait_type_inner
      2410 |       u8 lock_type
      2412 |       int cpu
      2416 |       unsigned long ip
      2420 |   struct list_head subdev_list_head
      2420 |     struct list_head * next
      2424 |     struct list_head * prev
      2428 |   enum rdma_nl_dev_type type
      2432 |   struct ib_device * parent
      2436 |   struct list_head subdev_list
      2436 |     struct list_head * next
      2440 |     struct list_head * prev
      2444 |   enum rdma_nl_name_assign_type name_assign_type
           | [sizeof=2448, align=8]

*** Dumping AST Record Layout
         0 | struct ib_rdmacg_object
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct ib_ucontext
         0 |   struct ib_device * device
         4 |   struct ib_uverbs_file * ufile
         8 |   struct ib_rdmacg_object cg_obj
         8 |   struct rdma_restrack_entry res
         8 |     bool valid
     9:0-0 |     u8 no_track
        12 |     struct kref kref
        12 |       struct refcount_struct refcount
        12 |         atomic_t refs
        12 |           int counter
        16 |     struct completion comp
        16 |       unsigned int done
        20 |       struct swait_queue_head wait
        20 |         struct raw_spinlock lock
        20 |           arch_spinlock_t raw_lock
        20 |             volatile unsigned int slock
        24 |           unsigned int magic
        28 |           unsigned int owner_cpu
        32 |           void * owner
        36 |           struct lockdep_map dep_map
        36 |             struct lock_class_key * key
        40 |             struct lock_class *[2] class_cache
        48 |             const char * name
        52 |             u8 wait_type_outer
        53 |             u8 wait_type_inner
        54 |             u8 lock_type
        56 |             int cpu
        60 |             unsigned long ip
        64 |         struct list_head task_list
        64 |           struct list_head * next
        68 |           struct list_head * prev
        72 |     struct task_struct * task
        76 |     const char * kern_name
        80 |     enum rdma_restrack_type type
        84 |     bool user
        88 |     u32 id
        92 |   struct xarray mmap_xa
        92 |     struct spinlock xa_lock
        92 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        92 |         struct raw_spinlock rlock
        92 |           arch_spinlock_t raw_lock
        92 |             volatile unsigned int slock
        96 |           unsigned int magic
       100 |           unsigned int owner_cpu
       104 |           void * owner
       108 |           struct lockdep_map dep_map
       108 |             struct lock_class_key * key
       112 |             struct lock_class *[2] class_cache
       120 |             const char * name
       124 |             u8 wait_type_outer
       125 |             u8 wait_type_inner
       126 |             u8 lock_type
       128 |             int cpu
       132 |             unsigned long ip
        92 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        92 |           u8[16] __padding
       108 |           struct lockdep_map dep_map
       108 |             struct lock_class_key * key
       112 |             struct lock_class *[2] class_cache
       120 |             const char * name
       124 |             u8 wait_type_outer
       125 |             u8 wait_type_inner
       126 |             u8 lock_type
       128 |             int cpu
       132 |             unsigned long ip
       136 |     gfp_t xa_flags
       140 |     void * xa_head
           | [sizeof=144, align=4]

*** Dumping AST Record Layout
         0 | struct rdma_user_mmap_entry
         0 |   struct kref ref
         0 |     struct refcount_struct refcount
         0 |       atomic_t refs
         0 |         int counter
         4 |   struct ib_ucontext * ucontext
         8 |   unsigned long start_pgoff
        12 |   size_t npages
        16 |   bool driver_removed
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ib_pd
         0 |   u32 local_dma_lkey
         4 |   u32 flags
         8 |   struct ib_device * device
        12 |   struct ib_uobject * uobject
        16 |   atomic_t usecnt
        16 |     int counter
        20 |   u32 unsafe_global_rkey
        24 |   struct ib_mr * __internal_mr
        28 |   struct rdma_restrack_entry res
        28 |     bool valid
    29:0-0 |     u8 no_track
        32 |     struct kref kref
        32 |       struct refcount_struct refcount
        32 |         atomic_t refs
        32 |           int counter
        36 |     struct completion comp
        36 |       unsigned int done
        40 |       struct swait_queue_head wait
        40 |         struct raw_spinlock lock
        40 |           arch_spinlock_t raw_lock
        40 |             volatile unsigned int slock
        44 |           unsigned int magic
        48 |           unsigned int owner_cpu
        52 |           void * owner
        56 |           struct lockdep_map dep_map
        56 |             struct lock_class_key * key
        60 |             struct lock_class *[2] class_cache
        68 |             const char * name
        72 |             u8 wait_type_outer
        73 |             u8 wait_type_inner
        74 |             u8 lock_type
        76 |             int cpu
        80 |             unsigned long ip
        84 |         struct list_head task_list
        84 |           struct list_head * next
        88 |           struct list_head * prev
        92 |     struct task_struct * task
        96 |     const char * kern_name
       100 |     enum rdma_restrack_type type
       104 |     bool user
       108 |     u32 id
           | [sizeof=112, align=4]

*** Dumping AST Record Layout
         0 | struct ib_udata
         0 |   const void * inbuf
         4 |   void * outbuf
         8 |   size_t inlen
        12 |   size_t outlen
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_ah
         0 |   struct ib_device * device
         4 |   struct ib_pd * pd
         8 |   struct ib_uobject * uobject
        12 |   const struct ib_gid_attr * sgid_attr
        16 |   enum rdma_ah_attr_type type
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq_attr
         0 |   u32 max_wr
         4 |   u32 max_sge
         8 |   u32 srq_limit
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1083:4)
         0 |   u32 max_num_tags
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | union ib_srq_init_attr::(anonymous at ../include/rdma/ib_verbs.h:1078:3)
         0 |   struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1079:4) xrc
         0 |     struct ib_xrcd * xrcd
         0 |   struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1083:4) tag_matching
         0 |     u32 max_num_tags
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1076:2)
         0 |   struct ib_cq * cq
         4 |   union ib_srq_init_attr::(anonymous at ../include/rdma/ib_verbs.h:1078:3) 
         4 |     struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1079:4) xrc
         4 |       struct ib_xrcd * xrcd
         4 |     struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1083:4) tag_matching
         4 |       u32 max_num_tags
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq_init_attr
         0 |   void (*)(struct ib_event *, void *) event_handler
         4 |   void * srq_context
         8 |   struct ib_srq_attr attr
         8 |     u32 max_wr
        12 |     u32 max_sge
        16 |     u32 srq_limit
        20 |   enum ib_srq_type srq_type
        24 |   struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1076:2) ext
        24 |     struct ib_cq * cq
        28 |     union ib_srq_init_attr::(anonymous at ../include/rdma/ib_verbs.h:1078:3) 
        28 |       struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1079:4) xrc
        28 |         struct ib_xrcd * xrcd
        28 |       struct ib_srq_init_attr::(unnamed at ../include/rdma/ib_verbs.h:1083:4) tag_matching
        28 |         u32 max_num_tags
           | [sizeof=32, align=4]

*** Dumping AST Record Layout
         0 | union ib_srq::(anonymous at ../include/rdma/ib_verbs.h:1635:3)
         0 |   struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1636:4) xrc
         0 |     struct ib_xrcd * xrcd
         4 |     u32 srq_num
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1633:2)
         0 |   struct ib_cq * cq
         4 |   union ib_srq::(anonymous at ../include/rdma/ib_verbs.h:1635:3) 
         4 |     struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1636:4) xrc
         4 |       struct ib_xrcd * xrcd
         8 |       u32 srq_num
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_srq
         0 |   struct ib_device * device
         4 |   struct ib_pd * pd
         8 |   struct ib_usrq_object * uobject
        12 |   void (*)(struct ib_event *, void *) event_handler
        16 |   void * srq_context
        20 |   enum ib_srq_type srq_type
        24 |   atomic_t usecnt
        24 |     int counter
        28 |   struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1633:2) ext
        28 |     struct ib_cq * cq
        32 |     union ib_srq::(anonymous at ../include/rdma/ib_verbs.h:1635:3) 
        32 |       struct ib_srq::(unnamed at ../include/rdma/ib_verbs.h:1636:4) xrc
        32 |         struct ib_xrcd * xrcd
        36 |         u32 srq_num
        40 |   struct rdma_restrack_entry res
        40 |     bool valid
    41:0-0 |     u8 no_track
        44 |     struct kref kref
        44 |       struct refcount_struct refcount
        44 |         atomic_t refs
        44 |           int counter
        48 |     struct completion comp
        48 |       unsigned int done
        52 |       struct swait_queue_head wait
        52 |         struct raw_spinlock lock
        52 |           arch_spinlock_t raw_lock
        52 |             volatile unsigned int slock
        56 |           unsigned int magic
        60 |           unsigned int owner_cpu
        64 |           void * owner
        68 |           struct lockdep_map dep_map
        68 |             struct lock_class_key * key
        72 |             struct lock_class *[2] class_cache
        80 |             const char * name
        84 |             u8 wait_type_outer
        85 |             u8 wait_type_inner
        86 |             u8 lock_type
        88 |             int cpu
        92 |             unsigned long ip
        96 |         struct list_head task_list
        96 |           struct list_head * next
       100 |           struct list_head * prev
       104 |     struct task_struct * task
       108 |     const char * kern_name
       112 |     enum rdma_restrack_type type
       116 |     bool user
       120 |     u32 id
           | [sizeof=124, align=4]

*** Dumping AST Record Layout
         0 | union ib_recv_wr::(anonymous at ../include/rdma/ib_verbs.h:1453:2)
         0 |   u64 wr_id
         0 |   struct ib_cqe * wr_cqe
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_recv_wr
         0 |   struct ib_recv_wr * next
         8 |   union ib_recv_wr::(anonymous at ../include/rdma/ib_verbs.h:1453:2) 
         8 |     u64 wr_id
         8 |     struct ib_cqe * wr_cqe
        16 |   struct ib_sge * sg_list
        20 |   int num_sge
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ib_qp_cap
         0 |   u32 max_send_wr
         4 |   u32 max_recv_wr
         8 |   u32 max_send_sge
        12 |   u32 max_recv_sge
        16 |   u32 max_inline_data
        20 |   u32 max_rdma_ctxs
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct ib_qp_init_attr
         0 |   void (*)(struct ib_event *, void *) event_handler
         4 |   void * qp_context
         8 |   struct ib_cq * send_cq
        12 |   struct ib_cq * recv_cq
        16 |   struct ib_srq * srq
        20 |   struct ib_xrcd * xrcd
        24 |   struct ib_qp_cap cap
        24 |     u32 max_send_wr
        28 |     u32 max_recv_wr
        32 |     u32 max_send_sge
        36 |     u32 max_recv_sge
        40 |     u32 max_inline_data
        44 |     u32 max_rdma_ctxs
        48 |   enum ib_sig_type sq_sig_type
        52 |   enum ib_qp_type qp_type
        56 |   u32 create_flags
        60 |   u32 port_num
        64 |   struct ib_rwq_ind_table * rwq_ind_tbl
        68 |   u32 source_qpn
           | [sizeof=72, align=4]

*** Dumping AST Record Layout
         0 | struct ib_qp
         0 |   struct ib_device * device
         4 |   struct ib_pd * pd
         8 |   struct ib_cq * send_cq
        12 |   struct ib_cq * recv_cq
        16 |   struct spinlock mr_lock
        16 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        16 |       struct raw_spinlock rlock
        16 |         arch_spinlock_t raw_lock
        16 |           volatile unsigned int slock
        20 |         unsigned int magic
        24 |         unsigned int owner_cpu
        28 |         void * owner
        32 |         struct lockdep_map dep_map
        32 |           struct lock_class_key * key
        36 |           struct lock_class *[2] class_cache
        44 |           const char * name
        48 |           u8 wait_type_outer
        49 |           u8 wait_type_inner
        50 |           u8 lock_type
        52 |           int cpu
        56 |           unsigned long ip
        16 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        16 |         u8[16] __padding
        32 |         struct lockdep_map dep_map
        32 |           struct lock_class_key * key
        36 |           struct lock_class *[2] class_cache
        44 |           const char * name
        48 |           u8 wait_type_outer
        49 |           u8 wait_type_inner
        50 |           u8 lock_type
        52 |           int cpu
        56 |           unsigned long ip
        60 |   int mrs_used
        64 |   struct list_head rdma_mrs
        64 |     struct list_head * next
        68 |     struct list_head * prev
        72 |   struct list_head sig_mrs
        72 |     struct list_head * next
        76 |     struct list_head * prev
        80 |   struct ib_srq * srq
        84 |   struct completion srq_completion
        84 |     unsigned int done
        88 |     struct swait_queue_head wait
        88 |       struct raw_spinlock lock
        88 |         arch_spinlock_t raw_lock
        88 |           volatile unsigned int slock
        92 |         unsigned int magic
        96 |         unsigned int owner_cpu
       100 |         void * owner
       104 |         struct lockdep_map dep_map
       104 |           struct lock_class_key * key
       108 |           struct lock_class *[2] class_cache
       116 |           const char * name
       120 |           u8 wait_type_outer
       121 |           u8 wait_type_inner
       122 |           u8 lock_type
       124 |           int cpu
       128 |           unsigned long ip
       132 |       struct list_head task_list
       132 |         struct list_head * next
       136 |         struct list_head * prev
       140 |   struct ib_xrcd * xrcd
       144 |   struct list_head xrcd_list
       144 |     struct list_head * next
       148 |     struct list_head * prev
       152 |   atomic_t usecnt
       152 |     int counter
       156 |   struct list_head open_list
       156 |     struct list_head * next
       160 |     struct list_head * prev
       164 |   struct ib_qp * real_qp
       168 |   struct ib_uqp_object * uobject
       172 |   void (*)(struct ib_event *, void *) event_handler
       176 |   void (*)(struct ib_event *, void *) registered_event_handler
       180 |   void * qp_context
       184 |   const struct ib_gid_attr * av_sgid_attr
       188 |   const struct ib_gid_attr * alt_path_sgid_attr
       192 |   u32 qp_num
       196 |   u32 max_write_sge
       200 |   u32 max_read_sge
       204 |   enum ib_qp_type qp_type
       208 |   struct ib_rwq_ind_table * rwq_ind_tbl
       212 |   struct ib_qp_security * qp_sec
       216 |   u32 port
       220 |   bool integrity_en
       224 |   struct rdma_restrack_entry res
       224 |     bool valid
   225:0-0 |     u8 no_track
       228 |     struct kref kref
       228 |       struct refcount_struct refcount
       228 |         atomic_t refs
       228 |           int counter
       232 |     struct completion comp
       232 |       unsigned int done
       236 |       struct swait_queue_head wait
       236 |         struct raw_spinlock lock
       236 |           arch_spinlock_t raw_lock
       236 |             volatile unsigned int slock
       240 |           unsigned int magic
       244 |           unsigned int owner_cpu
       248 |           void * owner
       252 |           struct lockdep_map dep_map
       252 |             struct lock_class_key * key
       256 |             struct lock_class *[2] class_cache
       264 |             const char * name
       268 |             u8 wait_type_outer
       269 |             u8 wait_type_inner
       270 |             u8 lock_type
       272 |             int cpu
       276 |             unsigned long ip
       280 |         struct list_head task_list
       280 |           struct list_head * next
       284 |           struct list_head * prev
       288 |     struct task_struct * task
       292 |     const char * kern_name
       296 |     enum rdma_restrack_type type
       300 |     bool user
       304 |     u32 id
       308 |   struct rdma_counter * counter
           | [sizeof=312, align=4]

*** Dumping AST Record Layout
         0 | union ib_cq::(anonymous at ../include/rdma/ib_verbs.h:1605:2)
         0 |   struct irq_poll iop
         0 |     struct list_head list
         0 |       struct list_head * next
         4 |       struct list_head * prev
         8 |     unsigned long state
        12 |     int weight
        16 |     irq_poll_fn * poll
         0 |   struct work_struct work
         0 |     atomic_t data
         0 |       int counter
         4 |     struct list_head entry
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     work_func_t func
        16 |     struct lockdep_map lockdep_map
        16 |       struct lock_class_key * key
        20 |       struct lock_class *[2] class_cache
        28 |       const char * name
        32 |       u8 wait_type_outer
        33 |       u8 wait_type_inner
        34 |       u8 lock_type
        36 |       int cpu
        40 |       unsigned long ip
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct ib_cq
         0 |   struct ib_device * device
         4 |   struct ib_ucq_object * uobject
         8 |   ib_comp_handler comp_handler
        12 |   void (*)(struct ib_event *, void *) event_handler
        16 |   void * cq_context
        20 |   int cqe
        24 |   unsigned int cqe_used
        28 |   atomic_t usecnt
        28 |     int counter
        32 |   enum ib_poll_context poll_ctx
        36 |   struct ib_wc * wc
        40 |   struct list_head pool_entry
        40 |     struct list_head * next
        44 |     struct list_head * prev
        48 |   union ib_cq::(anonymous at ../include/rdma/ib_verbs.h:1605:2) 
        48 |     struct irq_poll iop
        48 |       struct list_head list
        48 |         struct list_head * next
        52 |         struct list_head * prev
        56 |       unsigned long state
        60 |       int weight
        64 |       irq_poll_fn * poll
        48 |     struct work_struct work
        48 |       atomic_t data
        48 |         int counter
        52 |       struct list_head entry
        52 |         struct list_head * next
        56 |         struct list_head * prev
        60 |       work_func_t func
        64 |       struct lockdep_map lockdep_map
        64 |         struct lock_class_key * key
        68 |         struct lock_class *[2] class_cache
        76 |         const char * name
        80 |         u8 wait_type_outer
        81 |         u8 wait_type_inner
        82 |         u8 lock_type
        84 |         int cpu
        88 |         unsigned long ip
        92 |   struct workqueue_struct * comp_wq
        96 |   struct dim * dim
       104 |   ktime_t timestamp
   112:0-0 |   u8 interrupt
   112:1-1 |   u8 shared
       116 |   unsigned int comp_vector
       120 |   struct rdma_restrack_entry res
       120 |     bool valid
   121:0-0 |     u8 no_track
       124 |     struct kref kref
       124 |       struct refcount_struct refcount
       124 |         atomic_t refs
       124 |           int counter
       128 |     struct completion comp
       128 |       unsigned int done
       132 |       struct swait_queue_head wait
       132 |         struct raw_spinlock lock
       132 |           arch_spinlock_t raw_lock
       132 |             volatile unsigned int slock
       136 |           unsigned int magic
       140 |           unsigned int owner_cpu
       144 |           void * owner
       148 |           struct lockdep_map dep_map
       148 |             struct lock_class_key * key
       152 |             struct lock_class *[2] class_cache
       160 |             const char * name
       164 |             u8 wait_type_outer
       165 |             u8 wait_type_inner
       166 |             u8 lock_type
       168 |             int cpu
       172 |             unsigned long ip
       176 |         struct list_head task_list
       176 |           struct list_head * next
       180 |           struct list_head * prev
       184 |     struct task_struct * task
       188 |     const char * kern_name
       192 |     enum rdma_restrack_type type
       196 |     bool user
       200 |     u32 id
           | [sizeof=208, align=8]

*** Dumping AST Record Layout
         0 | union ib_wc::(unnamed at ../include/rdma/ib_verbs.h:1024:2)
         0 |   __be32 imm_data
         0 |   u32 invalidate_rkey
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_wc
         0 |   union ib_wc::(anonymous at ../include/rdma/ib_verbs.h:1015:2) 
         0 |     u64 wr_id
         0 |     struct ib_cqe * wr_cqe
         8 |   enum ib_wc_status status
        12 |   enum ib_wc_opcode opcode
        16 |   u32 vendor_err
        20 |   u32 byte_len
        24 |   struct ib_qp * qp
        28 |   union ib_wc::(unnamed at ../include/rdma/ib_verbs.h:1024:2) ex
        28 |     __be32 imm_data
        28 |     u32 invalidate_rkey
        32 |   u32 src_qp
        36 |   u32 slid
        40 |   int wc_flags
        44 |   u16 pkey_index
        46 |   u8 sl
        47 |   u8 dlid_path_bits
        48 |   u32 port_num
        52 |   u8[6] smac
        58 |   u16 vlan_id
        60 |   u8 network_hdr_type
           | [sizeof=64, align=8]

*** Dumping AST Record Layout
         0 | union ib_mr::(anonymous at ../include/rdma/ib_verbs.h:1842:2)
         0 |   struct ib_uobject * uobject
         0 |   struct list_head qp_entry
         0 |     struct list_head * next
         4 |     struct list_head * prev
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mr
         0 |   struct ib_device * device
         4 |   struct ib_pd * pd
         8 |   u32 lkey
        12 |   u32 rkey
        16 |   u64 iova
        24 |   u64 length
        32 |   unsigned int page_size
        36 |   enum ib_mr_type type
        40 |   bool need_inval
        44 |   union ib_mr::(anonymous at ../include/rdma/ib_verbs.h:1842:2) 
        44 |     struct ib_uobject * uobject
        44 |     struct list_head qp_entry
        44 |       struct list_head * next
        48 |       struct list_head * prev
        52 |   struct ib_dm * dm
        56 |   struct ib_sig_attrs * sig_attrs
        60 |   struct rdma_restrack_entry res
        60 |     bool valid
    61:0-0 |     u8 no_track
        64 |     struct kref kref
        64 |       struct refcount_struct refcount
        64 |         atomic_t refs
        64 |           int counter
        68 |     struct completion comp
        68 |       unsigned int done
        72 |       struct swait_queue_head wait
        72 |         struct raw_spinlock lock
        72 |           arch_spinlock_t raw_lock
        72 |             volatile unsigned int slock
        76 |           unsigned int magic
        80 |           unsigned int owner_cpu
        84 |           void * owner
        88 |           struct lockdep_map dep_map
        88 |             struct lock_class_key * key
        92 |             struct lock_class *[2] class_cache
       100 |             const char * name
       104 |             u8 wait_type_outer
       105 |             u8 wait_type_inner
       106 |             u8 lock_type
       108 |             int cpu
       112 |             unsigned long ip
       116 |         struct list_head task_list
       116 |           struct list_head * next
       120 |           struct list_head * prev
       124 |     struct task_struct * task
       128 |     const char * kern_name
       132 |     enum rdma_restrack_type type
       136 |     bool user
       140 |     u32 id
           | [sizeof=144, align=8]

*** Dumping AST Record Layout
         0 | struct roce_ah_attr
         0 |   u8[6] dmac
           | [sizeof=6, align=1]

*** Dumping AST Record Layout
         0 | struct opa_ah_attr
         0 |   u32 dlid
         4 |   u8 src_path_bits
         5 |   bool make_grd
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union rdma_ah_attr::(anonymous at ../include/rdma/ib_verbs.h:948:2)
         0 |   struct ib_ah_attr ib
         0 |     u16 dlid
         2 |     u8 src_path_bits
         0 |   struct roce_ah_attr roce
         0 |     u8[6] dmac
         0 |   struct opa_ah_attr opa
         0 |     u32 dlid
         4 |     u8 src_path_bits
         5 |     bool make_grd
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct rdma_ah_attr
         0 |   struct ib_global_route grh
         0 |     const struct ib_gid_attr * sgid_attr
         8 |     union ib_gid dgid
         8 |       u8[16] raw
         8 |       struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
         8 |         __be64 subnet_prefix
        16 |         __be64 interface_id
        24 |     u32 flow_label
        28 |     u8 sgid_index
        29 |     u8 hop_limit
        30 |     u8 traffic_class
        32 |   u8 sl
        33 |   u8 static_rate
        36 |   u32 port_num
        40 |   u8 ah_flags
        44 |   enum rdma_ah_attr_type type
        48 |   union rdma_ah_attr::(anonymous at ../include/rdma/ib_verbs.h:948:2) 
        48 |     struct ib_ah_attr ib
        48 |       u16 dlid
        50 |       u8 src_path_bits
        48 |     struct roce_ah_attr roce
        48 |       u8[6] dmac
        48 |     struct opa_ah_attr opa
        48 |       u32 dlid
        52 |       u8 src_path_bits
        53 |       bool make_grd
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct sg_append_table
         0 |   struct sg_table sgt
         0 |     struct scatterlist * sgl
         4 |     unsigned int nents
         8 |     unsigned int orig_nents
        12 |   struct scatterlist * prv
        16 |   unsigned int total_nents
           | [sizeof=20, align=4]

*** Dumping AST Record Layout
         0 | struct ib_umem
         0 |   struct ib_device * ibdev
         4 |   struct mm_struct * owning_mm
         8 |   u64 iova
        16 |   size_t length
        20 |   unsigned long address
    24:0-0 |   u32 writable
    24:1-1 |   u32 is_odp
    24:2-2 |   u32 is_dmabuf
        28 |   struct sg_append_table sgt_append
        28 |     struct sg_table sgt
        28 |       struct scatterlist * sgl
        32 |       unsigned int nents
        36 |       unsigned int orig_nents
        40 |     struct scatterlist * prv
        44 |     unsigned int total_nents
           | [sizeof=48, align=8]

*** Dumping AST Record Layout
         0 | struct ib_umem_dmabuf
         0 |   struct ib_umem umem
         0 |     struct ib_device * ibdev
         4 |     struct mm_struct * owning_mm
         8 |     u64 iova
        16 |     size_t length
        20 |     unsigned long address
    24:0-0 |     u32 writable
    24:1-1 |     u32 is_odp
    24:2-2 |     u32 is_dmabuf
        28 |     struct sg_append_table sgt_append
        28 |       struct sg_table sgt
        28 |         struct scatterlist * sgl
        32 |         unsigned int nents
        36 |         unsigned int orig_nents
        40 |       struct scatterlist * prv
        44 |       unsigned int total_nents
        48 |   struct dma_buf_attachment * attach
        52 |   struct sg_table * sgt
        56 |   struct scatterlist * first_sg
        60 |   struct scatterlist * last_sg
        64 |   unsigned long first_sg_offset
        68 |   unsigned long last_sg_trim
        72 |   void * private
    76:0-0 |   u8 pinned
           | [sizeof=80, align=8]

*** Dumping AST Record Layout
         0 | struct ib_block_iter
         0 |   struct scatterlist * __sg
         4 |   dma_addr_t __dma_addr
         8 |   size_t __sg_numblocks
        12 |   unsigned int __sg_nents
        16 |   unsigned int __sg_advance
        20 |   unsigned int __pg_bit
           | [sizeof=24, align=4]

*** Dumping AST Record Layout
         0 | struct uverbs_obj_type
         0 |   const struct uverbs_obj_type_class *const type_class
         4 |   size_t obj_size
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct uverbs_attr_spec::(unnamed at ../include/rdma/uverbs_ioctl.h:60:3)
         0 |   u16 len
         2 |   u16 min_len
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct uverbs_attr_spec::(unnamed at ../include/rdma/uverbs_ioctl.h:83:3)
         0 |   const struct uverbs_attr_spec * ids
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct uapi_radix_data::(unnamed at ../include/rdma/uverbs_ioctl.h:131:29)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct uapi_radix_data::(unnamed at ../include/rdma/uverbs_ioctl.h:143:31)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct uapi_radix_data::(unnamed at ../include/rdma/uverbs_ioctl.h:153:28)
       0:- |   int 
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct uapi_definition::(unnamed at ../include/rdma/uverbs_ioctl.h:335:3)
         0 |   u16 object_id
           | [sizeof=2, align=2]

*** Dumping AST Record Layout
         0 | union uverbs_ptr_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:604:2)
         0 |   void * ptr
         0 |   u64 data
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct uverbs_ptr_attr
         0 |   union uverbs_ptr_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:604:2) 
         0 |     void * ptr
         0 |     u64 data
         8 |   u16 len
        10 |   u16 uattr_idx
        12 |   u8 enum_id
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct uverbs_obj_attr
         0 |   struct ib_uobject * uobject
         4 |   const struct uverbs_api_attr * attr_elm
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct uverbs_objs_arr_attr
         0 |   struct ib_uobject ** uobjects
         4 |   u16 len
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union uverbs_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:624:2)
         0 |   struct uverbs_ptr_attr ptr_attr
         0 |     union uverbs_ptr_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:604:2) 
         0 |       void * ptr
         0 |       u64 data
         8 |     u16 len
        10 |     u16 uattr_idx
        12 |     u8 enum_id
         0 |   struct uverbs_obj_attr obj_attr
         0 |     struct ib_uobject * uobject
         4 |     const struct uverbs_api_attr * attr_elm
         0 |   struct uverbs_objs_arr_attr objs_arr_attr
         0 |     struct ib_uobject ** uobjects
         4 |     u16 len
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2)
         0 |   struct ib_udata driver_udata
         0 |     const void * inbuf
         4 |     void * outbuf
         8 |     size_t inlen
        12 |     size_t outlen
        16 |   struct ib_udata ucore
        16 |     const void * inbuf
        20 |     void * outbuf
        24 |     size_t inlen
        28 |     size_t outlen
        32 |   struct ib_uverbs_file * ufile
        36 |   struct ib_ucontext * context
        40 |   struct ib_uobject * uobject
        44 |   unsigned long[2] attr_present
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct uverbs_attr
         0 |   union uverbs_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:624:2) 
         0 |     struct uverbs_ptr_attr ptr_attr
         0 |       union uverbs_ptr_attr::(anonymous at ../include/rdma/uverbs_ioctl.h:604:2) 
         0 |         void * ptr
         0 |         u64 data
         8 |       u16 len
        10 |       u16 uattr_idx
        12 |       u8 enum_id
         0 |     struct uverbs_obj_attr obj_attr
         0 |       struct ib_uobject * uobject
         4 |       const struct uverbs_api_attr * attr_elm
         0 |     struct uverbs_objs_arr_attr objs_arr_attr
         0 |       struct ib_uobject ** uobjects
         4 |       u16 len
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle_hdr
         0 |   struct ib_udata driver_udata
         0 |     const void * inbuf
         4 |     void * outbuf
         8 |     size_t inlen
        12 |     size_t outlen
        16 |   struct ib_udata ucore
        16 |     const void * inbuf
        20 |     void * outbuf
        24 |     size_t inlen
        28 |     size_t outlen
        32 |   struct ib_uverbs_file * ufile
        36 |   struct ib_ucontext * context
        40 |   struct ib_uobject * uobject
        44 |   unsigned long[2] attr_present
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | union uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2)
         0 |   struct uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2) 
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
         0 |   struct uverbs_attr_bundle_hdr hdr
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
            | [sizeof=52, align=4]

In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:49:
In file included from ../include/rdma/uverbs_std_types.h:10:
../include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
   16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
      |                                 ^
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~
../include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle
         0 |   union uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2) 
         0 |     struct uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2) 
         0 |       struct ib_udata driver_udata
         0 |         const void * inbuf
         4 |         void * outbuf
         8 |         size_t inlen
        12 |         size_t outlen
        16 |       struct ib_udata ucore
        16 |         const void * inbuf
        20 |         void * outbuf
        24 |         size_t inlen
        28 |         size_t outlen
        32 |       struct ib_uverbs_file * ufile
        36 |       struct ib_ucontext * context
        40 |       struct ib_uobject * uobject
        44 |       unsigned long[2] attr_present
         0 |     struct uverbs_attr_bundle_hdr hdr
         0 |       struct ib_udata driver_udata
         0 |         const void * inbuf
         4 |         void * outbuf
         8 |         size_t inlen
        12 |         size_t outlen
        16 |       struct ib_udata ucore
        16 |         const void * inbuf
        20 |         void * outbuf
        24 |         size_t inlen
        28 |         size_t outlen
        32 |       struct ib_uverbs_file * ufile
        36 |       struct ib_ucontext * context
        40 |       struct ib_uobject * uobject
        44 |       unsigned long[2] attr_present
        56 |   struct uverbs_attr[] attrs
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uobject
         0 |   u64 user_handle
         8 |   struct ib_uverbs_file * ufile
        12 |   struct ib_ucontext * context
        16 |   void * object
        20 |   struct list_head list
        20 |     struct list_head * next
        24 |     struct list_head * prev
        28 |   struct ib_rdmacg_object cg_obj
        28 |   int id
        32 |   struct kref ref
        32 |     struct refcount_struct refcount
        32 |       atomic_t refs
        32 |         int counter
        36 |   atomic_t usecnt
        36 |     int counter
        40 |   struct callback_head rcu
        40 |     struct callback_head * next
        44 |     void (*)(struct callback_head *) func
        48 |   const struct uverbs_api_object * uapi_object
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uflow_object
         0 |   struct ib_uobject uobject
         0 |     u64 user_handle
         8 |     struct ib_uverbs_file * ufile
        12 |     struct ib_ucontext * context
        16 |     void * object
        20 |     struct list_head list
        20 |       struct list_head * next
        24 |       struct list_head * prev
        28 |     struct ib_rdmacg_object cg_obj
        28 |     int id
        32 |     struct kref ref
        32 |       struct refcount_struct refcount
        32 |         atomic_t refs
        32 |           int counter
        36 |     atomic_t usecnt
        36 |       int counter
        40 |     struct callback_head rcu
        40 |       struct callback_head * next
        44 |       void (*)(struct callback_head *) func
        48 |     const struct uverbs_api_object * uapi_object
        56 |   struct ib_uflow_resources * resources
           | [sizeof=64, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_async_event_desc
         0 |   __u64 element
         8 |   __u32 event_type
        12 |   __u32 reserved
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_comp_event_desc
         0 |   __u64 cq_handle
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | union ib_uverbs_event::(unnamed at ../drivers/infiniband/core/uverbs.h:166:2)
         0 |   struct ib_uverbs_async_event_desc async
         0 |     __u64 element
         8 |     __u32 event_type
        12 |     __u32 reserved
         0 |   struct ib_uverbs_comp_event_desc comp
         0 |     __u64 cq_handle
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uevent_object
         0 |   struct ib_uobject uobject
         0 |     u64 user_handle
         8 |     struct ib_uverbs_file * ufile
        12 |     struct ib_ucontext * context
        16 |     void * object
        20 |     struct list_head list
        20 |       struct list_head * next
        24 |       struct list_head * prev
        28 |     struct ib_rdmacg_object cg_obj
        28 |     int id
        32 |     struct kref ref
        32 |       struct refcount_struct refcount
        32 |         atomic_t refs
        32 |           int counter
        36 |     atomic_t usecnt
        36 |       int counter
        40 |     struct callback_head rcu
        40 |       struct callback_head * next
        44 |       void (*)(struct callback_head *) func
        48 |     const struct uverbs_api_object * uapi_object
        56 |   struct ib_uverbs_async_event_file * event_file
        60 |   struct list_head event_list
        60 |     struct list_head * next
        64 |     struct list_head * prev
        68 |   u32 events_reported
           | [sizeof=72, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:255:4)
         0 |   __u32 type
         4 |   __u16 size
         6 |   __u16 reserved
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:253:3)
         0 |   struct ib_uverbs_flow_spec_hdr hdr
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
         8 |     __u64[0] flow_spec_data
         0 |   struct ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:255:4) 
         0 |     __u32 type
         4 |     __u16 size
         6 |     __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_eth_filter
         0 |   __u8[6] dst_mac
         6 |   __u8[6] src_mac
        12 |   __be16 ether_type
        14 |   __be16 vlan_tag
           | [sizeof=16, align=2]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_eth
         0 |   union ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:932:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:934:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   struct ib_uverbs_flow_eth_filter val
         8 |     __u8[6] dst_mac
        14 |     __u8[6] src_mac
        20 |     __be16 ether_type
        22 |     __be16 vlan_tag
        24 |   struct ib_uverbs_flow_eth_filter mask
        24 |     __u8[6] dst_mac
        30 |     __u8[6] src_mac
        36 |     __be16 ether_type
        38 |     __be16 vlan_tag
           | [sizeof=40, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_ipv4_filter
         0 |   __be32 src_ip
         4 |   __be32 dst_ip
         8 |   __u8 proto
         9 |   __u8 tos
        10 |   __u8 ttl
        11 |   __u8 flags
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_ipv4
         0 |   union ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:954:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:956:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   struct ib_uverbs_flow_ipv4_filter val
         8 |     __be32 src_ip
        12 |     __be32 dst_ip
        16 |     __u8 proto
        17 |     __u8 tos
        18 |     __u8 ttl
        19 |     __u8 flags
        20 |   struct ib_uverbs_flow_ipv4_filter mask
        20 |     __be32 src_ip
        24 |     __be32 dst_ip
        28 |     __u8 proto
        29 |     __u8 tos
        30 |     __u8 ttl
        31 |     __u8 flags
           | [sizeof=32, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_esp_filter
         0 |   __u32 spi
         4 |   __u32 seq
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_esp
         0 |   union ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1080:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1082:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   struct ib_uverbs_flow_spec_esp_filter val
         8 |     __u32 spi
        12 |     __u32 seq
        16 |   struct ib_uverbs_flow_spec_esp_filter mask
        16 |     __u32 spi
        20 |     __u32 seq
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_tcp_udp_filter
         0 |   __be16 dst_port
         2 |   __be16 src_port
           | [sizeof=4, align=2]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_tcp_udp
         0 |   union ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:972:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:974:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   struct ib_uverbs_flow_tcp_udp_filter val
         8 |     __be16 dst_port
        10 |     __be16 src_port
        12 |   struct ib_uverbs_flow_tcp_udp_filter mask
        12 |     __be16 dst_port
        14 |     __be16 src_port
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_ipv6_filter
         0 |   __u8[16] src_ip
        16 |   __u8[16] dst_ip
        32 |   __be32 flow_label
        36 |   __u8 next_hdr
        37 |   __u8 traffic_class
        38 |   __u8 hop_limit
        39 |   __u8 reserved
           | [sizeof=40, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_ipv6
         0 |   union ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:995:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:997:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   struct ib_uverbs_flow_ipv6_filter val
         8 |     __u8[16] src_ip
        24 |     __u8[16] dst_ip
        40 |     __be32 flow_label
        44 |     __u8 next_hdr
        45 |     __u8 traffic_class
        46 |     __u8 hop_limit
        47 |     __u8 reserved
        48 |   struct ib_uverbs_flow_ipv6_filter mask
        48 |     __u8[16] src_ip
        64 |     __u8[16] dst_ip
        80 |     __be32 flow_label
        84 |     __u8 next_hdr
        85 |     __u8 traffic_class
        86 |     __u8 hop_limit
        87 |     __u8 reserved
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_tag
         0 |   union ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1008:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1010:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   __u32 tag_id
        12 |   __u32 reserved1
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_drop
         0 |   union ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1021:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1023:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
           | [sizeof=8, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_handle
         0 |   union ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1032:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1034:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   __u32 handle
        12 |   __u32 reserved1
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_flow_spec_action_count
         0 |   union ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1045:2) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1047:3) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |   __u32 handle
        12 |   __u32 reserved1
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | union ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:252:2)
         0 |   union ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:253:3) 
         0 |     struct ib_uverbs_flow_spec_hdr hdr
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         8 |       __u64[0] flow_spec_data
         0 |     struct ib_uverbs_flow_spec::(anonymous at ../drivers/infiniband/core/uverbs.h:255:4) 
         0 |       __u32 type
         4 |       __u16 size
         6 |       __u16 reserved
         0 |   struct ib_uverbs_flow_spec_eth eth
         0 |     union ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:932:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_eth::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:934:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     struct ib_uverbs_flow_eth_filter val
         8 |       __u8[6] dst_mac
        14 |       __u8[6] src_mac
        20 |       __be16 ether_type
        22 |       __be16 vlan_tag
        24 |     struct ib_uverbs_flow_eth_filter mask
        24 |       __u8[6] dst_mac
        30 |       __u8[6] src_mac
        36 |       __be16 ether_type
        38 |       __be16 vlan_tag
         0 |   struct ib_uverbs_flow_spec_ipv4 ipv4
         0 |     union ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:954:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_ipv4::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:956:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     struct ib_uverbs_flow_ipv4_filter val
         8 |       __be32 src_ip
        12 |       __be32 dst_ip
        16 |       __u8 proto
        17 |       __u8 tos
        18 |       __u8 ttl
        19 |       __u8 flags
        20 |     struct ib_uverbs_flow_ipv4_filter mask
        20 |       __be32 src_ip
        24 |       __be32 dst_ip
        28 |       __u8 proto
        29 |       __u8 tos
        30 |       __u8 ttl
        31 |       __u8 flags
         0 |   struct ib_uverbs_flow_spec_esp esp
         0 |     union ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1080:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_esp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1082:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     struct ib_uverbs_flow_spec_esp_filter val
         8 |       __u32 spi
        12 |       __u32 seq
        16 |     struct ib_uverbs_flow_spec_esp_filter mask
        16 |       __u32 spi
        20 |       __u32 seq
         0 |   struct ib_uverbs_flow_spec_tcp_udp tcp_udp
         0 |     union ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:972:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_tcp_udp::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:974:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     struct ib_uverbs_flow_tcp_udp_filter val
         8 |       __be16 dst_port
        10 |       __be16 src_port
        12 |     struct ib_uverbs_flow_tcp_udp_filter mask
        12 |       __be16 dst_port
        14 |       __be16 src_port
         0 |   struct ib_uverbs_flow_spec_ipv6 ipv6
         0 |     union ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:995:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_ipv6::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:997:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     struct ib_uverbs_flow_ipv6_filter val
         8 |       __u8[16] src_ip
        24 |       __u8[16] dst_ip
        40 |       __be32 flow_label
        44 |       __u8 next_hdr
        45 |       __u8 traffic_class
        46 |       __u8 hop_limit
        47 |       __u8 reserved
        48 |     struct ib_uverbs_flow_ipv6_filter mask
        48 |       __u8[16] src_ip
        64 |       __u8[16] dst_ip
        80 |       __be32 flow_label
        84 |       __u8 next_hdr
        85 |       __u8 traffic_class
        86 |       __u8 hop_limit
        87 |       __u8 reserved
         0 |   struct ib_uverbs_flow_spec_action_tag flow_tag
         0 |     union ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1008:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_action_tag::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1010:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     __u32 tag_id
        12 |     __u32 reserved1
         0 |   struct ib_uverbs_flow_spec_action_drop drop
         0 |     union ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1021:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_action_drop::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1023:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         0 |   struct ib_uverbs_flow_spec_action_handle action
         0 |     union ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1032:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_action_handle::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1034:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     __u32 handle
        12 |     __u32 reserved1
         0 |   struct ib_uverbs_flow_spec_action_count flow_count
         0 |     union ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1045:2) 
         0 |       struct ib_uverbs_flow_spec_hdr hdr
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |         __u64[0] flow_spec_data
         0 |       struct ib_uverbs_flow_spec_action_count::(anonymous at ../include/uapi/rdma/ib_user_verbs.h:1047:3) 
         0 |         __u32 type
         4 |         __u16 size
         6 |         __u16 reserved
         8 |     __u32 handle
        12 |     __u32 reserved1
           | [sizeof=88, align=8]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_event_queue
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   int is_closed
        48 |   struct wait_queue_head poll_wait
        48 |     struct spinlock lock
        48 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        48 |         struct raw_spinlock rlock
        48 |           arch_spinlock_t raw_lock
        48 |             volatile unsigned int slock
        52 |           unsigned int magic
        56 |           unsigned int owner_cpu
        60 |           void * owner
        64 |           struct lockdep_map dep_map
        64 |             struct lock_class_key * key
        68 |             struct lock_class *[2] class_cache
        76 |             const char * name
        80 |             u8 wait_type_outer
        81 |             u8 wait_type_inner
        82 |             u8 lock_type
        84 |             int cpu
        88 |             unsigned long ip
        48 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        48 |           u8[16] __padding
        64 |           struct lockdep_map dep_map
        64 |             struct lock_class_key * key
        68 |             struct lock_class *[2] class_cache
        76 |             const char * name
        80 |             u8 wait_type_outer
        81 |             u8 wait_type_inner
        82 |             u8 lock_type
        84 |             int cpu
        88 |             unsigned long ip
        92 |     struct list_head head
        92 |       struct list_head * next
        96 |       struct list_head * prev
       100 |   struct fasync_struct * async_queue
       104 |   struct list_head event_list
       104 |     struct list_head * next
       108 |     struct list_head * prev
           | [sizeof=112, align=4]

*** Dumping AST Record Layout
         0 | struct ib_event_handler
         0 |   struct ib_device * device
         4 |   void (*)(struct ib_event_handler *, struct ib_event *) handler
         8 |   struct list_head list
         8 |     struct list_head * next
        12 |     struct list_head * prev
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_uverbs_async_event_file
         0 |   struct ib_uobject uobj
         0 |     u64 user_handle
         8 |     struct ib_uverbs_file * ufile
        12 |     struct ib_ucontext * context
        16 |     void * object
        20 |     struct list_head list
        20 |       struct list_head * next
        24 |       struct list_head * prev
        28 |     struct ib_rdmacg_object cg_obj
        28 |     int id
        32 |     struct kref ref
        32 |       struct refcount_struct refcount
        32 |         atomic_t refs
        32 |           int counter
        36 |     atomic_t usecnt
        36 |       int counter
        40 |     struct callback_head rcu
        40 |       struct callback_head * next
        44 |       void (*)(struct callback_head *) func
        48 |     const struct uverbs_api_object * uapi_object
        56 |   struct ib_uverbs_event_queue ev_queue
        56 |     struct spinlock lock
        56 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        56 |         struct raw_spinlock rlock
        56 |           arch_spinlock_t raw_lock
        56 |             volatile unsigned int slock
        60 |           unsigned int magic
        64 |           unsigned int owner_cpu
        68 |           void * owner
        72 |           struct lockdep_map dep_map
        72 |             struct lock_class_key * key
        76 |             struct lock_class *[2] class_cache
        84 |             const char * name
        88 |             u8 wait_type_outer
        89 |             u8 wait_type_inner
        90 |             u8 lock_type
        92 |             int cpu
        96 |             unsigned long ip
        56 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        56 |           u8[16] __padding
        72 |           struct lockdep_map dep_map
        72 |             struct lock_class_key * key
        76 |             struct lock_class *[2] class_cache
        84 |             const char * name
        88 |             u8 wait_type_outer
        89 |             u8 wait_type_inner
        90 |             u8 lock_type
        92 |             int cpu
        96 |             unsigned long ip
       100 |     int is_closed
       104 |     struct wait_queue_head poll_wait
       104 |       struct spinlock lock
       104 |         union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       104 |           struct raw_spinlock rlock
       104 |             arch_spinlock_t raw_lock
       104 |               volatile unsigned int slock
       108 |             unsigned int magic
       112 |             unsigned int owner_cpu
       116 |             void * owner
       120 |             struct lockdep_map dep_map
       120 |               struct lock_class_key * key
       124 |               struct lock_class *[2] class_cache
       132 |               const char * name
       136 |               u8 wait_type_outer
       137 |               u8 wait_type_inner
       138 |               u8 lock_type
       140 |               int cpu
       144 |               unsigned long ip
       104 |           struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       104 |             u8[16] __padding
       120 |             struct lockdep_map dep_map
       120 |               struct lock_class_key * key
       124 |               struct lock_class *[2] class_cache
       132 |               const char * name
       136 |               u8 wait_type_outer
       137 |               u8 wait_type_inner
       138 |               u8 lock_type
       140 |               int cpu
       144 |               unsigned long ip
       148 |       struct list_head head
       148 |         struct list_head * next
       152 |         struct list_head * prev
       156 |     struct fasync_struct * async_queue
       160 |     struct list_head event_list
       160 |       struct list_head * next
       164 |       struct list_head * prev
       168 |   struct ib_event_handler event_handler
       168 |     struct ib_device * device
       172 |     void (*)(struct ib_event_handler *, struct ib_event *) handler
       176 |     struct list_head list
       176 |       struct list_head * next
       180 |       struct list_head * prev
           | [sizeof=184, align=8]

*** Dumping AST Record Layout
         0 | struct net_generic::(unnamed at ../include/net/netns/generic.h:36:3)
           | [sizeof=0, align=1]

*** Dumping AST Record Layout
         0 | struct net_generic::(unnamed at ../include/net/netns/generic.h:31:3)
         0 |   unsigned int len
         4 |   struct callback_head rcu
         4 |     struct callback_head * next
         8 |     void (*)(struct callback_head *) func
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct net_generic::(anonymous at ../include/net/netns/generic.h:36:3)
         0 |   struct net_generic::(unnamed at ../include/net/netns/generic.h:36:3) __empty_ptr
         0 |   void *[] ptr
           | [sizeof=0, align=4]

*** Dumping AST Record Layout
         0 | union net_generic::(anonymous at ../include/net/netns/generic.h:30:2)
         0 |   struct net_generic::(unnamed at ../include/net/netns/generic.h:31:3) s
         0 |     unsigned int len
         4 |     struct callback_head rcu
         4 |       struct callback_head * next
         8 |       void (*)(struct callback_head *) func
         0 |   struct net_generic::(anonymous at ../include/net/netns/generic.h:36:3) 
         0 |     struct net_generic::(unnamed at ../include/net/netns/generic.h:36:3) __empty_ptr
         0 |     void *[] ptr
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_hdr
         0 |   u8 base_version
         1 |   u8 mgmt_class
         2 |   u8 class_version
         3 |   u8 method
         4 |   __be16 status
         6 |   __be16 class_specific
         8 |   __be64 tid
        16 |   __be16 attr_id
        18 |   __be16 resv
        20 |   __be32 attr_mod
           | [sizeof=24, align=8]

*** Dumping AST Record Layout
         0 | struct ib_mad_notice_attr::(unnamed at ../include/rdma/ib_mad.h:384:3)
         0 |   u8[54] details
           | [sizeof=54, align=1]

*** Dumping AST Record Layout
         0 | struct ib_rmpp_hdr
         0 |   u8 rmpp_version
         1 |   u8 rmpp_type
         2 |   u8 rmpp_rtime_flags
         3 |   u8 rmpp_status
         4 |   __be32 seg_num
         8 |   __be32 paylen_newwin
           | [sizeof=12, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_send_buf
         0 |   struct ib_mad_send_buf * next
         4 |   void * mad
         8 |   struct ib_mad_agent * mad_agent
        12 |   struct ib_ah * ah
        16 |   void *[2] context
        24 |   int hdr_len
        28 |   int data_len
        32 |   int seg_count
        36 |   int seg_size
        40 |   int seg_rmpp_size
        44 |   int timeout_ms
        48 |   int retries
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:36:3)
         0 |   uint8_t[2016] data
           | [sizeof=2016, align=1]

*** Dumping AST Record Layout
         0 | struct ib_smp
         0 |   u8 base_version
         1 |   u8 mgmt_class
         2 |   u8 class_version
         3 |   u8 method
         4 |   __be16 status
         6 |   u8 hop_ptr
         7 |   u8 hop_cnt
         8 |   __be64 tid
        16 |   __be16 attr_id
        18 |   __be16 resv
        20 |   __be32 attr_mod
        24 |   __be64 mkey
        32 |   __be16 dr_slid
        34 |   __be16 dr_dlid
        36 |   u8[28] reserved
        64 |   u8[64] data
       128 |   u8[64] initial_path
       192 |   u8[64] return_path
           | [sizeof=256, align=1]

*** Dumping AST Record Layout
         0 | struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:39:3)
         0 |   __be32 dr_slid
         4 |   __be32 dr_dlid
         8 |   u8[64] initial_path
        72 |   u8[64] return_path
       136 |   u8[8] reserved
       144 |   u8[1872] data
           | [sizeof=2016, align=4]

*** Dumping AST Record Layout
         0 | union opa_smp::(unnamed at ../include/rdma/opa_smi.h:35:2)
         0 |   struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:36:3) lid
         0 |     uint8_t[2016] data
         0 |   struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:39:3) dr
         0 |     __be32 dr_slid
         4 |     __be32 dr_dlid
         8 |     u8[64] initial_path
        72 |     u8[64] return_path
       136 |     u8[8] reserved
       144 |     u8[1872] data
           | [sizeof=2016, align=4]

*** Dumping AST Record Layout
         0 | struct opa_smp
         0 |   u8 base_version
         1 |   u8 mgmt_class
         2 |   u8 class_version
         3 |   u8 method
         4 |   __be16 status
         6 |   u8 hop_ptr
         7 |   u8 hop_cnt
         8 |   __be64 tid
        16 |   __be16 attr_id
        18 |   __be16 resv
        20 |   __be32 attr_mod
        24 |   __be64 mkey
        32 |   union opa_smp::(unnamed at ../include/rdma/opa_smi.h:35:2) route
        32 |     struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:36:3) lid
        32 |       uint8_t[2016] data
        32 |     struct opa_smp::(unnamed at ../include/rdma/opa_smi.h:39:3) dr
        32 |       __be32 dr_slid
        36 |       __be32 dr_dlid
        40 |       u8[64] initial_path
       104 |       u8[64] return_path
       168 |       u8[8] reserved
       176 |       u8[1872] data
           | [sizeof=2048, align=1]

*** Dumping AST Record Layout
         0 | struct ib_cqe
         0 |   void (*)(struct ib_cq *, struct ib_wc *) done
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_list_head
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct ib_cqe cqe
         8 |     void (*)(struct ib_cq *, struct ib_wc *) done
        12 |   struct ib_mad_queue * mad_queue
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | union ib_mad_recv_buf::(anonymous at ../include/rdma/ib_mad.h:609:2)
         0 |   struct ib_mad * mad
         0 |   struct opa_mad * opa_mad
           | [sizeof=4, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_recv_buf
         0 |   struct list_head list
         0 |     struct list_head * next
         4 |     struct list_head * prev
         8 |   struct ib_grh * grh
        12 |   union ib_mad_recv_buf::(anonymous at ../include/rdma/ib_mad.h:609:2) 
        12 |     struct ib_mad * mad
        12 |     struct opa_mad * opa_mad
           | [sizeof=16, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_recv_wc
         0 |   struct ib_wc * wc
         4 |   struct ib_mad_recv_buf recv_buf
         4 |     struct list_head list
         4 |       struct list_head * next
         8 |       struct list_head * prev
        12 |     struct ib_grh * grh
        16 |     union ib_mad_recv_buf::(anonymous at ../include/rdma/ib_mad.h:609:2) 
        16 |       struct ib_mad * mad
        16 |       struct opa_mad * opa_mad
        20 |   struct list_head rmpp_list
        20 |     struct list_head * next
        24 |     struct list_head * prev
        28 |   int mad_len
        32 |   size_t mad_seg_size
           | [sizeof=36, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_private_header
         0 |   struct ib_mad_list_head mad_list
         0 |     struct list_head list
         0 |       struct list_head * next
         4 |       struct list_head * prev
         8 |     struct ib_cqe cqe
         8 |       void (*)(struct ib_cq *, struct ib_wc *) done
        12 |     struct ib_mad_queue * mad_queue
        16 |   struct ib_mad_recv_wc recv_wc
        16 |     struct ib_wc * wc
        20 |     struct ib_mad_recv_buf recv_buf
        20 |       struct list_head list
        20 |         struct list_head * next
        24 |         struct list_head * prev
        28 |       struct ib_grh * grh
        32 |       union ib_mad_recv_buf::(anonymous at ../include/rdma/ib_mad.h:609:2) 
        32 |         struct ib_mad * mad
        32 |         struct opa_mad * opa_mad
        36 |     struct list_head rmpp_list
        36 |       struct list_head * next
        40 |       struct list_head * prev
        44 |     int mad_len
        48 |     size_t mad_seg_size
        52 |   struct ib_wc wc
        52 |     union ib_wc::(anonymous at ../include/rdma/ib_verbs.h:1015:2) 
        52 |       u64 wr_id
        52 |       struct ib_cqe * wr_cqe
        60 |     enum ib_wc_status status
        64 |     enum ib_wc_opcode opcode
        68 |     u32 vendor_err
        72 |     u32 byte_len
        76 |     struct ib_qp * qp
        80 |     union ib_wc::(unnamed at ../include/rdma/ib_verbs.h:1024:2) ex
        80 |       __be32 imm_data
        80 |       u32 invalidate_rkey
        84 |     u32 src_qp
        88 |     u32 slid
        92 |     int wc_flags
        96 |     u16 pkey_index
        98 |     u8 sl
        99 |     u8 dlid_path_bits
       100 |     u32 port_num
       104 |     u8[6] smac
       110 |     u16 vlan_id
       112 |     u8 network_hdr_type
       116 |   u64 mapping
           | [sizeof=124, align=1]

*** Dumping AST Record Layout
         0 | struct ib_mad_agent
         0 |   struct ib_device * device
         4 |   struct ib_qp * qp
         8 |   ib_mad_recv_handler recv_handler
        12 |   ib_mad_send_handler send_handler
        16 |   void * context
        20 |   u32 hi_tid
        24 |   u32 flags
        28 |   void * security
        32 |   struct list_head mad_agent_sec_list
        32 |     struct list_head * next
        36 |     struct list_head * prev
        40 |   u8 port_num
        41 |   u8 rmpp_version
        42 |   bool smp_allowed
           | [sizeof=44, align=4]

*** Dumping AST Record Layout
         0 | struct ib_sge
         0 |   u64 addr
         8 |   u32 length
        12 |   u32 lkey
           | [sizeof=16, align=8]

*** Dumping AST Record Layout
         0 | struct ib_mad_mgmt_version_table
         0 |   struct ib_mad_mgmt_class_table * class
         4 |   struct ib_mad_mgmt_vendor_class_table * vendor
           | [sizeof=8, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_queue
         0 |   struct spinlock lock
         0 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         0 |       struct raw_spinlock rlock
         0 |         arch_spinlock_t raw_lock
         0 |           volatile unsigned int slock
         4 |         unsigned int magic
         8 |         unsigned int owner_cpu
        12 |         void * owner
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
         0 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         0 |         u8[16] __padding
        16 |         struct lockdep_map dep_map
        16 |           struct lock_class_key * key
        20 |           struct lock_class *[2] class_cache
        28 |           const char * name
        32 |           u8 wait_type_outer
        33 |           u8 wait_type_inner
        34 |           u8 lock_type
        36 |           int cpu
        40 |           unsigned long ip
        44 |   struct list_head list
        44 |     struct list_head * next
        48 |     struct list_head * prev
        52 |   int count
        56 |   int max_active
        60 |   struct ib_mad_qp_info * qp_info
           | [sizeof=64, align=4]

*** Dumping AST Record Layout
         0 | struct ib_mad_qp_info
         0 |   struct ib_mad_port_private * port_priv
         4 |   struct ib_qp * qp
         8 |   struct ib_mad_queue send_queue
         8 |     struct spinlock lock
         8 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
         8 |         struct raw_spinlock rlock
         8 |           arch_spinlock_t raw_lock
         8 |             volatile unsigned int slock
        12 |           unsigned int magic
        16 |           unsigned int owner_cpu
        20 |           void * owner
        24 |           struct lockdep_map dep_map
        24 |             struct lock_class_key * key
        28 |             struct lock_class *[2] class_cache
        36 |             const char * name
        40 |             u8 wait_type_outer
        41 |             u8 wait_type_inner
        42 |             u8 lock_type
        44 |             int cpu
        48 |             unsigned long ip
         8 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
         8 |           u8[16] __padding
        24 |           struct lockdep_map dep_map
        24 |             struct lock_class_key * key
        28 |             struct lock_class *[2] class_cache
        36 |             const char * name
        40 |             u8 wait_type_outer
        41 |             u8 wait_type_inner
        42 |             u8 lock_type
        44 |             int cpu
        48 |             unsigned long ip
        52 |     struct list_head list
        52 |       struct list_head * next
        56 |       struct list_head * prev
        60 |     int count
        64 |     int max_active
        68 |     struct ib_mad_qp_info * qp_info
        72 |   struct ib_mad_queue recv_queue
        72 |     struct spinlock lock
        72 |       union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
        72 |         struct raw_spinlock rlock
        72 |           arch_spinlock_t raw_lock
        72 |             volatile unsigned int slock
        76 |           unsigned int magic
        80 |           unsigned int owner_cpu
        84 |           void * owner
        88 |           struct lockdep_map dep_map
        88 |             struct lock_class_key * key
        92 |             struct lock_class *[2] class_cache
       100 |             const char * name
       104 |             u8 wait_type_outer
       105 |             u8 wait_type_inner
       106 |             u8 lock_type
       108 |             int cpu
       112 |             unsigned long ip
        72 |         struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
        72 |           u8[16] __padding
        88 |           struct lockdep_map dep_map
        88 |             struct lock_class_key * key
        92 |             struct lock_class *[2] class_cache
       100 |             const char * name
       104 |             u8 wait_type_outer
       105 |             u8 wait_type_inner
       106 |             u8 lock_type
       108 |             int cpu
       112 |             unsigned long ip
       116 |     struct list_head list
       116 |       struct list_head * next
       120 |       struct list_head * prev
       124 |     int count
       128 |     int max_active
       132 |     struct ib_mad_qp_info * qp_info
       136 |   struct list_head overflow_list
       136 |     struct list_head * next
       140 |     struct list_head * prev
       144 |   struct spinlock snoop_lock
       144 |     union spinlock::(anonymous at ../include/linux/spinlock_types.h:18:2) 
       144 |       struct raw_spinlock rlock
       144 |         arch_spinlock_t raw_lock
       144 |           volatile unsigned int slock
       148 |         unsigned int magic
       152 |         unsigned int owner_cpu
       156 |         void * owner
       160 |         struct lockdep_map dep_map
       160 |           struct lock_class_key * key
       164 |           struct lock_class *[2] class_cache
       172 |           const char * name
       176 |           u8 wait_type_outer
       177 |           u8 wait_type_inner
       178 |           u8 lock_type
       180 |           int cpu
       184 |           unsigned long ip
       144 |       struct spinlock::(anonymous at ../include/linux/spinlock_types.h:23:3) 
       144 |         u8[16] __padding
       160 |         struct lockdep_map dep_map
       160 |           struct lock_class_key * key
       164 |           struct lock_class *[2] class_cache
       172 |           const char * name
       176 |           u8 wait_type_outer
       177 |           u8 wait_type_inner
       178 |           u8 lock_type
       180 |           int cpu
       184 |           unsigned long ip
       188 |   struct ib_mad_snoop_private ** snoop_table
       192 |   int snoop_table_size
       196 |   atomic_t snoop_count
       196 |     int counter
           | [sizeof=200, align=4]

*** Dumping AST Record Layout
         0 | struct ib_qp_attr
         0 |   enum ib_qp_state qp_state
         4 |   enum ib_qp_state cur_qp_state
         8 |   enum ib_mtu path_mtu
        12 |   enum ib_mig_state path_mig_state
        16 |   u32 qkey
        20 |   u32 rq_psn
        24 |   u32 sq_psn
        28 |   u32 dest_qp_num
        32 |   int qp_access_flags
        36 |   struct ib_qp_cap cap
        36 |     u32 max_send_wr
        40 |     u32 max_recv_wr
        44 |     u32 max_send_sge
        48 |     u32 max_recv_sge
        52 |     u32 max_inline_data
        56 |     u32 max_rdma_ctxs
        64 |   struct rdma_ah_attr ah_attr
        64 |     struct ib_global_route grh
        64 |       const struct ib_gid_attr * sgid_attr
        72 |       union ib_gid dgid
        72 |         u8[16] raw
        72 |         struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
        72 |           __be64 subnet_prefix
        80 |           __be64 interface_id
        88 |       u32 flow_label
        92 |       u8 sgid_index
        93 |       u8 hop_limit
        94 |       u8 traffic_class
        96 |     u8 sl
        97 |     u8 static_rate
       100 |     u32 port_num
       104 |     u8 ah_flags
       108 |     enum rdma_ah_attr_type type
       112 |     union rdma_ah_attr::(anonymous at ../include/rdma/ib_verbs.h:948:2) 
       112 |       struct ib_ah_attr ib
       112 |         u16 dlid
        114 |         u8 src_path_bits
       112 |       struct roce_ah_attr roce
       112 |         u8[6] dmac
       112 |       struct opa_ah_attr opa
       112 |         u32 dlid
       116 |         u8 src_path_bits
       117 |         bool make_grd
       120 |   struct rdma_ah_attr alt_ah_attr
       120 |     struct ib_global_route grh
       120 |       const struct ib_gid_attr * sgid_attr
       128 |       union ib_gid dgid
       128 |         u8[16] raw
       128 |         struct ib_gid::(unnamed at ../include/rdma/ib_verbs.h:135:2) global
       128 |           __be64 subnet_prefix
       136 |           __be64 interface_id
       144 |       u32 flow_label
       148 |       u8 sgid_index
       149 |       u8 hop_limit
       150 |       u8 traffic_class
       152 |     u8 sl
       153 |     u8 static_rate
       156 |     u32 port_num
       160 |     u8 ah_flags
       164 |     enum rdma_ah_attr_type type
       168 |     union rdma_ah_attr::(anonymous at ../include/rdma/ib_verbs.h:948:2) 
       168 |       struct ib_ah_attr ib
       168 |         u16 dlid
       170 |         u8 src_path_bits
       168 |       struct roce_ah_attr roce
       168 |         u8[6] dmac
       168 |       struct opa_ah_attr opa
       168 |         u32 dlid
       172 |         u8 src_path_bits
       173 |         bool make_grd
       176 |   u16 pkey_index
       178 |   u16 alt_pkey_index
       180 |   u8 en_sqd_async_notify
       181 |   u8 sq_draining
       182 |   u8 max_rd_atomic
       183 |   u8 max_dest_rd_atomic
       184 |   u8 min_rnr_timer
       188 |   u32 port_num
       192 |   u8 timeout
       193 |   u8 retry_cnt
       194 |   u8 rnr_retry
       196 |   u32 alt_port_num
       200 |   u8 alt_timeout
       204 |   u32 rate_limit
       208 |   struct net_device * xmit_slave
           | [sizeof=216, align=8]

*** Dumping AST Record Layout
         0 | struct rdma_umap_priv
         0 |   struct vm_area_struct * vma
         4 |   struct list_head list
         4 |     struct list_head * next
         8 |     struct list_head * prev
        12 |   struct rdma_user_mmap_entry * entry
           | [sizeof=16, align=4]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
  2024-08-14  3:27               ` Brian Cain
@ 2024-08-14 16:30                 ` Steven Walk
  0 siblings, 0 replies; 10+ messages in thread
From: Steven Walk @ 2024-08-14 16:30 UTC (permalink / raw)
  To: Brian Cain, Nathan Chancellor, Gustavo A. R. Silva,
	Steven Walk (QUIC)
  Cc: kernel test robot, Gustavo A. R. Silva, llvm@lists.linux.dev,
	oe-kbuild-all@lists.linux.dev, LKML,
	linux-hexagon@vger.kernel.org, Sid Manning, Sundeep Kushwaha

Hello Brian, et al.

This is not related to clang-20.

The compiler is asserting because the expression does not account for the
alignment of the 'attrs' member.  This can be seen in the output when
-fdump-record-layouts is passed to the compiler (and the assert commented
out).  The union in question is:

struct uverbs_attr_bundle {
   union { <union entries including uverbs_attr_bundle_hdr> }
   struct uverbs_attr attrs[];
};

static_assert(
   __builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
   "struct member likely outside of struct_group_tagged()");
note: expression evaluates to '56 == 52'

Regards,

Steve Walk


Evidence below.

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle
         0 |   union uverbs_attr_bundle::(anonymous at noAssert.c:94178:2)
         0 |     struct uverbs_attr_bundle::(anonymous at noAssert.c:94178:10)
         0 |       struct ib_udata driver_udata
         0 |         const void * inbuf
         4 |         void * outbuf
         8 |         size_t inlen
        12 |         size_t outlen
        16 |       struct ib_udata ucore
        16 |         const void * inbuf
        20 |         void * outbuf
        24 |         size_t inlen
        28 |         size_t outlen
        32 |       struct ib_uverbs_file * ufile
        36 |       struct ib_ucontext * context
        40 |       struct ib_uobject * uobject
        44 |       unsigned long[2] attr_present
         0 |     struct uverbs_attr_bundle_hdr hdr
         0 |       struct ib_udata driver_udata
         0 |         const void * inbuf
         4 |         void * outbuf
         8 |         size_t inlen
        12 |         size_t outlen
        16 |       struct ib_udata ucore
        16 |         const void * inbuf
        20 |         void * outbuf
        24 |         size_t inlen
        28 |         size_t outlen
        32 |       struct ib_uverbs_file * ufile
        36 |       struct ib_ucontext * context
        40 |       struct ib_uobject * uobject
        44 |       unsigned long[2] attr_present
        56 |   struct uverbs_attr[] attrs //<<--- Field used in _Static_assert
           | [sizeof=56, align=8]

*** Dumping AST Record Layout
         0 | union uverbs_attr::(anonymous at noAssert.c:94169:2)
         0 |   struct uverbs_ptr_attr ptr_attr
         0 |     union uverbs_ptr_attr::(anonymous at noAssert.c:94149:2)
         0 |       void * ptr
         0 |       u64 data
         8 |     u16 len
        10 |     u16 uattr_idx
        12 |     u8 enum_id
         0 |   struct uverbs_obj_attr obj_attr
         0 |     struct ib_uobject * uobject
         4 |     const struct uverbs_api_attr * attr_elm
         0 |   struct uverbs_objs_arr_attr objs_arr_attr
         0 |     struct ib_uobject ** uobjects
         4 |     u16 len
           | [sizeof=16, align=8] //<<--- Required alignment

This is the layout of the union that trips the assertion.

*** Dumping AST Record Layout
         0 | union uverbs_attr_bundle::(anonymous at noAssert.c:94178:2)
         0 |   struct uverbs_attr_bundle::(anonymous at noAssert.c:94178:10)
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
         0 |   struct uverbs_attr_bundle_hdr hdr
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
           | [sizeof=52, align=4]

-----Original Message-----
From: Brian Cain <bcain@quicinc.com> 
Sent: Tuesday, August 13, 2024 10:27 PM
To: Brian Cain <bcain@quicinc.com>; Nathan Chancellor <nathan@kernel.org>; Gustavo A. R. Silva <gustavo@embeddedor.com>; Steven Walk (QUIC) <quic_walk@quicinc.com>
Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva <gustavoars@kernel.org>; llvm@lists.linux.dev; oe-kbuild-all@lists.linux.dev; LKML <linux-kernel@vger.kernel.org>; linux-hexagon@vger.kernel.org; Sid Manning <sidneym@quicinc.com>; Sundeep Kushwaha <sundeepk@quicinc.com>
Subject: RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...



> -----Original Message-----
> From: Brian Cain <bcain@quicinc.com>
> Sent: Tuesday, August 6, 2024 10:36 AM
> To: Nathan Chancellor <nathan@kernel.org>; Gustavo A. R. Silva 
> <gustavo@embeddedor.com>
> Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva 
> <gustavoars@kernel.org>; llvm@lists.linux.dev; 
> oe-kbuild-all@lists.linux.dev; LKML <linux-kernel@vger.kernel.org>; 
> linux-hexagon@vger.kernel.org; Sid Manning <sidneym@quicinc.com>; 
> Sundeep Kushwaha <sundeepk@quicinc.com>
> Subject: RE: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18]
> include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due 
> to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) 
> == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
> 
> WARNING: This email originated from outside of Qualcomm. Please be 
> wary of any links or attachments, and do not enable macros.
> 
> > -----Original Message-----
> > From: Nathan Chancellor <nathan@kernel.org>
> > Sent: Friday, August 2, 2024 5:20 PM
> > To: Gustavo A. R. Silva <gustavo@embeddedor.com>
> > Cc: kernel test robot <lkp@intel.com>; Gustavo A. R. Silva 
> > <gustavoars@kernel.org>; llvm@lists.linux.dev; 
> > oe-kbuild-all@lists.linux.dev; LKML <linux-kernel@vger.kernel.org>; 
> > Brian Cain <bcain@quicinc.com>; linux- hexagon@vger.kernel.org
> > Subject: Re: [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18]
> > include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed 
> > due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, 
> > attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb...
> >
> > WARNING: This email originated from outside of Qualcomm. Please be 
> > wary
> of
> > any links or attachments, and do not enable macros.
> >
> > On Thu, Aug 01, 2024 at 04:35:59PM -0600, Gustavo A. R. Silva wrote:
> > >
> > >
> > > On 01/08/24 16:14, Nathan Chancellor wrote:
> > > > On Thu, Aug 01, 2024 at 02:17:50PM -0600, Gustavo A. R. Silva wrote:
> > > > >
> > > > >
> > > > > On 01/08/24 13:08, Nathan Chancellor wrote:
> > > > > > On Thu, Aug 01, 2024 at 06:47:58AM -0600, Gustavo A. R. Silva wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 01/08/24 05:35, kernel test robot wrote:
> > > > > > > > tree:
> > https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git
> > testing/wfamnae-next20240729-cbc-2
> > > > > > > > head:   df15c862c1b93b6e1f6c90b0d7971f7a6ad66751
> > > > > > > > commit: e7cd9f429a852fb7e37a706c7d08fc36e7863e06 [11/18]
> > RDMA/uverbs: Use static_assert() to check struct sizes
> > > > > > > > config: hexagon-randconfig-001-20240801
> > (https://download.01.org/0day-
> ci/archive/20240801/202408011956.wscyBwq6-
> > lkp@intel.com/config)
> > > > > > > > compiler: clang version 20.0.0git 
> > > > > > > > (https://github.com/llvm/llvm-
> > project 430b90f04533b099d788db2668176038be38c53b)
> > > > > > >
> > > > > > >
> > > > > > > Clang 20.0.0?? (thinkingface)
> > > > > >
> > > > > > Indeed, Clang 19 branched and main is now 20 :)
> > > > > >
> > > > > > https://github.com/llvm/llvm-
> > project/commit/8f701b5df0adb3a2960d78ca2ad9cf53f39ba2fe
> > > > >
> > > > > Yeah, but is that a stable release?
> > > >
> > > > No, but the Intel folks have tested tip of tree LLVM against the 
> > > > kernel for us for a few years now to try and catch issues such as this.
> > >
> > > Oh, I see, fine. :)
> > >
> > > >
> > > > > BTW, I don't see GCC reporting the same problem below:
> > > >
> > > > Hexagon does not have a GCC backend anymore so it is not going 
> > > > to be possible to do an exact A/B comparison with this configuration but...
> > > >
> > > > > > > > > > include/rdma/uverbs_ioctl.h:643:15: error: static 
> > > > > > > > > > assertion
> failed
> > due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, 
> > attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member 
> > likely outside of
> > struct_group_tagged()
> > > > > > > >         643 | static_assert(offsetof(struct 
> > > > > > > > uverbs_attr_bundle, attrs)
> ==
> > sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >             |
> >
> ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >         644 |               "struct member likely outside of
> > struct_group_tagged()");
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/stddef.h:16:32: note: expanded from 
> > > > > > > > macro
> > 'offsetof'
> > > > > > > >          16 | #define offsetof(TYPE, MEMBER)
> __builtin_offsetof(TYPE,
> > MEMBER)
> > > > > > > >             |                                 ^
> > > > > > > >       include/linux/build_bug.h:77:50: note: expanded 
> > > > > > > > from macro
> > 'static_assert'
> > > > > > > >          77 | #define static_assert(expr, ...) 
> > > > > > > > __static_assert(expr,
> > ##__VA_ARGS__, #expr)
> > > > > > > >             |
> > ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:78:56: note: expanded 
> > > > > > > > from macro
> > '__static_assert'
> > > > > > > >          78 | #define __static_assert(expr, msg, ...) 
> > > > > > > > _Static_assert(expr,
> > msg)
> > > > > > > >             |                                                        ^~~~
> > > > > > > >       include/rdma/uverbs_ioctl.h:643:58: note: 
> > > > > > > > expression evaluates
> > to '56 == 52'
> > > >
> > > > This seems to give some indication that perhaps there may be 
> > > > some architecture specific here with padding maybe? I seem to 
> > > > recall ARM OABI having something similar. Adding the Hexagon 
> > > > folks/list to get some more clarification. Full warning and context:
> > > >
> > > > https://lore.kernel.org/202408011956.wscyBwq6-lkp@intel.com/
> > > >
> 
> There might be hexagon-specific padding requirements, but not ones 
> that I've stumbled across before.  I've added Sundeep from the 
> compiler team who may be able to help.

Steve suggested I try dumping the record layouts using clang's "-fdump-record-layouts".  I did so using the clang 18.1.8 binary from https://github.com/llvm/llvm-project/releases/tag/llvmorg-18.1.8 -- I thought it was reasonable to use this older release because it still generates the static assertion.  But I can repeat it with clang-20 built from the bot's cited commit if preferred.

Steve - can you give any advice about the compiler's behavior wrt this struct layout and the assertion?

I've attached the unabridged output from clang with "-fdump-record-layouts".  Here's an excerpt:

*** Dumping AST Record Layout
         0 | struct uverbs_attr_bundle_hdr
         0 |   struct ib_udata driver_udata
         0 |     const void * inbuf
         4 |     void * outbuf
         8 |     size_t inlen
        12 |     size_t outlen
        16 |   struct ib_udata ucore
        16 |     const void * inbuf
        20 |     void * outbuf
        24 |     size_t inlen
        28 |     size_t outlen
        32 |   struct ib_uverbs_file * ufile
        36 |   struct ib_ucontext * context
        40 |   struct ib_uobject * uobject
        44 |   unsigned long[2] attr_present
           | [sizeof=52, align=4]

*** Dumping AST Record Layout
         0 | union uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2)
         0 |   struct uverbs_attr_bundle::(anonymous at ../include/rdma/uverbs_ioctl.h:633:2) 
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
         0 |   struct uverbs_attr_bundle_hdr hdr
         0 |     struct ib_udata driver_udata
         0 |       const void * inbuf
         4 |       void * outbuf
         8 |       size_t inlen
        12 |       size_t outlen
        16 |     struct ib_udata ucore
        16 |       const void * inbuf
        20 |       void * outbuf
        24 |       size_t inlen
        28 |       size_t outlen
        32 |     struct ib_uverbs_file * ufile
        36 |     struct ib_ucontext * context
        40 |     struct ib_uobject * uobject
        44 |     unsigned long[2] attr_present
           | [sizeof=52, align=4]

In file included from ../drivers/infiniband/core/ib_core_uverbs.c:8:
In file included from ../drivers/infiniband/core/uverbs.h:49:
In file included from ../include/rdma/uverbs_std_types.h:10:
../include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged()
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
   16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
      |                                 ^
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~
../include/rdma/uverbs_ioctl.h:643:58: note: expression evaluates to '56 == 52'
  643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  644 |               "struct member likely outside of struct_group_tagged()");
      |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
   77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
      |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
   78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
      |                                                        ^~~~

I've also attached the "clang -cc1" invocation and preprocessed C output for reference.

> > > > The problematic section preprocessed since sometimes the macros 
> > > > obfuscate things:
> > > >
> > > > struct uverbs_attr_bundle {
> > > >          union {
> > > >                  struct {
> > > >                          struct ib_udata driver_udata;
> > > >                          struct ib_udata ucore;
> > > >                          struct ib_uverbs_file *ufile;
> > > >                          struct ib_ucontext *context;
> > > >                          struct ib_uobject *uobject;
> > > >                          unsigned long
> attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> > +
> > > >                                                       ((sizeof(long) * 8)) - 1) /
> > > >                                                      ((sizeof(long) * 8)))];
> > > >                  };
> > > >                  struct uverbs_attr_bundle_hdr {
> > > >                          struct ib_udata driver_udata;
> > > >                          struct ib_udata ucore;
> > > >                          struct ib_uverbs_file *ufile;
> > > >                          struct ib_ucontext *context;
> > > >                          struct ib_uobject *uobject;
> > > >                          unsigned long
> attr_present[(((UVERBS_API_ATTR_BKEY_LEN)
> > +
> > > >                                                       ((sizeof(long) * 8)) - 1) /
> > > >                                                      ((sizeof(long) * 8)))];
> > > >                  } hdr;
> > > >          };
> > > >
> > > >          struct uverbs_attr attrs[]; }; 
> > > > _Static_assert(__builtin_offsetof(struct uverbs_attr_bundle, attrs) ==
> > > >                         sizeof(struct uverbs_attr_bundle_hdr),
> > > >                 "struct member likely outside of 
> > > > struct_group_tagged()");
> > > >
> > > > FWIW, I see this with all versions of Clang that the kernel 
> > > > supports with this configuration.
> > >
> > > I don't have access to a Clang compiler right now; I wonder if you 
> > > could help me get the output of this command:
> > >
> > > pahole -C uverbs_attr_bundle drivers/infiniband/core/rdma_core.o
> >
> > We disabled CONFIG_DEBUG_INFO_BTF for Hexagon because elfutils does
> not
> > support Hexagon relocations but this is built-in for this 
> > configuration so I removed that limitation and ended up with:
> >
> > $ pahole -C uverbs_attr_bundle vmlinux struct uverbs_attr_bundle {
> >         union {
> >                 struct {
> >                         struct ib_udata    driver_udata;         /*     0    16 */
> >                         struct ib_udata    ucore;                /*    16    16 */
> >                         struct ib_uverbs_file * ufile;           /*    32     4 */
> >                         struct ib_ucontext * context;            /*    36     4 */
> >                         struct ib_uobject * uobject;             /*    40     4 */
> >                         unsigned long      attr_present[2];      /*    44     8 */
> >                 };                                               /*     0    52 */
> >                 struct uverbs_attr_bundle_hdr hdr;               /*     0    52 */
> >         };                                                       /*     0    52 */
> >
> >         /* XXX 4 bytes hole, try to pack */
> >
> >         struct uverbs_attr         attrs[];              /*    56     0 */
> >
> >         /* size: 56, cachelines: 1, members: 2 */
> >         /* sum members: 52, holes: 1, sum holes: 4 */
> >         /* last cacheline: 56 bytes */
> > };
> >
> > If you want any other information or want me to test anything, I am 
> > more than happy to do so.
> >
> > Cheers,
> > Nathan
> >
> > > > > > > >         643 | static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >         644 |               "struct member likely outside of struct_group_tagged()");
> > > > > > > >             |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:77:50: note: expanded from macro 'static_assert'
> > > > > > > >          77 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> > > > > > > >             |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > > > >       include/linux/build_bug.h:78:56: note: expanded from macro '__static_assert'
> > > > > > > >          78 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> > > > > > > >             |                                                        ^~~~
> > > > > > > >       7 warnings and 1 error generated.
> > > > > > > >
> > > > > > > >
> > > > > > > > vim +643 include/rdma/uverbs_ioctl.h
> > > > > > > >
> > > > > > > >       630
> > > > > > > >       631   struct uverbs_attr_bundle {
> > > > > > > >       632           /* New members MUST be added within the struct_group() macro below. */
> > > > > > > >       633           struct_group_tagged(uverbs_attr_bundle_hdr, hdr,
> > > > > > > >       634                   struct ib_udata driver_udata;
> > > > > > > >       635                   struct ib_udata ucore;
> > > > > > > >       636                   struct ib_uverbs_file *ufile;
> > > > > > > >       637                   struct ib_ucontext *context;
> > > > > > > >       638                   struct ib_uobject *uobject;
> > > > > > > >       639           DECLARE_BITMAP(attr_present, UVERBS_API_ATTR_BKEY_LEN);
> > > > > > > >       640           );
> > > > > > > >       641           struct uverbs_attr attrs[];
> > > > > > > >       642   };
> > > > > > > >     > 643   static_assert(offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr),
> > > > > > > >       644                 "struct member likely outside of struct_group_tagged()");
> > > > > > > >       645
> > > > > > > >
> > > > > > >
> > > > >
> > > > > Thanks
> > > > > --
> > > > > Gustavo



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2024-08-14 16:31 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-01 11:35 [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct member likely outside of struct_group_tagged() kernel test robot
2024-08-01 12:47 ` Gustavo A. R. Silva
2024-08-01 19:08   ` Nathan Chancellor
2024-08-01 20:17     ` Gustavo A. R. Silva
2024-08-01 22:14       ` Nathan Chancellor
2024-08-01 22:35         ` Gustavo A. R. Silva
2024-08-02 22:19           ` Nathan Chancellor
2024-08-06 15:36             ` [gustavoars:testing/wfamnae-next20240729-cbc-2 11/18] include/rdma/uverbs_ioctl.h:643:15: error: static assertion failed due to requirement '__builtin_offsetof(struct uverbs_attr_bundle, attrs) == sizeof(struct uverbs_attr_bundle_hdr)': struct memb Brian Cain
2024-08-14  3:27               ` Brian Cain
2024-08-14 16:30                 ` Steven Walk

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox